From 08d2820e2e00305e092e56d1829244409da475a0 Mon Sep 17 00:00:00 2001
From: EMMOntoPy Developers

EMMOntoPy is a library for representing and working with ontologies in Python. It is a Python package based on the excellent Owlready2, which provides a natural and intuitive representation of ontologies in Python. EMMOntoPy extends Owlready2 with additional functionality, like accessing entities by label, reasoning with FaCT++ and parsing logical expressions in Manchester syntax. It also includes a set of tools, like creating an ontology from an Excel sheet, generating reference documentation of ontologies and visualising ontologies graphically. EMMOntoPy is freely available on GitHub and on PyPI under the permissive open source BSD 3-Clause license. EMMOntoPy was originally developed to work effectively with the Elemental Multiperspective Material Ontology (EMMO) and EMMO-based domain ontologies. It now has two sub-packages, ontopy and emmopy.

Owlready2, and thereby also EMMOntoPy, represents OWL classes and individuals in Python as classes and instances. OWL properties are represented as Python attributes. Hence, it provides a dot notation for representing ontologies as valid Python code. The notation is simple and easy to understand and write for people with some knowledge of OWL and Python. Since Python is a versatile programming language, Owlready2 does not only allow for representation of OWL ontologies, but also for working with them programmatically, including interpretation, modification and generation. Some of the additional features provided by EMMOntoPy are listed below:

- Access entities by label: In Owlready2, ontological entities like classes, properties and individuals are accessed by the name-part of their IRI (i.e. everything that follows the final slash or hash in the IRI). This is very inconvenient for ontologies like EMMO or Wikidata, which identify ontological entities by long numerical names. For instance, the name-part of the IRI of the Atom class in EMMO is 'EMMO_eb77076b_a104_42ac_a065_798b2d2809ad', which is neither human readable nor easy to write. EMMOntoPy allows accessing the entity via its label (or rather skos:prefLabel) 'Atom', which is much more user friendly (see the sketch below).
- Turtle support: The Terse RDF Triple Language (Turtle) is a common syntax and file format for representing ontologies. EMMOntoPy adds support for reading and writing ontologies in turtle format.
- FaCT++ reasoning: Owlready2 only supports reasoning with HermiT and Pellet. EMMOntoPy adds support for the fast tableaux-based [FaCT++ reasoner] for description logics.
- Manchester syntax: Even though the Owlready2 dot notation is clear and easy to read and understand for people who know Python, it is a new syntax that may look foreign to people who are used to working with Protégé. EMMOntoPy provides support to parse and serialise logical expressions in Manchester syntax, making it possible to create tools that are much more familiar to people used to working with Protégé.
- Visualisation: EMMOntoPy provides a Python module for graphical visualisation of ontologies. This module can represent not only the taxonomy, but also restrictions and logical constructs. The classes to include in the graph can either be specified manually or inferred from the taxonomy (like all subclasses of a given class that are not a subclass of any class in a set of other classes).
- Command-line tools: EMMOntoPy includes a small set of command-line tools implemented as Python scripts.

The Owlready2 documentation is a good starting point. The EMMOntoPy package also has its own dedicated documentation.
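As a quick illustration of the label-based access mentioned above, the following is a minimal sketch using the emmopy.get_emmo helper shown later in this documentation; it assumes that EMMO can be downloaded and that the Atom class is present in the loaded version:

    from emmopy import get_emmo

    # Load the pre-inferred EMMO ontology (typically downloaded on first use).
    emmo = get_emmo()

    # Entities are accessed by their skos:prefLabel instead of the numerical
    # name-part of their IRI (e.g. EMMO_eb77076b_a104_42ac_a065_798b2d2809ad).
    atom = emmo.Atom
    print(atom.iri)    # the full IRI of the Atom class
    print(atom.is_a)   # its superclasses and restrictions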
This includes a few examples and demos:

- demo/vertical shows an example of how EMMO may be used to achieve vertical interoperability. The file define-ontology.py provides a good example of how an EMMO-based application ontology can be defined in Python (a minimal sketch follows below).
- demo/horizontal shows an example of how EMMO may be used to achieve horizontal interoperability. This demo also shows how you can use EMMOntoPy to represent your ontology with the low-level metadata framework DLite. In addition to achieving interoperability, as shown in the demo, DLite also allows you to automatically generate C or Fortran code based on your ontology.
- examples/emmodoc shows how the documentation of EMMO is generated using the ontodoc tool.

Dependencies: pdfLaTeX or XeLaTeX, and Java (needed for reasoning), plus optional Python packages. See docker-instructions.md for how to build a docker image.

EMMOntoPy is maintained by EMMC-ASBL. It has mainly been developed by SINTEF. The EMMC-ASBL organisation takes on the efforts of continuing and expanding on the efforts of the CSA. Contributing projects:

- MarketPlace; Grant Agreement No: 760173
- OntoTrans; Grant Agreement No: 862136
- BIG-MAP; Grant Agreement No: 957189
- OpenModel; Grant Agreement No: 953167

(Auto-generated changelog: each release repeats the headings Full Changelog, Implemented enhancements, Fixed bugs, Closed issues and Merged pull requests; the individual entries are listed in the CHANGELOG section of the documentation below.)
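As referenced from the demo/vertical bullet above, the following is a minimal, hypothetical sketch of how an EMMO-based application ontology can be defined in Python with EMMOntoPy; the ontology IRI, the AluminiumAlloy class and the choice of emmo.Matter as superclass are illustrative assumptions and are not taken from define-ontology.py:

    from emmopy import get_emmo
    from ontopy import get_ontology

    emmo = get_emmo()  # load the pre-inferred EMMO ontology

    # Create a new, empty application ontology with a hypothetical IRI.
    onto = get_ontology("http://example.com/demo-onto#")
    onto.imported_ontologies.append(emmo)

    with onto:
        # New classes are ordinary Python classes that subclass existing EMMO
        # classes, which are accessed by their skos:prefLabel.
        class AluminiumAlloy(emmo.Matter):  # hypothetical application class
            pass

    # EMMOntoPy adds turtle support on top of Owlready2's save().
    onto.save("demo-onto.ttl", format="turtle")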
* This Changelog was automatically generated by github_changelog_generator

Copyright 2019-2022 SINTEF

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

It is recommended to instead use the FaCT++ reasoner (now the default).

Docker usage: in a unix terminal (Linux) or in PowerShell (Windows 10). To install the EMMOntoPy package inside the container, allow for mounting of C: in Docker (as administrator): Docker (right-click in system tray) -> Settings -> Shared Drives -> tick off C -> Apply, then run the following command in PowerShell.

Content:

- emmocheck: Tool for checking that ontologies conform to EMMO conventions. An example YAML configuration file can be provided with the --configfile option.
- ontoversion: Prints the version of an ontology to standard output. This script uses RDFLib and the versionIRI tag of the ontology to infer the version (see the sketch below). Warning: fails if the ontology has no versionIRI tag.
- ontograph: Tool for visualizing ontologies.
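The ontoversion approach described above (reading the versionIRI tag with RDFLib) can be illustrated with a small, hypothetical sketch; the file name is an assumption and ontoversion's actual implementation may differ:

    from rdflib import Graph
    from rdflib.namespace import OWL

    g = Graph()
    g.parse("emmo.ttl", format="turtle")  # hypothetical local copy of the ontology

    # The ontology header is expected to carry an owl:versionIRI triple.
    version_iris = list(g.objects(predicate=OWL.versionIRI))
    if not version_iris:
        raise SystemExit("Error: ontology has no versionIRI tag")

    # For a versionIRI like http://emmo.info/emmo/1.0.0-alpha2, the version
    # number is the trailing path segment.
    print(str(version_iris[0]).rstrip("/").rsplit("/", 1)[-1])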
Changelog¶
-Unreleased changes (2024-05-25)¶
+Unreleased changes (2024-05-29)¶
diff --git a/latest/LICENSE/index.html b/latest/LICENSE/index.html
index a49d6746a..3557fe84d 100644
--- a/latest/LICENSE/index.html
+++ b/latest/LICENSE/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/emmopy/emmocheck/index.html b/latest/api_reference/emmopy/emmocheck/index.html
index 9dfcfae9f..3a54f3e8e 100644
--- a/latest/api_reference/emmopy/emmocheck/index.html
+++ b/latest/api_reference/emmopy/emmocheck/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/emmopy/emmopy/index.html b/latest/api_reference/emmopy/emmopy/index.html
index fbc88dfe1..7b00ad6b3 100644
--- a/latest/api_reference/emmopy/emmopy/index.html
+++ b/latest/api_reference/emmopy/emmopy/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/colortest/index.html b/latest/api_reference/ontopy/colortest/index.html
index e4d918a1e..28583bf19 100644
--- a/latest/api_reference/ontopy/colortest/index.html
+++ b/latest/api_reference/ontopy/colortest/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/excelparser/index.html b/latest/api_reference/ontopy/excelparser/index.html
index a19295b59..166c8d03c 100644
--- a/latest/api_reference/ontopy/excelparser/index.html
+++ b/latest/api_reference/ontopy/excelparser/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/factpluspluswrapper/factppgraph/index.html b/latest/api_reference/ontopy/factpluspluswrapper/factppgraph/index.html
index fe665940d..0c0404a4f 100644
--- a/latest/api_reference/ontopy/factpluspluswrapper/factppgraph/index.html
+++ b/latest/api_reference/ontopy/factpluspluswrapper/factppgraph/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/factpluspluswrapper/owlapi_interface/index.html b/latest/api_reference/ontopy/factpluspluswrapper/owlapi_interface/index.html
index bdd4a1a63..304d55519 100644
--- a/latest/api_reference/ontopy/factpluspluswrapper/owlapi_interface/index.html
+++ b/latest/api_reference/ontopy/factpluspluswrapper/owlapi_interface/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/factpluspluswrapper/sync_factpp/index.html b/latest/api_reference/ontopy/factpluspluswrapper/sync_factpp/index.html
index 612df5485..248d12b44 100644
--- a/latest/api_reference/ontopy/factpluspluswrapper/sync_factpp/index.html
+++ b/latest/api_reference/ontopy/factpluspluswrapper/sync_factpp/index.html
@@ -18,7 +18,7 @@
-
+
diff --git a/latest/api_reference/ontopy/graph/index.html b/latest/api_reference/ontopy/graph/index.html
index 7067701a8..ee22f30da 100644
--- a/latest/api_reference/ontopy/graph/index.html
+++ b/latest/api_reference/ontopy/graph/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/manchester/index.html b/latest/api_reference/ontopy/manchester/index.html
index 83c24f42e..df79816bf 100644
--- a/latest/api_reference/ontopy/manchester/index.html
+++ b/latest/api_reference/ontopy/manchester/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/nadict/index.html b/latest/api_reference/ontopy/nadict/index.html
index 8ea5c1c66..24ab9e22b 100644
--- a/latest/api_reference/ontopy/nadict/index.html
+++ b/latest/api_reference/ontopy/nadict/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/ontodoc/index.html b/latest/api_reference/ontopy/ontodoc/index.html
index e9750ee0c..ddb98275b 100644
--- a/latest/api_reference/ontopy/ontodoc/index.html
+++ b/latest/api_reference/ontopy/ontodoc/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/ontodoc_rst/index.html b/latest/api_reference/ontopy/ontodoc_rst/index.html
index 9e61337d8..fb45b7179 100644
--- a/latest/api_reference/ontopy/ontodoc_rst/index.html
+++ b/latest/api_reference/ontopy/ontodoc_rst/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/ontology/index.html b/latest/api_reference/ontopy/ontology/index.html
index c8f98786b..7c27e406e 100644
--- a/latest/api_reference/ontopy/ontology/index.html
+++ b/latest/api_reference/ontopy/ontology/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/patch/index.html b/latest/api_reference/ontopy/patch/index.html
index bef41a97b..770beff0e 100644
--- a/latest/api_reference/ontopy/patch/index.html
+++ b/latest/api_reference/ontopy/patch/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/testutils/index.html b/latest/api_reference/ontopy/testutils/index.html
index 4640afea7..761bc63e0 100644
--- a/latest/api_reference/ontopy/testutils/index.html
+++ b/latest/api_reference/ontopy/testutils/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/api_reference/ontopy/utils/index.html b/latest/api_reference/ontopy/utils/index.html
index 159cc7bb0..1db67a467 100644
--- a/latest/api_reference/ontopy/utils/index.html
+++ b/latest/api_reference/ontopy/utils/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/demo/horizontal/index.html b/latest/demo/horizontal/index.html
index 65ba86406..6ee5343d5 100644
--- a/latest/demo/horizontal/index.html
+++ b/latest/demo/horizontal/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/demo/index.html b/latest/demo/index.html
index 0a1777342..c1f8d91d8 100644
--- a/latest/demo/index.html
+++ b/latest/demo/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/demo/vertical/index.html b/latest/demo/vertical/index.html
index de0b2890f..d4242aca8 100644
--- a/latest/demo/vertical/index.html
+++ b/latest/demo/vertical/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/developers/release-instructions/index.html b/latest/developers/release-instructions/index.html
index bb63e0ea3..0e98b9fd6 100644
--- a/latest/developers/release-instructions/index.html
+++ b/latest/developers/release-instructions/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/developers/setup/index.html b/latest/developers/setup/index.html
index e176bd5a5..2f58efbaf 100644
--- a/latest/developers/setup/index.html
+++ b/latest/developers/setup/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/developers/testing/index.html b/latest/developers/testing/index.html
index 13e380a40..af6ca4e9e 100644
--- a/latest/developers/testing/index.html
+++ b/latest/developers/testing/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/docker-instructions/index.html b/latest/docker-instructions/index.html
index 50c6d41c8..9fd2cf900 100644
--- a/latest/docker-instructions/index.html
+++ b/latest/docker-instructions/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/emmodoc/classes/index.html b/latest/examples/emmodoc/classes/index.html
index bfab125c9..c1fb8c88c 100644
--- a/latest/examples/emmodoc/classes/index.html
+++ b/latest/examples/emmodoc/classes/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/emmodoc/emmo/index.html b/latest/examples/emmodoc/emmo/index.html
index 8a5902559..a7853557f 100644
--- a/latest/examples/emmodoc/emmo/index.html
+++ b/latest/examples/emmodoc/emmo/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/emmodoc/important_concepts/index.html b/latest/examples/emmodoc/important_concepts/index.html
index 3ce9069c6..677114dbd 100644
--- a/latest/examples/emmodoc/important_concepts/index.html
+++ b/latest/examples/emmodoc/important_concepts/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/emmodoc/index.html b/latest/examples/emmodoc/index.html
index cd2d9bb8f..7ccf880e6 100644
--- a/latest/examples/emmodoc/index.html
+++ b/latest/examples/emmodoc/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/emmodoc/introduction/index.html b/latest/examples/emmodoc/introduction/index.html
index e477f8a5a..bc42b80c6 100644
--- a/latest/examples/emmodoc/introduction/index.html
+++ b/latest/examples/emmodoc/introduction/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/emmodoc/relations/index.html b/latest/examples/emmodoc/relations/index.html
index 6dacd8835..d78b1c422 100644
--- a/latest/examples/emmodoc/relations/index.html
+++ b/latest/examples/emmodoc/relations/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/jupyter-visualization/index.html b/latest/examples/jupyter-visualization/index.html
index 922eee2ac..bda7ede15 100644
--- a/latest/examples/jupyter-visualization/index.html
+++ b/latest/examples/jupyter-visualization/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/examples/ontology-from-excel/index.html b/latest/examples/ontology-from-excel/index.html
index 855520c8d..7475f0705 100644
--- a/latest/examples/ontology-from-excel/index.html
+++ b/latest/examples/ontology-from-excel/index.html
@@ -20,7 +20,7 @@
-
+
diff --git a/latest/index.html b/latest/index.html
index 0e6f01f1b..373a46f30 100644
--- a/latest/index.html
+++ b/latest/index.html
@@ -18,7 +18,7 @@
-
+
diff --git a/latest/search/search_index.json b/latest/search/search_index.json
index b8384dece..a029dd376 100644
--- a/latest/search/search_index.json
+++ b/latest/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#emmontopy","title":"EMMOntoPy","text":"
ontopy
and emmopy
, where ontopy
is a general package to work with any OWL ontology, while emmopy
provides extra features that are specific to EMMO.ontoconvert
: Converts ontologies between different file formats. It also supports some additional transformations during conversion, like running a reasoner, merging several ontological modules together (squashing), renaming IRIs, generating a catalogue file and automatically annotating entities with their source IRI. - ontograph
: Versatile tool for visualising (parts of) an ontology, utilising the visualisation features mentioned above. - ontodoc
: Documents an ontology. - excel2onto
: Generates an EMMO-based ontology from an Excel file. It is useful for domain experts with limited knowledge of ontologies who are not used to tools like Prot\u00e9g\u00e9. - ontoversion
: Prints ontology version number. - emmocheck
: A small test framework for checking the consistency of EMMO and EMMO-based domain ontologies and whether they conform to the EMMO conventions.
ttl
), and more).ontodoc
: A dedicated command line tool for this. You find it in the tools/ sub directory.Matter
in EMMO utilizing the emmopy.get_emmo
function:
"},{"location":"#documentation-and-examples","title":"Documentation and examples","text":"In [1]: from emmopy import get_emmo\n\nIn [2]: emmo = get_emmo()\n\nIn [3]: emmo.Matter\nOut[3]: physicalistic.Matter\n\nIn [4]: emmo.Matter.is_a\nOut[4]:\n[physicalistic.Physicalistic,\n physical.Physical,\n mereotopology.hasPart.some(physicalistic.Massive),\n physical.hasTemporalPart.only(physicalistic.Matter)]\n
"},{"location":"#installation","title":"Installation","text":"ontodoc
tool.
"},{"location":"#required-dependencies","title":"Required Dependencies","text":"pip install EMMOntoPy\n
"},{"location":"#optional-dependencies","title":"Optional Dependencies","text":"
ontodoc
).upgreek
LaTeX package (included in texlive-was
on RedHat-based distributions and texlive-latex-extra
on Ubuntu) for generation of pdf documentation. If your ontology contains exotic unicode characters, we recommend XeLaTeX.emmocheck
.emmocheck
.ontoversion
-tool.ontoversion
-tool.
"},{"location":"#attributions-and-credits","title":"Attributions and credits","text":"ontoconvert
may produce invalid turtle output (if your ontology contains real literals using scientific notation without a dot in the mantissa). This issue was fixed after the release of rdflib 5.0.0. Hence, install the latest rdflib from PyPI (pip install --upgrade rdflib
) or directly from the source code repository: GitHub if you need to serialise to turtle.
"},{"location":"#contributing-projects","title":"Contributing projects","text":"
"},{"location":"CHANGELOG/#v071-2024-02-29","title":"v0.7.1 (2024-02-29)","text":"
"},{"location":"CHANGELOG/#v070-2024-01-26","title":"v0.7.0 (2024-01-26)","text":"yield from
#720 (jesper-friis)
"},{"location":"CHANGELOG/#v063-2024-01-25","title":"v0.6.3 (2024-01-25)","text":"
"},{"location":"CHANGELOG/#v062-2024-01-23","title":"v0.6.2 (2024-01-23)","text":"
"},{"location":"CHANGELOG/#v061-2024-01-18","title":"v0.6.1 (2024-01-18)","text":"
"},{"location":"CHANGELOG/#v060-2023-06-19","title":"v0.6.0 (2023-06-19)","text":"
"},{"location":"CHANGELOG/#v054-2023-06-15","title":"v0.5.4 (2023-06-15)","text":"
"},{"location":"CHANGELOG/#v053-2023-06-12","title":"v0.5.3 (2023-06-12)","text":"
"},{"location":"CHANGELOG/#v052-2023-05-24","title":"v0.5.2 (2023-05-24)","text":"
"},{"location":"CHANGELOG/#v051-2023-02-07","title":"v0.5.1 (2023-02-07)","text":"is_defined
into a ThingClass property and improved its documentation. #597 (jesper-friis)
"},{"location":"CHANGELOG/#v050-2023-02-06","title":"v0.5.0 (2023-02-06)","text":"
LegacyVersion
does not exist in packaging.version
#540is_instance_of
property to be iterable #506images/material.png
#495
"},{"location":"CHANGELOG/#v040-2022-10-04","title":"v0.4.0 (2022-10-04)","text":"
bandit
failing #478
"},{"location":"CHANGELOG/#v031-2022-05-08","title":"v0.3.1 (2022-05-08)","text":"
"},{"location":"CHANGELOG/#v030-2022-05-05","title":"v0.3.0 (2022-05-05)","text":"
"},{"location":"CHANGELOG/#v020-2022-03-02","title":"v0.2.0 (2022-03-02)","text":"
pre-commit
#243Ontology
#228
rdflib
import #306get_triples()
method #280
"},{"location":"CHANGELOG/#v013-2021-10-27","title":"v0.1.3 (2021-10-27)","text":"ID!
type instead of String!
#375 (CasperWA)pre-commit
& various tools #245 (CasperWA)
collections
#236
factpluspluswrapper
folders #213mike
for versioned documentation #197
"},{"location":"CHANGELOG/#v101b-2021-07-01","title":"v1.0.1b (2021-07-01)","text":"packaging
to list of requirements #256 (CasperWA)collections.abc
when possible #240 (CasperWA)__init__.py
files for FaCT++ wrapper (again) #221 (CasperWA)
"},{"location":"CHANGELOG/#v101-2021-07-01","title":"v1.0.1 (2021-07-01)","text":"
"},{"location":"CHANGELOG/#v100-2021-03-25","title":"v1.0.0 (2021-03-25)","text":"
"},{"location":"CHANGELOG/#v100-alpha-30-2021-03-18","title":"v1.0.0-alpha-30 (2021-03-18)","text":"
"},{"location":"CHANGELOG/#v100-alpha-29-2021-03-16","title":"v1.0.0-alpha-29 (2021-03-16)","text":"
"},{"location":"CHANGELOG/#v100-alpha-28-2021-03-09","title":"v1.0.0-alpha-28 (2021-03-09)","text":"
"},{"location":"CHANGELOG/#v100-alpha-27-2021-02-27","title":"v1.0.0-alpha-27 (2021-02-27)","text":"
"},{"location":"CHANGELOG/#v100-alpha-26-2021-02-26","title":"v1.0.0-alpha-26 (2021-02-26)","text":"
"},{"location":"CHANGELOG/#v100-alpha-25-2021-01-17","title":"v1.0.0-alpha-25 (2021-01-17)","text":"
"},{"location":"CHANGELOG/#v100-alpha-24-2021-01-04","title":"v1.0.0-alpha-24 (2021-01-04)","text":"
"},{"location":"CHANGELOG/#v100-alpha-23-2021-01-04","title":"v1.0.0-alpha-23 (2021-01-04)","text":"
"},{"location":"CHANGELOG/#v100-alpha-22-2020-12-21","title":"v1.0.0-alpha-22 (2020-12-21)","text":"
"},{"location":"CHANGELOG/#v100-alpha-21b-2020-12-13","title":"v1.0.0-alpha-21b (2020-12-13)","text":"
"},{"location":"CHANGELOG/#v100-alpha-21-2020-12-11","title":"v1.0.0-alpha-21 (2020-12-11)","text":"
"},{"location":"CHANGELOG/#v100-alpha-20b-2020-11-04","title":"v1.0.0-alpha-20b (2020-11-04)","text":"
"},{"location":"CHANGELOG/#v100-alpha-20-2020-11-04","title":"v1.0.0-alpha-20 (2020-11-04)","text":"
"},{"location":"CHANGELOG/#v100-alpha-19-2020-11-02","title":"v1.0.0-alpha-19 (2020-11-02)","text":"
"},{"location":"CHANGELOG/#v100-alpha-18-2020-10-29","title":"v1.0.0-alpha-18 (2020-10-29)","text":"
"},{"location":"CHANGELOG/#v100-alpha-17-2020-10-21","title":"v1.0.0-alpha-17 (2020-10-21)","text":"
"},{"location":"CHANGELOG/#v100-alpha-16-2020-10-20","title":"v1.0.0-alpha-16 (2020-10-20)","text":"
"},{"location":"CHANGELOG/#v100-alpha-15-2020-09-25","title":"v1.0.0-alpha-15 (2020-09-25)","text":"
"},{"location":"CHANGELOG/#v100-alpha-13-2020-09-19","title":"v1.0.0-alpha-13 (2020-09-19)","text":"
"},{"location":"CHANGELOG/#v100-alpha-11-2020-08-12","title":"v1.0.0-alpha-11 (2020-08-12)","text":"
"},{"location":"CHANGELOG/#v100-alpha-10-2020-04-27","title":"v1.0.0-alpha-10 (2020-04-27)","text":"
"},{"location":"CHANGELOG/#v100-alpha-9-2020-04-13","title":"v1.0.0-alpha-9 (2020-04-13)","text":"
"},{"location":"CHANGELOG/#v100-alpha-8-2020-03-22","title":"v1.0.0-alpha-8 (2020-03-22)","text":"
"},{"location":"CHANGELOG/#v100-alpha-5-2020-03-18","title":"v1.0.0-alpha-5 (2020-03-18)","text":"
"},{"location":"CHANGELOG/#v100-alpha-3-2020-02-16","title":"v1.0.0-alpha-3 (2020-02-16)","text":"
"},{"location":"CHANGELOG/#v100-alpha-2020-01-08","title":"v1.0.0-alpha (2020-01-08)","text":"
"},{"location":"CHANGELOG/#v099-2019-07-14","title":"v0.9.9 (2019-07-14)","text":"
"},{"location":"docker-instructions/#build-docker-image","title":"Build Docker image","text":"git clone git@github.com:emmo-repo/EMMOntoPy.git\n
"},{"location":"docker-instructions/#run-docker-container","title":"Run Docker container","text":"cd EMMOntoPy\ndocker build -t emmo .\n
"},{"location":"docker-instructions/#notes","title":"Notes","text":"docker run -it emmo\n
sync_reasoner
). Append --memory=2GB
to docker run
in order to align the memory limit with the Java runtime environment.
"},{"location":"docker-instructions/#dockerfile-for-mounting-emmontopy-as-volume-mountdockerfile","title":"Dockerfile for mounting EMMOntoPy as volume (mount.Dockerfile)","text":""},{"location":"docker-instructions/#build-docker-image-mountdockerfile","title":"Build Docker image (mount.DockerFile)","text":"
"},{"location":"docker-instructions/#run-docker-container-mountdockerfile","title":"Run Docker container (mount.Dockerfile)","text":"docker build -t emmomount -f mount.Dockerfile .\n
docker run --rm -it -v $(pwd):/home/user/EMMOntoPy emmomount\n
docker run --rm -it -v ${PWD}:/home/user/EMMOntoPy emmomount\n
"},{"location":"docker-instructions/#notes-on-mounting-on-windows","title":"Notes on mounting on Windows","text":"cd EMMOntoPy\npip install .\n
Set-NetConnectionProfile -interfacealias \"vEthernet (DockerNAT)\" -NetworkCategory Private\n
"},{"location":"tools-instructions/","title":"Instructions for tools available in EMMOntoPy","text":"
"},{"location":"tools-instructions/#emmocheck","title":"emmocheck
","text":"
"},{"location":"tools-instructions/#options","title":"Options","text":"emmocheck [options] iri\n
"},{"location":"tools-instructions/#examples","title":"Examples","text":"positional arguments:\n iri File name or URI to the ontology to test.\n\noptional arguments:\n -h, --help show this help message and exit\n --database FILENAME, -d FILENAME\n Load ontology from Owlready2 sqlite3 database. The\n `iri` argument should in this case be the IRI of the\n ontology you want to check.\n --local, -l Load imported ontologies locally. Their paths are\n specified in Prot\u00e9g\u00e9 catalog files or via the --path\n option. The IRI should be a file name.\n --catalog-file CATALOG_FILE\n Name of Prot\u00e9g\u00e9 catalog file in the same folder as the\n ontology. This option is used together with --local\n and defaults to \"catalog-v001.xml\".\n --path PATH Paths where imported ontologies can be found. May be\n provided as a comma-separated string and/or with\n multiple --path options.\n --check-imported, -i Whether to check imported ontologies.\n --verbose, -v Verbosity level.\n --configfile CONFIGFILE, -c CONFIGFILE\n A yaml file with additional test configurations.\n --skip, -s ShellPattern\n Shell pattern matching tests to skip. This option may be\n provided multiple times.\n --url-from-catalog, -u\n Get url from catalog file.\n --ignore-namespace, -n\n Namespace to be ignored. Can be given multiple times\n
"},{"location":"tools-instructions/#example-configuration-file","title":"Example configuration file","text":" emmocheck http://emmo.info/emmo/1.0.0-alpha2\n emmocheck --database demo.sqlite3 http://www.emmc.info/emmc-csa/demo#\n emmocheck -l emmo.owl (in folder to which emmo was downloaded locally)\n emmocheck --check-imported --ignore-namespace=physicalistic --verbose --url-from-catalog emmo.owl (in folder with downloaded EMMO)\n emmocheck --check-imported --local --url-from-catalog --skip test_namespace emmo.owl\n
--configfile
option that will omit myunits.MyUnitCategory1
and myunits.MyUnitCategory2
from the unit dimensions test.
"},{"location":"tools-instructions/#ontoversion","title":"test_unit_dimensions:\n exceptions:\n - myunits.MyUnitCategory1\n - myunits.MyUnitCategory2\n
ontoversion
","text":"
"},{"location":"tools-instructions/#special-dependencies","title":"Special dependencies","text":"ontoversion [options] iri\n
"},{"location":"tools-instructions/#options_1","title":"Options","text":"rdflib
(Python package)
"},{"location":"tools-instructions/#examples_1","title":"Examples","text":"positional arguments:\n IRI IRI/file to OWL source to extract the version from.\n\noptional arguments:\n -h, --help show this help message and exit\n --format FORMAT, -f FORMAT\n OWL format. Default is \"xml\".\n
ontoversion http://emmo.info/emmo/1.0.0-alpha\n
ontograph
","text":"
"},{"location":"tools-instructions/#dependencies","title":"Dependencies","text":"ontograph [options] iri [output]\n
"},{"location":"tools-instructions/#options_2","title":"Options","text":"
"},{"location":"tools-instructions/#examples_2","title":"Examples","text":"positional arguments:\n IRI File name or URI of the ontology to visualise.\n output name of output file.\n\noptional arguments:\n -h, --help show this help message and exit\n --format FORMAT, -f FORMAT\n Format of output file. By default it is inferred from\n the output file extension.\n --database FILENAME, -d FILENAME\n Load ontology from Owlready2 sqlite3 database. The\n `iri` argument should in this case be the IRI of the\n ontology you want to visualise.\n --local, -l Load imported ontologies locally. Their paths are\n specified in Prot\u00e9g\u00e9 catalog files or via the --path\n option. The IRI should be a file name.\n --catalog-file CATALOG_FILE\n Name of Prot\u00e9g\u00e9 catalog file in the same folder as the\n ontology. This option is used together with --local\n and defaults to \"catalog-v001.xml\".\n --path PATH Paths where imported ontologies can be found. May be\n provided as a comma-separated string and/or with\n multiple --path options.\n --reasoner [{FaCT++,HermiT,Pellet}]\n Run given reasoner on the ontology. Valid reasoners\n are \"FaCT++\" (default), \"HermiT\" and \"Pellet\".\n Note: FaCT++ is preferred with EMMO.\n --root ROOT, -r ROOT Name of root node in the graph. Defaults to all\n classes.\n --leaves LEAVES Leaf nodes for plotting sub-graphs. May be provided\n as a comma-separated string and/or with multiple\n --leaves options.\n --exclude EXCLUDE, -E EXCLUDE\n Nodes, including their subclasses, to exclude from\n sub-graphs. May be provided as a comma-separated\n string and/or with multiple --exclude options.\n --parents N, -p N Adds N levels of parents to graph.\n --relations RELATIONS, -R RELATIONS\n Comma-separated string of relations to visualise.\n Default is \"isA\". \"all\" means include all relations.\n --edgelabels, -e Whether to add labels to edges.\n --addnodes, -n Whether to add missing target nodes in relations.\n --addconstructs, -c Whether to add nodes representing class constructs.\n --rankdir {BT,TB,RL,LR}\n Graph direction (from leaves to root). Possible values\n are: \"BT\" (bottom-top, default), \"TB\" (top-bottom),\n \"RL\" (right-left) and \"LR\" (left-right).\n --style-file JSON_FILE, -s JSON_FILE\n A json file with style definitions.\n --legend, -L Whether to add a legend to the graph.\n --generate-style-file JSON_FILE, -S JSON_FILE\n Write default style file to a json file.\n --plot-modules, -m Whether to plot module inter-dependencies instead of\n their content.\n --display, -D Whether to display graph.\n
The figure below is generated with the last command in the list above. ontograph --relations=all --legend --format=pdf emmo-inferred emmo.pdf # complete ontology\nontograph --root=Holistic --relations=hasInput,hasOutput,hasTemporaryParticipant,hasAgent --parents=2 --legend --leaves=Measurement,Manufacturing,CompleteManufacturing,ManufacturedProduct,CommercialProduct,Manufacturer --format=png --exclude=Task,Workflow,Computation,MaterialTreatment emmo-inferred measurement.png\nontograph --root=Material --relations=all --legend --format=png emmo-inferred material.png\n
ontodoc
","text":"Tool for documenting ontologies.
"},{"location":"tools-instructions/#usage_3","title":"Usage","text":"ontodoc [options] iri outfile\n
"},{"location":"tools-instructions/#dependencies_1","title":"Dependencies","text":"positional arguments:\n IRI File name or URI of the ontology to document.\n OUTFILE Output file.\n\n optional arguments:\n -h, --help show this help message and exit\n --database FILENAME, -d FILENAME\n Load ontology from Owlready2 sqlite3 database. The\n `iri` argument should in this case be the IRI of the\n ontology you want to document.\n --local, -l Load imported ontologies locally. Their paths are\n specified in Prot\u00e9g\u00e9 catalog files or via the --path\n option. The IRI should be a file name.\n --imported, -i Include imported ontologies\n --no-catalog, -n Do not read url from catalog even if it exists.\n --catalog-file CATALOG_FILE\n Name of Prot\u00e9g\u00e9 catalog file in the same folder as the\n ontology. This option is used together with --local\n and defaults to \"catalog-v001.xml\".\n --path PATH Paths where imported ontologies can be found. May be\n provided as a comma-separated string and/or with\n multiple --path options.\n --reasoner [{FaCT++,HermiT,Pellet}]\n Run given reasoner on the ontology. Valid reasoners\n are \"FaCT++\" (default), \"HermiT\" and \"Pellet\".\n Note: FaCT++ is preferred with EMMO.\n --template FILE, -t FILE\n ontodoc input template. If not provided, a simple\n default template will be used. Don't confuse it with\n the pandoc templates.\n --format FORMAT, -f FORMAT\n Output format. May be \"md\", \"simple-html\" or any other\n format supported by pandoc. By default the format is\n inferred from --output.\n --figdir DIR, -D DIR Default directory to store generated figures. If a\n relative path is given, it is relative to the template\n (see --template), or the current directory, if\n --template is not given. Default: \"genfigs\"\n --figformat FIGFORMAT, -F FIGFORMAT\n Format for generated figures. The default is inferred\n from --format.\"\n --max-figwidth MAX_FIGWIDTH, -w MAX_FIGWIDTH\n Maximum figure width. The default is inferred from\n --format.\n --pandoc-option STRING, -p STRING\n Additional pandoc long options overriding those read\n from --pandoc-option-file. It is possible to remove\n pandoc option --XXX with \"--pandoc-option=no-XXX\".\n This option may be provided multiple times.\n --pandoc-option-file FILE, -P FILE\n YAML file with additional pandoc options. Note, that\n default pandoc options are read from the files\n \"pandoc-options.yaml\" and \"pandoc-FORMAT-options.yaml\"\n (where FORMAT is format specified with --format). This\n option allows to override the defaults and add\n additional pandoc options. This option may be provided\n multiple times.\n --keep-generated FILE, -k FILE\n Keep a copy of generated markdown input file for\n pandoc (for debugging).\n
"},{"location":"tools-instructions/#examples_3","title":"Examples","text":"Basic documentation of an ontology demo.owl
can be generated with:
ontodoc --format=simple-html --local demo.owl demo.html\n
See examples/emmodoc/README.md for how this tool is used to generate the html and pdf documentation of EMMO itself.
"},{"location":"tools-instructions/#ontoconvert","title":"ontoconvert
","text":"Tool for converting between different ontology formats.
"},{"location":"tools-instructions/#usage_4","title":"Usage","text":"ontoconvert [options] inputfile outputfile\n
"},{"location":"tools-instructions/#dependencies_2","title":"Dependencies","text":"rdflib
(Python package)positional arguments:\n INPUTFILE Name of inputfile.\n OUTPUTFILE Name og output file.\n\n optional arguments:\n -h, --help show this help message and exit\n --input-format, -f INPUT_FORMAT\n Inputformat. Default is to infer from input.\n --output-format, -F OUTPUT_FORMAT\n Default is to infer from output.\n --no-catalog, -n Do not read catalog even if it exists.\n --inferred, -i Add additional relations inferred by the FaCT++ reasoner to the converted ontology. Implies --squash.\n --base-iri BASE_IRI, -b BASE_IRI\n Base iri of inferred ontology. The default is the base\n iri of the input ontology with \"-inferred\" appended to\n it. Used together with --inferred.\n\n --recursive, -r The output is written to the directories matching the input. This requires Protege catalog files to be present.\n --squash, -s Squash imported ontologies into a single output file.\n
"},{"location":"tools-instructions/#examples_4","title":"Examples","text":"ontoconvert --recursive emmo.ttl owl/emmo.owl\nontoconvert --inferred emmo.ttl emmo-inferred.owl\n
Note, it is then required to add the argument only_local=True
when loading the locally converted ontology in EMMOntoPy, e.g.:
from ontopy import get_ontology\n\nemmo_ontology = get_ontology(\"emmo.owl\").load(only_local=True)\n
Since the catalog file will be overwritten in the above example, writing the output to a separate directory is useful.
ontoconvert --recursive emmo.ttl owl/emmo.owl\n
"},{"location":"tools-instructions/#bugs","title":"Bugs","text":"Since parsing the results from the reasoner is currently broken in Owlready2 (v0.37), a workaround has been added to ontoconvert. This workaround only supports FaCT++. Hence, HermiT and Pellet are currently not available.
"},{"location":"tools-instructions/#excel2onto","title":"excel2onto
","text":"Tool for converting EMMO-based ontologies from Excel to OWL, making it easy for non-ontologists to make EMMO-based domain ontologies.
The Excel file must be in the format provided by ontology_template.xlsx.
"},{"location":"tools-instructions/#usage_5","title":"Usage","text":"excel2onto [options] excelpath\n
"},{"location":"tools-instructions/#dependencies_3","title":"Dependencies","text":"pandas
(Python package)positional arguments:\n excelpath path to excel book\n\noptions:\n -h, --help show this help message and exit\n --output OUTPUT, -o OUTPUT\n Name of output ontology, \u00b4ontology.ttl\u00b4 is default\n --force, -f Whether to force generation of ontology on non-fatal\n error.\n
See the documentation of the python api for a thorough description of the requirements on the Excel workbook.
"},{"location":"tools-instructions/#examples_5","title":"Examples","text":"Create a new_ontology.ttl
turtle file from the Excel file new_ontology.xlsx
:
excel2onto -o new_ontology.ttl new_ontology.xlsx\n
"},{"location":"tools-instructions/#bugs_1","title":"Bugs","text":"equivalentTo
is currently not supported.
A module for testing an ontology against conventions defined for EMMO.
A YAML file can be provided with additional test configurations.
Example configuration file:
test_unit_dimensions:\n exceptions:\n - myunits.MyUnitCategory1\n - myunits.MyUnitCategory2\n\nskip:\n - name_of_test_to_skip\n\nenable:\n - name_of_test_to_enable\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestEMMOConventions","title":" TestEMMOConventions
","text":"Base class for testing an ontology against EMMO conventions.
Source code inemmopy/emmocheck.py
class TestEMMOConventions(unittest.TestCase):\n \"\"\"Base class for testing an ontology against EMMO conventions.\"\"\"\n\n config = {} # configurations\n\n def get_config(self, string, default=None):\n \"\"\"Returns the configuration specified by `string`.\n\n If configuration is not found in the configuration file, `default` is\n returned.\n\n Sub-configurations can be accessed by separating the components with\n dots, like \"test_namespace.exceptions\".\n \"\"\"\n result = self.config\n try:\n for token in string.split(\".\"):\n result = result[token]\n except KeyError:\n return default\n return result\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestEMMOConventions.get_config","title":"get_config(self, string, default=None)
","text":"Returns the configuration specified by string
.
If configuration is not found in the configuration file, default
is returned.
Sub-configurations can be accessed by separating the components with dots, like \"test_namespace.exceptions\".
Source code inemmopy/emmocheck.py
def get_config(self, string, default=None):\n \"\"\"Returns the configuration specified by `string`.\n\n If configuration is not found in the configuration file, `default` is\n returned.\n\n Sub-configurations can be accessed by separating the components with\n dots, like \"test_namespace.exceptions\".\n \"\"\"\n result = self.config\n try:\n for token in string.split(\".\"):\n result = result[token]\n except KeyError:\n return default\n return result\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions","title":" TestFunctionalEMMOConventions
","text":"Test functional EMMO conventions.
Source code inemmopy/emmocheck.py
class TestFunctionalEMMOConventions(TestEMMOConventions):\n \"\"\"Test functional EMMO conventions.\"\"\"\n\n def test_unit_dimension(self):\n \"\"\"Check that all measurement units have a physical dimension.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"metrology.MultipleUnit\",\n \"metrology.SubMultipleUnit\",\n \"metrology.OffSystemUnit\",\n \"metrology.PrefixedUnit\",\n \"metrology.NonPrefixedUnit\",\n \"metrology.SpecialUnit\",\n \"metrology.DerivedUnit\",\n \"metrology.BaseUnit\",\n \"metrology.UnitSymbol\",\n \"siunits.SICoherentDerivedUnit\",\n \"siunits.SINonCoherentDerivedUnit\",\n \"siunits.SISpecialUnit\",\n \"siunits.SICoherentUnit\",\n \"siunits.SIPrefixedUnit\",\n \"siunits.SIBaseUnit\",\n \"siunits.SIUnitSymbol\",\n \"siunits.SIUnit\",\n \"emmo.MultipleUnit\",\n \"emmo.SubMultipleUnit\",\n \"emmo.OffSystemUnit\",\n \"emmo.PrefixedUnit\",\n \"emmo.NonPrefixedUnit\",\n \"emmo.SpecialUnit\",\n \"emmo.DerivedUnit\",\n \"emmo.BaseUnit\",\n \"emmo.UnitSymbol\",\n \"emmo.SIAccepted\",\n \"emmo.SICoherentDerivedUnit\",\n \"emmo.SINonCoherentDerivedUnit\",\n \"emmo.SISpecialUnit\",\n \"emmo.SICoherentUnit\",\n \"emmo.SIPrefixedUnit\",\n \"emmo.SIBaseUnit\",\n \"emmo.SIUnitSymbol\",\n \"emmo.SIUnit\",\n )\n )\n if not hasattr(self.onto, \"MeasurementUnit\"):\n return\n exceptions.update(self.get_config(\"test_unit_dimension.exceptions\", ()))\n regex = re.compile(r\"^(emmo|metrology).hasDimensionString.value\\(.*\\)$\")\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.MeasurementUnit.descendants():\n if not self.check_imported and cls not in classes:\n continue\n # Assume that actual units are not subclassed\n if not list(cls.subclasses()) and repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertTrue(\n any(\n regex.match(repr(r))\n for r in cls.get_indirect_is_a()\n ),\n msg=cls,\n )\n\n def test_quantity_dimension_beta3(self):\n \"\"\"Check that all quantities have a physicalDimension annotation.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n 
\"emmo.PhysicoChemical\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Universal\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n regex = re.compile(\n \"^T([+-][1-9]|0) L([+-][1-9]|0) M([+-][1-9]|0) I([+-][1-9]|0) \"\n \"(H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) J([+-][1-9]|0)$\"\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n anno = cls.get_annotations()\n self.assertIn(\"physicalDimension\", anno, msg=cls)\n physdim = anno[\"physicalDimension\"].first()\n self.assertRegex(physdim, regex, msg=cls)\n\n def test_quantity_dimension(self):\n \"\"\"Check that all quantities have a physicalDimension.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n # pylint: disable=invalid-name\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.ISO80000Categorised\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.EquilibriumConstant\", # physical dimension may change\n \"emmo.Solubility\",\n \"emmo.Universal\",\n \"emmo.Intensive\",\n \"emmo.Extensive\",\n \"emmo.Concentration\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if issubclass(cls, self.onto.ISO80000Categorised):\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n for r in cls.get_indirect_is_a():\n if isinstance(r, owlready2.Restriction) and repr(\n r\n ).startswith(\"emmo.hasMeasurementUnit.some\"):\n self.assertTrue(\n issubclass(\n r.value,\n (\n self.onto.DimensionalUnit,\n self.onto.DimensionlessUnit,\n ),\n )\n )\n break\n else:\n self.assertTrue(\n issubclass(cls, self.onto.ISQDimensionlessQuantity)\n )\n\n def test_dimensional_unit(self):\n 
\"\"\"Check correct syntax of dimension string of dimensional units.\"\"\"\n\n # This test requires that the ontology has imported SIDimensionalUnit\n if \"SIDimensionalUnit\" not in self.onto:\n self.skipTest(\"SIDimensionalUnit is not imported\")\n\n # pylint: disable=invalid-name\n regex = re.compile(\n \"^T([+-][1-9][0-9]*|0) L([+-][1-9]|0) M([+-][1-9]|0) \"\n \"I([+-][1-9]|0) (H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) \"\n \"J([+-][1-9]|0)$\"\n )\n for cls in self.onto.SIDimensionalUnit.__subclasses__():\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertEqual(len(cls.equivalent_to), 1)\n r = cls.equivalent_to[0]\n self.assertIsInstance(r, owlready2.Restriction)\n self.assertRegex(r.value, regex)\n\n def test_physical_quantity_dimension(self):\n \"\"\"Check that all physical quantities have `hasPhysicalDimension`.\n\n Note: this test will fail before isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n\n \"\"\"\n exceptions = set(\n (\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclearPhysicsQuantity\",\n \"emmo.ThermodynamicalQuantity\",\n \"emmo.LightAndRadiationQuantity\",\n \"emmo.SpaceAndTimeQuantity\",\n \"emmo.AcousticQuantity\",\n \"emmo.PhysioChememicalQuantity\",\n \"emmo.ElectromagneticQuantity\",\n \"emmo.MechanicalQuantity\",\n \"emmo.CondensedMatterPhysicsQuantity\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Extensive\",\n \"emmo.Intensive\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_physical_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n try:\n class_props = cls.INDIRECT_get_class_properties()\n except AttributeError:\n # The INDIRECT_get_class_properties() method\n # does not support inverse properties. 
Build\n # class_props manually...\n class_props = set()\n for _ in cls.mro():\n if hasattr(_, \"is_a\"):\n class_props.update(\n [\n restriction.property\n for restriction in _.is_a\n if isinstance(\n restriction, owlready2.Restriction\n )\n ]\n )\n\n self.assertIn(\n self.onto.hasPhysicalDimension, class_props, msg=cls\n )\n\n def test_namespace(self):\n \"\"\"Check that all IRIs are namespaced after their (sub)ontology.\n\n Configurations:\n exceptions - full name of entities to ignore.\n \"\"\"\n exceptions = set(\n (\n \"owl.qualifiedCardinality\",\n \"owl.minQualifiedCardinality\",\n \"terms.creator\",\n \"terms.contributor\",\n \"terms.publisher\",\n \"terms.title\",\n \"terms.license\",\n \"terms.abstract\",\n \"core.prefLabel\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"mereotopology.Item\",\n \"manufacturing.EngineeredMaterial\",\n )\n )\n exceptions.update(self.get_config(\"test_namespace.exceptions\", ()))\n\n def checker(onto, ignore_namespace):\n if list(\n filter(onto.base_iri.strip(\"#\").endswith, self.ignore_namespace)\n ):\n print(f\"Skipping namespace: {onto.base_iri}\")\n return\n entities = itertools.chain(\n onto.classes(),\n onto.object_properties(),\n onto.data_properties(),\n onto.individuals(),\n onto.annotation_properties(),\n )\n for entity in entities:\n if entity not in visited and repr(entity) not in exceptions:\n visited.add(entity)\n with self.subTest(\n iri=entity.iri,\n base_iri=onto.base_iri,\n entity=repr(entity),\n ):\n self.assertTrue(\n entity.iri.endswith(entity.name),\n msg=(\n \"the final part of entity IRIs must be their \"\n \"name\"\n ),\n )\n self.assertEqual(\n entity.iri,\n onto.base_iri + entity.name,\n msg=(\n f\"IRI {entity.iri!r} does not correspond to \"\n f\"module namespace: {onto.base_iri!r}\"\n ),\n )\n\n if self.check_imported:\n for imp_onto in onto.imported_ontologies:\n if imp_onto not in visited_onto:\n visited_onto.add(imp_onto)\n checker(imp_onto, ignore_namespace)\n\n visited = set()\n visited_onto = set()\n checker(self.onto, self.ignore_namespace)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_dimensional_unit","title":"test_dimensional_unit(self)
","text":"Check correct syntax of dimension string of dimensional units.
Source code inemmopy/emmocheck.py
def test_dimensional_unit(self):\n \"\"\"Check correct syntax of dimension string of dimensional units.\"\"\"\n\n # This test requires that the ontology has imported SIDimensionalUnit\n if \"SIDimensionalUnit\" not in self.onto:\n self.skipTest(\"SIDimensionalUnit is not imported\")\n\n # pylint: disable=invalid-name\n regex = re.compile(\n \"^T([+-][1-9][0-9]*|0) L([+-][1-9]|0) M([+-][1-9]|0) \"\n \"I([+-][1-9]|0) (H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) \"\n \"J([+-][1-9]|0)$\"\n )\n for cls in self.onto.SIDimensionalUnit.__subclasses__():\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertEqual(len(cls.equivalent_to), 1)\n r = cls.equivalent_to[0]\n self.assertIsInstance(r, owlready2.Restriction)\n self.assertRegex(r.value, regex)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_namespace","title":"test_namespace(self)
","text":"Check that all IRIs are namespaced after their (sub)ontology.
Configurations
exceptions - full name of entities to ignore.
Source code inemmopy/emmocheck.py
def test_namespace(self):\n \"\"\"Check that all IRIs are namespaced after their (sub)ontology.\n\n Configurations:\n exceptions - full name of entities to ignore.\n \"\"\"\n exceptions = set(\n (\n \"owl.qualifiedCardinality\",\n \"owl.minQualifiedCardinality\",\n \"terms.creator\",\n \"terms.contributor\",\n \"terms.publisher\",\n \"terms.title\",\n \"terms.license\",\n \"terms.abstract\",\n \"core.prefLabel\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"mereotopology.Item\",\n \"manufacturing.EngineeredMaterial\",\n )\n )\n exceptions.update(self.get_config(\"test_namespace.exceptions\", ()))\n\n def checker(onto, ignore_namespace):\n if list(\n filter(onto.base_iri.strip(\"#\").endswith, self.ignore_namespace)\n ):\n print(f\"Skipping namespace: {onto.base_iri}\")\n return\n entities = itertools.chain(\n onto.classes(),\n onto.object_properties(),\n onto.data_properties(),\n onto.individuals(),\n onto.annotation_properties(),\n )\n for entity in entities:\n if entity not in visited and repr(entity) not in exceptions:\n visited.add(entity)\n with self.subTest(\n iri=entity.iri,\n base_iri=onto.base_iri,\n entity=repr(entity),\n ):\n self.assertTrue(\n entity.iri.endswith(entity.name),\n msg=(\n \"the final part of entity IRIs must be their \"\n \"name\"\n ),\n )\n self.assertEqual(\n entity.iri,\n onto.base_iri + entity.name,\n msg=(\n f\"IRI {entity.iri!r} does not correspond to \"\n f\"module namespace: {onto.base_iri!r}\"\n ),\n )\n\n if self.check_imported:\n for imp_onto in onto.imported_ontologies:\n if imp_onto not in visited_onto:\n visited_onto.add(imp_onto)\n checker(imp_onto, ignore_namespace)\n\n visited = set()\n visited_onto = set()\n checker(self.onto, self.ignore_namespace)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_physical_quantity_dimension","title":"test_physical_quantity_dimension(self)
","text":"Check that all physical quantities have hasPhysicalDimension
.
Note: this test will fail before isq is moved to emmo/domain.
Configurations:
exceptions - full class names of classes to ignore.
Source code in emmopy/emmocheck.py
def test_physical_quantity_dimension(self):\n \"\"\"Check that all physical quantities have `hasPhysicalDimension`.\n\n Note: this test will fail before isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n\n \"\"\"\n exceptions = set(\n (\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclearPhysicsQuantity\",\n \"emmo.ThermodynamicalQuantity\",\n \"emmo.LightAndRadiationQuantity\",\n \"emmo.SpaceAndTimeQuantity\",\n \"emmo.AcousticQuantity\",\n \"emmo.PhysioChememicalQuantity\",\n \"emmo.ElectromagneticQuantity\",\n \"emmo.MechanicalQuantity\",\n \"emmo.CondensedMatterPhysicsQuantity\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Extensive\",\n \"emmo.Intensive\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_physical_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n try:\n class_props = cls.INDIRECT_get_class_properties()\n except AttributeError:\n # The INDIRECT_get_class_properties() method\n # does not support inverse properties. Build\n # class_props manually...\n class_props = set()\n for _ in cls.mro():\n if hasattr(_, \"is_a\"):\n class_props.update(\n [\n restriction.property\n for restriction in _.is_a\n if isinstance(\n restriction, owlready2.Restriction\n )\n ]\n )\n\n self.assertIn(\n self.onto.hasPhysicalDimension, class_props, msg=cls\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_quantity_dimension","title":"test_quantity_dimension(self)
","text":"Check that all quantities have a physicalDimension.
Note: this test will be deprecated when isq is moved to emmo/domain.
Configurations:
exceptions - full class names of classes to ignore.
Source code in emmopy/emmocheck.py
def test_quantity_dimension(self):\n \"\"\"Check that all quantities have a physicalDimension.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n # pylint: disable=invalid-name\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.ISO80000Categorised\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.EquilibriumConstant\", # physical dimension may change\n \"emmo.Solubility\",\n \"emmo.Universal\",\n \"emmo.Intensive\",\n \"emmo.Extensive\",\n \"emmo.Concentration\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if issubclass(cls, self.onto.ISO80000Categorised):\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n for r in cls.get_indirect_is_a():\n if isinstance(r, owlready2.Restriction) and repr(\n r\n ).startswith(\"emmo.hasMeasurementUnit.some\"):\n self.assertTrue(\n issubclass(\n r.value,\n (\n self.onto.DimensionalUnit,\n self.onto.DimensionlessUnit,\n ),\n )\n )\n break\n else:\n self.assertTrue(\n issubclass(cls, self.onto.ISQDimensionlessQuantity)\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_quantity_dimension_beta3","title":"test_quantity_dimension_beta3(self)
","text":"Check that all quantities have a physicalDimension annotation.
Note: this test will be deprecated when isq is moved to emmo/domain.
Configurations:
exceptions - full class names of classes to ignore.
Source code in emmopy/emmocheck.py
def test_quantity_dimension_beta3(self):\n \"\"\"Check that all quantities have a physicalDimension annotation.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n \"emmo.PhysicoChemical\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Universal\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n regex = re.compile(\n \"^T([+-][1-9]|0) L([+-][1-9]|0) M([+-][1-9]|0) I([+-][1-9]|0) \"\n \"(H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) J([+-][1-9]|0)$\"\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n anno = cls.get_annotations()\n self.assertIn(\"physicalDimension\", anno, msg=cls)\n physdim = anno[\"physicalDimension\"].first()\n self.assertRegex(physdim, regex, msg=cls)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_unit_dimension","title":"test_unit_dimension(self)
","text":"Check that all measurement units have a physical dimension.
Configurations:
exceptions - full class names of classes to ignore.
Source code in emmopy/emmocheck.py
def test_unit_dimension(self):\n \"\"\"Check that all measurement units have a physical dimension.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"metrology.MultipleUnit\",\n \"metrology.SubMultipleUnit\",\n \"metrology.OffSystemUnit\",\n \"metrology.PrefixedUnit\",\n \"metrology.NonPrefixedUnit\",\n \"metrology.SpecialUnit\",\n \"metrology.DerivedUnit\",\n \"metrology.BaseUnit\",\n \"metrology.UnitSymbol\",\n \"siunits.SICoherentDerivedUnit\",\n \"siunits.SINonCoherentDerivedUnit\",\n \"siunits.SISpecialUnit\",\n \"siunits.SICoherentUnit\",\n \"siunits.SIPrefixedUnit\",\n \"siunits.SIBaseUnit\",\n \"siunits.SIUnitSymbol\",\n \"siunits.SIUnit\",\n \"emmo.MultipleUnit\",\n \"emmo.SubMultipleUnit\",\n \"emmo.OffSystemUnit\",\n \"emmo.PrefixedUnit\",\n \"emmo.NonPrefixedUnit\",\n \"emmo.SpecialUnit\",\n \"emmo.DerivedUnit\",\n \"emmo.BaseUnit\",\n \"emmo.UnitSymbol\",\n \"emmo.SIAccepted\",\n \"emmo.SICoherentDerivedUnit\",\n \"emmo.SINonCoherentDerivedUnit\",\n \"emmo.SISpecialUnit\",\n \"emmo.SICoherentUnit\",\n \"emmo.SIPrefixedUnit\",\n \"emmo.SIBaseUnit\",\n \"emmo.SIUnitSymbol\",\n \"emmo.SIUnit\",\n )\n )\n if not hasattr(self.onto, \"MeasurementUnit\"):\n return\n exceptions.update(self.get_config(\"test_unit_dimension.exceptions\", ()))\n regex = re.compile(r\"^(emmo|metrology).hasDimensionString.value\\(.*\\)$\")\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.MeasurementUnit.descendants():\n if not self.check_imported and cls not in classes:\n continue\n # Assume that actual units are not subclassed\n if not list(cls.subclasses()) and repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertTrue(\n any(\n regex.match(repr(r))\n for r in cls.get_indirect_is_a()\n ),\n msg=cls,\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions","title":" TestSyntacticEMMOConventions
","text":"Test syntactic EMMO conventions.
Source code in emmopy/emmocheck.py
class TestSyntacticEMMOConventions(TestEMMOConventions):\n \"\"\"Test syntactic EMMO conventions.\"\"\"\n\n def test_number_of_labels(self):\n \"\"\"Check that all entities have one and only one prefLabel.\n\n Use \"altLabel\" for synonyms.\n\n The only allowed exception is entities who's representation\n starts with \"owl.\".\n \"\"\"\n exceptions = set(\n (\n \"0.1.homepage\", # foaf:homepage\n \"0.1.logo\",\n \"0.1.page\",\n \"0.1.name\",\n \"bibo:doi\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"core.prefLabel\",\n \"terms.abstract\",\n \"terms.alternative\",\n \"terms:bibliographicCitation\",\n \"terms.contributor\",\n \"terms.created\",\n \"terms.creator\",\n \"terms.hasFormat\",\n \"terms.identifier\",\n \"terms.issued\",\n \"terms.license\",\n \"terms.modified\",\n \"terms.publisher\",\n \"terms.source\",\n \"terms.title\",\n \"vann:preferredNamespacePrefix\",\n \"vann:preferredNamespaceUri\",\n )\n )\n exceptions.update(\n self.get_config(\"test_number_of_labels.exceptions\", ())\n )\n if (\n \"prefLabel\"\n in self.onto.world._props # pylint: disable=protected-access\n ):\n for entity in self.onto.classes(self.check_imported):\n if repr(entity) not in exceptions:\n with self.subTest(\n entity=entity,\n label=get_label(entity),\n prefLabels=entity.prefLabel,\n ):\n if not repr(entity).startswith(\"owl.\"):\n self.assertTrue(hasattr(entity, \"prefLabel\"))\n self.assertEqual(1, len(entity.prefLabel))\n else:\n self.fail(\"ontology has no prefLabel\")\n\n def test_class_label(self):\n \"\"\"Check that class labels are CamelCase and valid identifiers.\n\n For CamelCase, we are currently only checking that the labels\n start with upper case.\n \"\"\"\n exceptions = set(\n (\n \"0-manifold\", # not needed in 1.0.0-beta\n \"1-manifold\",\n \"2-manifold\",\n \"3-manifold\",\n \"C++\",\n \"3DPrinting\",\n )\n )\n exceptions.update(self.get_config(\"test_class_label.exceptions\", ()))\n\n for cls in self.onto.classes(self.check_imported):\n for label in cls.label + getattr(cls, \"prefLabel\", []):\n if str(label) not in exceptions:\n with self.subTest(entity=cls, label=label):\n self.assertTrue(label.isidentifier())\n self.assertTrue(label[0].isupper())\n\n def test_object_property_label(self):\n \"\"\"Check that object property labels are lowerCamelCase.\n\n Allowed exceptions: \"EMMORelation\"\n\n If they start with \"has\" or \"is\" they should be followed by a\n upper case letter.\n\n If they start with \"is\" they should also end with \"Of\".\n \"\"\"\n exceptions = set((\"EMMORelation\",))\n exceptions.update(\n self.get_config(\"test_object_property_label.exceptions\", ())\n )\n\n for obj_prop in self.onto.object_properties():\n if repr(obj_prop) not in exceptions:\n for label in obj_prop.label:\n with self.subTest(entity=obj_prop, label=label):\n self.assertTrue(\n label[0].islower(), \"label start with lowercase\"\n )\n if label.startswith(\"has\"):\n self.assertTrue(\n label[3].isupper(),\n 'what follows \"has\" must be \"uppercase\"',\n )\n if label.startswith(\"is\"):\n self.assertTrue(\n label[2].isupper(),\n 'what follows \"is\" must be \"uppercase\"',\n )\n self.assertTrue(\n label.endswith((\"Of\", \"With\")),\n 'should end with \"Of\" or \"With\"',\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions.test_class_label","title":"test_class_label(self)
","text":"Check that class labels are CamelCase and valid identifiers.
For CamelCase, we are currently only checking that the labels start with upper case.
Source code in emmopy/emmocheck.py
def test_class_label(self):\n \"\"\"Check that class labels are CamelCase and valid identifiers.\n\n For CamelCase, we are currently only checking that the labels\n start with upper case.\n \"\"\"\n exceptions = set(\n (\n \"0-manifold\", # not needed in 1.0.0-beta\n \"1-manifold\",\n \"2-manifold\",\n \"3-manifold\",\n \"C++\",\n \"3DPrinting\",\n )\n )\n exceptions.update(self.get_config(\"test_class_label.exceptions\", ()))\n\n for cls in self.onto.classes(self.check_imported):\n for label in cls.label + getattr(cls, \"prefLabel\", []):\n if str(label) not in exceptions:\n with self.subTest(entity=cls, label=label):\n self.assertTrue(label.isidentifier())\n self.assertTrue(label[0].isupper())\n
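A quick illustration of the two checks applied above, using plain Python string methods (the labels are illustrative; '3DPrinting' and 'C++' come from the exception list above):

# Valid class label: a Python identifier starting with upper case.
assert "Atom".isidentifier() and "Atom"[0].isupper()
# Labels such as "3DPrinting" and "C++" fail isidentifier(), which is why
# they are listed as exceptions above.
assert not "3DPrinting".isidentifier()
assert not "C++".isidentifier()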
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions.test_number_of_labels","title":"test_number_of_labels(self)
","text":"Check that all entities have one and only one prefLabel.
Use \"altLabel\" for synonyms.
The only allowed exception is entities whose representation starts with "owl.".
Source code in emmopy/emmocheck.py
def test_number_of_labels(self):\n \"\"\"Check that all entities have one and only one prefLabel.\n\n Use \"altLabel\" for synonyms.\n\n The only allowed exception is entities who's representation\n starts with \"owl.\".\n \"\"\"\n exceptions = set(\n (\n \"0.1.homepage\", # foaf:homepage\n \"0.1.logo\",\n \"0.1.page\",\n \"0.1.name\",\n \"bibo:doi\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"core.prefLabel\",\n \"terms.abstract\",\n \"terms.alternative\",\n \"terms:bibliographicCitation\",\n \"terms.contributor\",\n \"terms.created\",\n \"terms.creator\",\n \"terms.hasFormat\",\n \"terms.identifier\",\n \"terms.issued\",\n \"terms.license\",\n \"terms.modified\",\n \"terms.publisher\",\n \"terms.source\",\n \"terms.title\",\n \"vann:preferredNamespacePrefix\",\n \"vann:preferredNamespaceUri\",\n )\n )\n exceptions.update(\n self.get_config(\"test_number_of_labels.exceptions\", ())\n )\n if (\n \"prefLabel\"\n in self.onto.world._props # pylint: disable=protected-access\n ):\n for entity in self.onto.classes(self.check_imported):\n if repr(entity) not in exceptions:\n with self.subTest(\n entity=entity,\n label=get_label(entity),\n prefLabels=entity.prefLabel,\n ):\n if not repr(entity).startswith(\"owl.\"):\n self.assertTrue(hasattr(entity, \"prefLabel\"))\n self.assertEqual(1, len(entity.prefLabel))\n else:\n self.fail(\"ontology has no prefLabel\")\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions.test_object_property_label","title":"test_object_property_label(self)
","text":"Check that object property labels are lowerCamelCase.
Allowed exceptions: \"EMMORelation\"
If they start with "has" or "is" they should be followed by an upper case letter.
If they start with "is" they should also end with "Of".
Source code in emmopy/emmocheck.py
def test_object_property_label(self):\n \"\"\"Check that object property labels are lowerCamelCase.\n\n Allowed exceptions: \"EMMORelation\"\n\n If they start with \"has\" or \"is\" they should be followed by a\n upper case letter.\n\n If they start with \"is\" they should also end with \"Of\".\n \"\"\"\n exceptions = set((\"EMMORelation\",))\n exceptions.update(\n self.get_config(\"test_object_property_label.exceptions\", ())\n )\n\n for obj_prop in self.onto.object_properties():\n if repr(obj_prop) not in exceptions:\n for label in obj_prop.label:\n with self.subTest(entity=obj_prop, label=label):\n self.assertTrue(\n label[0].islower(), \"label start with lowercase\"\n )\n if label.startswith(\"has\"):\n self.assertTrue(\n label[3].isupper(),\n 'what follows \"has\" must be \"uppercase\"',\n )\n if label.startswith(\"is\"):\n self.assertTrue(\n label[2].isupper(),\n 'what follows \"is\" must be \"uppercase\"',\n )\n self.assertTrue(\n label.endswith((\"Of\", \"With\")),\n 'should end with \"Of\" or \"With\"',\n )\n
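The rules enforced above can be summarised in a small hypothetical helper (a sketch, not part of emmocheck):

def label_ok(label: str) -> bool:
    # Object property labels must start with lower case.
    if not label[0].islower():
        return False
    # "has..." must be followed by an upper-case letter.
    if label.startswith("has") and not label[3].isupper():
        return False
    # "is..." must be followed by an upper-case letter and end with "Of"/"With".
    if label.startswith("is"):
        if not label[2].isupper() or not label.endswith(("Of", "With")):
            return False
    return True

assert label_ok("hasPart") and label_ok("isPartOf")
assert not label_ok("HasPart")      # upper-case start
assert not label_ok("isConnected")  # missing "Of"/"With" ending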
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.main","title":"main(argv=None)
","text":"Run all checks on ontology iri
.
Default is 'http://emmo.info/emmo'.
Parameters:
argv (list, default None): List of arguments, similar to sys.argv[1:]. Mainly for testing purposes, since it allows one to invoke the tool manually / through Python.
Source code in emmopy/emmocheck.py
def main(\n argv: list = None,\n): # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n \"\"\"Run all checks on ontology `iri`.\n\n Default is 'http://emmo.info/emmo'.\n\n Parameters:\n argv: List of arguments, similar to `sys.argv[1:]`.\n Mainly for testing purposes, since it allows one to invoke the tool\n manually / through Python.\n\n \"\"\"\n parser = argparse.ArgumentParser(description=__doc__)\n parser.add_argument(\"iri\", help=\"File name or URI to the ontology to test.\")\n parser.add_argument(\n \"--database\",\n \"-d\",\n metavar=\"FILENAME\",\n default=\":memory:\",\n help=(\n \"Load ontology from Owlready2 sqlite3 database. The `iri` argument\"\n \" should in this case be the IRI of the ontology you want to \"\n \"check.\"\n ),\n )\n parser.add_argument(\n \"--local\",\n \"-l\",\n action=\"store_true\",\n help=(\n \"Load imported ontologies locally. Their paths are specified in \"\n \"Prot\u00e8g\u00e8 catalog files or via the --path option. The IRI should \"\n \"be a file name.\"\n ),\n )\n parser.add_argument(\n \"--catalog-file\",\n default=\"catalog-v001.xml\",\n help=(\n \"Name of Prot\u00e8g\u00e8 catalog file in the same folder as the ontology. \"\n \"This option is used together with --local and defaults to \"\n '\"catalog-v001.xml\".'\n ),\n )\n parser.add_argument(\n \"--path\",\n action=\"append\",\n default=[],\n help=(\n \"Paths where imported ontologies can be found. May be provided as \"\n \"a comma-separated string and/or with multiple --path options.\"\n ),\n )\n parser.add_argument(\n \"--check-imported\",\n \"-i\",\n action=\"store_true\",\n help=\"Whether to check imported ontologies.\",\n )\n parser.add_argument(\n \"--verbose\", \"-v\", action=\"store_true\", help=\"Verbosity level.\"\n )\n parser.add_argument(\n \"--configfile\",\n \"-c\",\n help=\"A yaml file with additional test configurations.\",\n )\n parser.add_argument(\n \"--skip\",\n \"-s\",\n action=\"append\",\n default=[],\n help=(\n \"Shell pattern matching tests to skip. This option may be \"\n \"provided multiple times.\"\n ),\n )\n parser.add_argument(\n \"--enable\",\n \"-e\",\n action=\"append\",\n default=[],\n help=(\n \"Shell pattern matching tests to enable that have been skipped by \"\n \"default or in the config file. This option may be provided \"\n \"multiple times.\"\n ),\n )\n parser.add_argument( # deprecated, replaced by --no-catalog\n \"--url-from-catalog\",\n \"-u\",\n default=None,\n action=\"store_true\",\n help=\"Get url from catalog file\",\n )\n parser.add_argument(\n \"--no-catalog\",\n action=\"store_false\",\n dest=\"url_from_catalog\",\n default=None,\n help=\"Whether to not read catalog file even if it exists.\",\n )\n parser.add_argument(\n \"--ignore-namespace\",\n \"-n\",\n action=\"append\",\n default=[],\n help=\"Namespace to be ignored. Can be given multiple times\",\n )\n\n # Options to pass forward to unittest\n parser.add_argument(\n \"--buffer\",\n \"-b\",\n dest=\"unittest\",\n action=\"append_const\",\n const=\"-b\",\n help=(\n \"The standard output and standard error streams are buffered \"\n \"during the test run. Output during a passing test is discarded. \"\n \"Output is echoed normally on test fail or error and is added to \"\n \"the failure messages.\"\n ),\n )\n parser.add_argument(\n \"--catch\",\n dest=\"unittest\",\n action=\"append_const\",\n const=\"-c\",\n help=(\n \"Control-C during the test run waits for the current test to end \"\n \"and then reports all the results so far. 
A second control-C \"\n \"raises the normal KeyboardInterrupt exception\"\n ),\n )\n parser.add_argument(\n \"--failfast\",\n \"-f\",\n dest=\"unittest\",\n action=\"append_const\",\n const=\"-f\",\n help=\"Stop the test run on the first error or failure.\",\n )\n try:\n args = parser.parse_args(args=argv)\n sys.argv[1:] = args.unittest if args.unittest else []\n if args.verbose:\n sys.argv.append(\"-v\")\n except SystemExit as exc:\n sys.exit(exc.code) # Exit without traceback on invalid arguments\n\n # Append to onto_path\n for paths in args.path:\n for path in paths.split(\",\"):\n if path not in onto_path:\n onto_path.append(path)\n\n # Load ontology\n world = World(filename=args.database)\n if args.database != \":memory:\" and args.iri not in world.ontologies:\n parser.error(\n \"The IRI argument should be one of the ontologies in \"\n \"the database:\\n \" + \"\\n \".join(world.ontologies.keys())\n )\n\n onto = world.get_ontology(args.iri)\n onto.load(\n only_local=args.local,\n url_from_catalog=args.url_from_catalog,\n catalog_file=args.catalog_file,\n )\n\n # Store settings TestEMMOConventions\n TestEMMOConventions.onto = onto\n TestEMMOConventions.check_imported = args.check_imported\n TestEMMOConventions.ignore_namespace = args.ignore_namespace\n\n # Configure tests\n verbosity = 2 if args.verbose else 1\n if args.configfile:\n import yaml # pylint: disable=import-outside-toplevel\n\n with open(args.configfile, \"rt\") as handle:\n TestEMMOConventions.config.update(\n yaml.load(handle, Loader=yaml.SafeLoader)\n )\n\n # Run all subclasses of TestEMMOConventions as test suites\n status = 0\n for cls in TestEMMOConventions.__subclasses__():\n # pylint: disable=cell-var-from-loop,undefined-loop-variable\n\n suite = unittest.TestLoader().loadTestsFromTestCase(cls)\n\n # Mark tests to be skipped\n for test in suite:\n name = test.id().split(\".\")[-1]\n skipped = set( # skipped by default\n [\n \"test_namespace\",\n \"test_physical_quantity_dimension_annotation\",\n \"test_quantity_dimension_beta3\",\n \"test_physical_quantity_dimension\",\n ]\n )\n msg = {name: \"skipped by default\" for name in skipped}\n\n # enable/skip tests from config file\n for pattern in test.get_config(\"enable\", ()):\n if fnmatch.fnmatchcase(name, pattern):\n skipped.remove(name)\n for pattern in test.get_config(\"skip\", ()):\n if fnmatch.fnmatchcase(name, pattern):\n skipped.add(name)\n msg[name] = \"skipped from config file\"\n\n # enable/skip from command line\n for pattern in args.enable:\n if fnmatch.fnmatchcase(name, pattern):\n skipped.remove(name)\n for pattern in args.skip:\n if fnmatch.fnmatchcase(name, pattern):\n skipped.add(name)\n msg[name] = \"skipped from command line\"\n\n if name in skipped:\n setattr(test, \"setUp\", lambda: test.skipTest(msg.get(name, \"\")))\n\n runner = TextTestRunner(verbosity=verbosity)\n runner.resultclass.checkmode = True\n result = runner.run(suite)\n if result.failures:\n status = 1\n\n return status\n
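main() can also be invoked from Python by passing an argument list. A hedged usage sketch, where myonto.ttl is a hypothetical local ontology file and the options mirror the command-line flags parsed above:

from emmopy.emmocheck import main

# Returns 0 if all checks pass, 1 if any test suite has failures.
status = main(["myonto.ttl", "--local", "--verbose"])
print("conventions violated" if status else "all checks passed")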
"},{"location":"api_reference/emmopy/emmopy/","title":"emmopy","text":""},{"location":"api_reference/emmopy/emmopy/#emmopy.emmopy--emmopyemmopy","title":"emmopy.emmopy
","text":"Automagically retrieve the EMMO utilizing ontopy.get_ontology
.
get_emmo(inferred=True)
","text":"Returns the current version of emmo.
Parameters:
inferred (Optional[bool], default True): Whether to import the inferred version of emmo or not. Default is True.
Returns:
Ontology: The loaded emmo ontology.
Source code in emmopy/emmopy.py
def get_emmo(inferred: Optional[bool] = True) -> \"Ontology\":\n \"\"\"Returns the current version of emmo.\n\n Args:\n inferred: Whether to import the inferred version of emmo or not.\n Default is True.\n\n Returns:\n The loaded emmo ontology.\n\n \"\"\"\n name = \"emmo-inferred\" if inferred in [True, None] else \"emmo\"\n return get_ontology(name).load(prefix_emmo=True)\n
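A hedged usage sketch, assuming EMMOntoPy is installed and the EMMO ontology can be resolved:

from emmopy.emmopy import get_emmo

emmo = get_emmo()                         # inferred version (default)
emmo_asserted = get_emmo(inferred=False)  # loads "emmo" instead of "emmo-inferred"
print(emmo.base_iri)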
"},{"location":"api_reference/ontopy/colortest/","title":"colortest","text":""},{"location":"api_reference/ontopy/colortest/#ontopy.colortest--ontopycolortest","title":"ontopy.colortest
","text":"Print tests in colors.
Adapted from https://github.com/meshy/colour-runner by Charlie Denton. License: MIT.
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult","title":" ColourTextTestResult (TestResult)
","text":"A test result class that prints colour formatted text results to a stream.
Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py
Source code in ontopy/colortest.py
class ColourTextTestResult(TestResult):\n \"\"\"\n A test result class that prints colour formatted text results to a stream.\n\n Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py\n \"\"\"\n\n formatter = formatters.Terminal256Formatter() # pylint: disable=no-member\n lexer = Lexer()\n separator1 = \"=\" * 70\n separator2 = \"-\" * 70\n indent = \" \" * 4\n # if `checkmode` is true, simplified output will be generated with\n # no traceback\n checkmode = False\n _terminal = Terminal()\n colours = {\n None: str,\n \"error\": _terminal.bold_red,\n \"expected\": _terminal.blue,\n # \"fail\": _terminal.bold_yellow,\n \"fail\": _terminal.bold_magenta,\n \"skip\": str,\n \"success\": _terminal.green,\n \"title\": _terminal.blue,\n \"unexpected\": _terminal.bold_red,\n }\n\n _test_class = None\n\n def __init__(self, stream, descriptions, verbosity):\n super().__init__(stream, descriptions, verbosity)\n self.stream = stream\n self.show_all = verbosity > 1\n self.dots = verbosity == 1\n self.descriptions = descriptions\n\n def getShortDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return self.indent + doc_first_line\n return self.indent + test._testMethodName\n\n def getLongDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return \"\\n\".join((str(test), doc_first_line))\n return str(test)\n\n def getClassDescription(self, test):\n test_class = test.__class__\n doc = test_class.__doc__\n if self.descriptions and doc:\n return doc.split(\"\\n\")[0].strip()\n return strclass(test_class)\n\n def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... 
')\n self.stream.flush()\n\n def printResult(self, short, extended, colour_key=None):\n colour = self.colours[colour_key]\n if self.show_all:\n self.stream.writeln(colour(extended))\n elif self.dots:\n self.stream.write(colour(short))\n self.stream.flush()\n\n def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n\n def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n\n def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n\n def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n\n def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n\n def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n\n def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n\n def printErrorList(self, flavour, errors):\n colour = self.colours[flavour.lower()]\n\n for test, err in errors:\n if self.checkmode and flavour == \"FAIL\":\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {test.shortDescription()}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(str(test))\n if self.show_all:\n self.stream.writeln(self.separator2)\n lines = str(err).split(\"\\n\")\n i = 1\n for line in lines[1:]:\n if line.startswith(\" \"):\n i += 1\n else:\n break\n self.stream.writeln(\n highlight(\n \"\\n\".join(lines[i:]), self.lexer, self.formatter\n )\n )\n else:\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {self.getLongDescription(test)}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(self.separator2)\n self.stream.writeln(highlight(err, self.lexer, self.formatter))\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addError","title":"addError(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addExpectedFailure","title":"addExpectedFailure(self, test, err)
","text":"Called when an expected failure/error occurred.
Source code in ontopy/colortest.py
def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addFailure","title":"addFailure(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addSkip","title":"addSkip(self, test, reason)
","text":"Called when a test is skipped.
Source code in ontopy/colortest.py
def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addSuccess","title":"addSuccess(self, test)
","text":"Called when a test has completed successfully
Source code in ontopy/colortest.py
def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addUnexpectedSuccess","title":"addUnexpectedSuccess(self, test)
","text":"Called when a test was expected to fail, but succeed.
Source code in ontopy/colortest.py
def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.printErrors","title":"printErrors(self)
","text":"Called by TestRunner after test run
Source code in ontopy/colortest.py
def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.startTest","title":"startTest(self, test)
","text":"Called when the given test is about to be run
Source code in ontopy/colortest.py
def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... ')\n self.stream.flush()\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner","title":" ColourTextTestRunner (TextTestRunner)
","text":"A test runner that uses colour in its output.
Source code in ontopy/colortest.py
class ColourTextTestRunner(\n TextTestRunner\n): # pylint: disable=too-few-public-methods\n \"\"\"A test runner that uses colour in its output.\"\"\"\n\n resultclass = ColourTextTestResult\n
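A hedged usage sketch of the runner with a standard unittest suite (MyTests is a hypothetical test case, and the colour dependencies of ontopy.colortest are assumed to be installed):

import unittest
from ontopy.colortest import ColourTextTestRunner

class MyTests(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
ColourTextTestRunner(verbosity=2).run(suite)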
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass","title":" resultclass (TestResult)
","text":"A test result class that prints colour formatted text results to a stream.
Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py
Source code in ontopy/colortest.py
class ColourTextTestResult(TestResult):\n \"\"\"\n A test result class that prints colour formatted text results to a stream.\n\n Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py\n \"\"\"\n\n formatter = formatters.Terminal256Formatter() # pylint: disable=no-member\n lexer = Lexer()\n separator1 = \"=\" * 70\n separator2 = \"-\" * 70\n indent = \" \" * 4\n # if `checkmode` is true, simplified output will be generated with\n # no traceback\n checkmode = False\n _terminal = Terminal()\n colours = {\n None: str,\n \"error\": _terminal.bold_red,\n \"expected\": _terminal.blue,\n # \"fail\": _terminal.bold_yellow,\n \"fail\": _terminal.bold_magenta,\n \"skip\": str,\n \"success\": _terminal.green,\n \"title\": _terminal.blue,\n \"unexpected\": _terminal.bold_red,\n }\n\n _test_class = None\n\n def __init__(self, stream, descriptions, verbosity):\n super().__init__(stream, descriptions, verbosity)\n self.stream = stream\n self.show_all = verbosity > 1\n self.dots = verbosity == 1\n self.descriptions = descriptions\n\n def getShortDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return self.indent + doc_first_line\n return self.indent + test._testMethodName\n\n def getLongDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return \"\\n\".join((str(test), doc_first_line))\n return str(test)\n\n def getClassDescription(self, test):\n test_class = test.__class__\n doc = test_class.__doc__\n if self.descriptions and doc:\n return doc.split(\"\\n\")[0].strip()\n return strclass(test_class)\n\n def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... 
')\n self.stream.flush()\n\n def printResult(self, short, extended, colour_key=None):\n colour = self.colours[colour_key]\n if self.show_all:\n self.stream.writeln(colour(extended))\n elif self.dots:\n self.stream.write(colour(short))\n self.stream.flush()\n\n def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n\n def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n\n def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n\n def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n\n def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n\n def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n\n def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n\n def printErrorList(self, flavour, errors):\n colour = self.colours[flavour.lower()]\n\n for test, err in errors:\n if self.checkmode and flavour == \"FAIL\":\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {test.shortDescription()}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(str(test))\n if self.show_all:\n self.stream.writeln(self.separator2)\n lines = str(err).split(\"\\n\")\n i = 1\n for line in lines[1:]:\n if line.startswith(\" \"):\n i += 1\n else:\n break\n self.stream.writeln(\n highlight(\n \"\\n\".join(lines[i:]), self.lexer, self.formatter\n )\n )\n else:\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {self.getLongDescription(test)}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(self.separator2)\n self.stream.writeln(highlight(err, self.lexer, self.formatter))\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addError","title":"addError(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addExpectedFailure","title":"addExpectedFailure(self, test, err)
","text":"Called when an expected failure/error occurred.
Source code in ontopy/colortest.py
def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addFailure","title":"addFailure(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addSkip","title":"addSkip(self, test, reason)
","text":"Called when a test is skipped.
Source code in ontopy/colortest.py
def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addSuccess","title":"addSuccess(self, test)
","text":"Called when a test has completed successfully
Source code in ontopy/colortest.py
def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addUnexpectedSuccess","title":"addUnexpectedSuccess(self, test)
","text":"Called when a test was expected to fail, but succeed.
Source code in ontopy/colortest.py
def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.printErrors","title":"printErrors(self)
","text":"Called by TestRunner after test run
Source code in ontopy/colortest.py
def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.startTest","title":"startTest(self, test)
","text":"Called when the given test is about to be run
Source code in ontopy/colortest.py
def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... ')\n self.stream.flush()\n
"},{"location":"api_reference/ontopy/excelparser/","title":"excelparser","text":"Module from parsing an excelfile and creating an ontology from it.
The excelfile is read by pandas and the pandas dataframe should have column names: prefLabel, altLabel, Elucidation, Comments, Examples, subClassOf, Relations.
Note that correct case is mandatory.
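A minimal sketch of a concepts dataframe using the expected column names; the two concept rows are made-up examples, and a real workbook would also contain the other sheets described below:

import pandas as pd

conceptdata = pd.DataFrame(
    {
        "prefLabel": ["Battery", "BatteryCell"],
        "altLabel": ["", "Cell"],
        "Elucidation": ["A device storing energy.", "A single electrochemical cell."],
        "Comments": ["", ""],
        "Examples": ["", ""],
        "subClassOf": ["", "Battery"],
        "Relations": ["", ""],
    }
)
print(conceptdata.columns.tolist())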
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.ExcelError","title":" ExcelError (EMMOntoPyException)
","text":"Raised on errors in Excel file.
Source code in ontopy/excelparser.py
class ExcelError(EMMOntoPyException):\n \"\"\"Raised on errors in Excel file.\"\"\"\n
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.create_ontology_from_excel","title":"create_ontology_from_excel(excelpath, concept_sheet_name='Concepts', metadata_sheet_name='Metadata', imports_sheet_name='ImportedOntologies', dataproperties_sheet_name='DataProperties', objectproperties_sheet_name='ObjectProperties', annotationproperties_sheet_name='AnnotationProperties', base_iri='http://emmo.info/emmo/domain/onto#', base_iri_from_metadata=True, imports=None, catalog=None, force=False, input_ontology=None)
","text":"Creates an ontology from an Excel-file.
Parameters:
excelpath (str, required): Path to Excel workbook.
concept_sheet_name (str, default 'Concepts'): Name of sheet where concepts are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel', 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subClassOf', 'Relations'. Multiple entries are separated with ';'.
metadata_sheet_name (str, default 'Metadata'): Name of sheet where metadata are defined. The first row contains column names 'Metadata name' and 'Value'. Supported 'Metadata names' are: 'Ontology IRI', 'Ontology vesion IRI', 'Ontology version Info', 'Title', 'Abstract', 'License', 'Comment', 'Author', 'Contributor'. Multiple entries are separated with a semi-colon (;).
imports_sheet_name (str, default 'ImportedOntologies'): Name of sheet where imported ontologies are defined. Column name is 'Imported ontologies'. Fully resolvable URL or path to imported ontologies provided one per row.
dataproperties_sheet_name (str, default 'DataProperties'): Name of sheet where data properties are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel', 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf', 'Domain', 'Range', 'dijointWith', 'equivalentTo'.
annotationproperties_sheet_name (str, default 'AnnotationProperties'): Name of sheet where annotation properties are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel', 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf', 'Domain', 'Range'.
objectproperties_sheet_name (str, default 'ObjectProperties'): Name of sheet where object properties are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel', 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf', 'Domain', 'Range', 'inverseOf', 'dijointWith', 'equivalentTo'.
base_iri (str, default 'http://emmo.info/emmo/domain/onto#'): Base IRI of the new ontology.
base_iri_from_metadata (bool, default True): Whether to use base IRI defined from metadata.
imports (list, default None): List of imported ontologies.
catalog (dict, default None): Imported ontologies with (name, full path) key/value-pairs.
force (bool, default False): Forcibly make an ontology by skipping concepts that are erroneously defined or other errors in the excel sheet.
input_ontology (Optional[ontopy.ontology.Ontology], default None): Ontology that should be updated. Default is None, which means that a completely new ontology is generated. If an input_ontology to be updated is provided, the metadata sheet in the excel sheet will not be considered.
Returns:
A tuple with the created ontology, the associated catalog of ontology names and resolvable paths as a dict, and a dictionary with lists of entities that raise errors; the full set of error keys is documented in the source code below.
Source code in ontopy/excelparser.py
def create_ontology_from_excel( # pylint: disable=too-many-arguments, too-many-locals\n excelpath: str,\n concept_sheet_name: str = \"Concepts\",\n metadata_sheet_name: str = \"Metadata\",\n imports_sheet_name: str = \"ImportedOntologies\",\n dataproperties_sheet_name: str = \"DataProperties\",\n objectproperties_sheet_name: str = \"ObjectProperties\",\n annotationproperties_sheet_name: str = \"AnnotationProperties\",\n base_iri: str = \"http://emmo.info/emmo/domain/onto#\",\n base_iri_from_metadata: bool = True,\n imports: list = None,\n catalog: dict = None,\n force: bool = False,\n input_ontology: Union[ontopy.ontology.Ontology, None] = None,\n) -> Tuple[ontopy.ontology.Ontology, dict, dict]:\n \"\"\"\n Creates an ontology from an Excel-file.\n\n Arguments:\n excelpath: Path to Excel workbook.\n concept_sheet_name: Name of sheet where concepts are defined.\n The second row of this sheet should contain column names that are\n supported. Currently these are 'prefLabel','altLabel',\n 'Elucidation', 'Comments', 'Examples', 'subClassOf', 'Relations'.\n Multiple entries are separated with ';'.\n metadata_sheet_name: Name of sheet where metadata are defined.\n The first row contains column names 'Metadata name' and 'Value'\n Supported 'Metadata names' are: 'Ontology IRI',\n 'Ontology vesion IRI', 'Ontology version Info', 'Title',\n 'Abstract', 'License', 'Comment', 'Author', 'Contributor'.\n Multiple entries are separated with a semi-colon (`;`).\n imports_sheet_name: Name of sheet where imported ontologies are\n defined.\n Column name is 'Imported ontologies'.\n Fully resolvable URL or path to imported ontologies provided one\n per row.\n dataproperties_sheet_name: Name of sheet where data properties are\n defined. The second row of this sheet should contain column names\n that are supported. Currently these are 'prefLabel','altLabel',\n 'Elucidation', 'Comments', 'Examples', 'subPropertyOf',\n 'Domain', 'Range', 'dijointWith', 'equivalentTo'.\n annotationproperties_sheet_name: Name of sheet where annotation\n properties are defined. The second row of this sheet should contain\n column names that are supported. Currently these are 'prefLabel',\n 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf',\n 'Domain', 'Range'.\n objectproperties_sheet_name: Name of sheet where object properties are\n defined.The second row of this sheet should contain column names\n that are supported. 
Currently these are 'prefLabel','altLabel',\n 'Elucidation', 'Comments', 'Examples', 'subPropertyOf',\n 'Domain', 'Range', 'inverseOf', 'dijointWith', 'equivalentTo'.\n base_iri: Base IRI of the new ontology.\n base_iri_from_metadata: Whether to use base IRI defined from metadata.\n imports: List of imported ontologies.\n catalog: Imported ontologies with (name, full path) key/value-pairs.\n force: Forcibly make an ontology by skipping concepts\n that are erroneously defined or other errors in the excel sheet.\n input_ontology: Ontology that should be updated.\n Default is None,\n which means that a completely new ontology is generated.\n If an input_ontology to be updated is provided,\n the metadata sheet in the excel sheet will not be considered.\n\n\n Returns:\n A tuple with the:\n\n * created ontology\n * associated catalog of ontology names and resolvable path as dict\n * a dictionary with lists of concepts that raise errors, with the\n following keys:\n\n - \"already_defined\": These are concepts (classes)\n that are already in the\n ontology, because they were already added in a\n previous line of the excelfile/pandas dataframe, or because\n it is already defined in an imported ontology with the same\n base_iri as the newly created ontology.\n - \"in_imported_ontologies\": Concepts (classes)\n that are defined in the\n excel, but already exist in the imported ontologies.\n - \"wrongly_defined\": Concepts (classes) that are given an\n invalid prefLabel (e.g. with a space in the name).\n - \"missing_subClassOf\": Concepts (classes) that are missing\n parents. These concepts are added directly under owl:Thing.\n - \"invalid_subClassOf\": Concepts (classes) with invalidly\n defined parents.\n These concepts are added directly under owl:Thing.\n - \"nonadded_concepts\": List of all concepts (classes) that are\n not added,\n either because the prefLabel is invalid, or because the\n concept has already been added once or already exists in an\n imported ontology.\n - \"obj_prop_already_defined\": Object properties that are already\n defined in the ontology.\n - \"obj_prop_in_imported_ontologies\": Object properties that are\n defined in the excel, but already exist in the imported\n ontologies.\n - \"obj_prop_wrongly_defined\": Object properties that are given\n an invalid prefLabel (e.g. with a space in the name).\n - \"obj_prop_missing_subPropertyOf\": Object properties that are\n missing parents.\n - \"obj_prop_invalid_subPropertyOf\": Object properties with\n invalidly defined parents.\n - \"obj_prop_nonadded_entities\": List of all object properties\n that are not added, either because the prefLabel is invalid,\n or because the concept has already been added once or\n already exists in an imported ontology.\n - \"obj_prop_errors_in_properties\": Object properties with\n invalidly defined properties.\n - \"obj_prop_errors_in_range\": Object properties with invalidly\n defined range.\n - \"obj_prop_errors_in_domain\": Object properties with invalidly\n defined domain.\n - \"annot_prop_already_defined\": Annotation properties that are\n already defined in the ontology.\n - \"annot_prop_in_imported_ontologies\": Annotation properties\n that\n are defined in the excel, but already exist in the imported\n ontologies.\n - \"annot_prop_wrongly_defined\": Annotation properties that are\n given an invalid prefLabel (e.g. 
with a space in the name).\n - \"annot_prop_missing_subPropertyOf\": Annotation properties that\n are missing parents.\n - \"annot_prop_invalid_subPropertyOf\": Annotation properties with\n invalidly defined parents.\n - \"annot_prop_nonadded_entities\": List of all annotation\n properties that are not added, either because the prefLabel\n is invalid, or because the concept has already been added\n once or already exists in an imported ontology.\n - \"annot_prop_errors_in_properties\": Annotation properties with\n invalidly defined properties.\n - \"data_prop_already_defined\": Data properties that are already\n defined in the ontology.\n - \"data_prop_in_imported_ontologies\": Data properties that are\n defined in the excel, but already exist in the imported\n ontologies.\n - \"data_prop_wrongly_defined\": Data properties that are given\n an invalid prefLabel (e.g. with a space in the name).\n - \"data_prop_missing_subPropertyOf\": Data properties that are\n missing parents.\n - \"data_prop_invalid_subPropertyOf\": Data properties with\n invalidly defined parents.\n - \"data_prop_nonadded_entities\": List of all data properties\n that are not added, either because the prefLabel is invalid,\n or because the concept has already been added once or\n already exists in an imported ontology.\n - \"data_prop_errors_in_properties\": Data properties with\n invalidly defined properties.\n - \"data_prop_errors_in_range\": Data properties with invalidly\n defined range.\n - \"data_prop_errors_in_domain\": Data properties with invalidly\n defined domain.\n\n \"\"\"\n web_protocol = \"http://\", \"https://\", \"ftp://\"\n\n def _relative_to_absolute_paths(path):\n if isinstance(path, str):\n if not path.startswith(web_protocol):\n path = os.path.dirname(excelpath) + \"/\" + str(path)\n return path\n\n try:\n imports = pd.read_excel(\n excelpath, sheet_name=imports_sheet_name, skiprows=[1]\n )\n except ValueError:\n imports = pd.DataFrame()\n else:\n # Strip leading and trailing white spaces in paths\n imports.replace(r\"^\\s+\", \"\", regex=True).replace(\n r\"\\s+$\", \"\", regex=True\n )\n # Set empty strings to nan\n imports = imports.replace(r\"^\\s*$\", np.nan, regex=True)\n if \"Imported ontologies\" in imports.columns:\n imports[\"Imported ontologies\"] = imports[\n \"Imported ontologies\"\n ].apply(_relative_to_absolute_paths)\n\n # Read datafile TODO: Some magic to identify the header row\n conceptdata = pd.read_excel(\n excelpath, sheet_name=concept_sheet_name, skiprows=[0, 2]\n )\n try:\n objectproperties = pd.read_excel(\n excelpath, sheet_name=objectproperties_sheet_name, skiprows=[0, 2]\n )\n if \"prefLabel\" not in objectproperties.columns:\n warnings.warn(\n \"The 'prefLabel' column is missing in \"\n f\"{objectproperties_sheet_name}. \"\n \"New object properties will not be added to the ontology.\"\n )\n objectproperties = None\n except ValueError:\n warnings.warn(\n f\"No sheet named {objectproperties_sheet_name} found \"\n f\"in {excelpath}. \"\n \"New object properties will not be added to the ontology.\"\n )\n objectproperties = None\n try:\n annotationproperties = pd.read_excel(\n excelpath,\n sheet_name=annotationproperties_sheet_name,\n skiprows=[0, 2],\n )\n if \"prefLabel\" not in annotationproperties.columns:\n warnings.warn(\n \"The 'prefLabel' column is missing in \"\n f\"{annotationproperties_sheet_name}. 
\"\n \"New annotation properties will not be added to the ontology.\"\n )\n annotationproperties = None\n except ValueError:\n warnings.warn(\n f\"No sheet named {annotationproperties_sheet_name} \"\n f\"found in {excelpath}. \"\n \"New annotation properties will not be added to the ontology.\"\n )\n annotationproperties = None\n\n try:\n dataproperties = pd.read_excel(\n excelpath, sheet_name=dataproperties_sheet_name, skiprows=[0, 2]\n )\n if \"prefLabel\" not in dataproperties.columns:\n warnings.warn(\n \"The 'prefLabel' column is missing in \"\n f\"{dataproperties_sheet_name}. \"\n \"New data properties will not be added to the ontology.\"\n )\n dataproperties = None\n except ValueError:\n warnings.warn(\n f\"No sheet named {dataproperties_sheet_name} found in {excelpath}. \"\n \"New data properties will not be added to the ontology.\"\n )\n dataproperties = None\n\n metadata = pd.read_excel(excelpath, sheet_name=metadata_sheet_name)\n return create_ontology_from_pandas(\n data=conceptdata,\n objectproperties=objectproperties,\n dataproperties=dataproperties,\n annotationproperties=annotationproperties,\n metadata=metadata,\n imports=imports,\n base_iri=base_iri,\n base_iri_from_metadata=base_iri_from_metadata,\n catalog=catalog,\n force=force,\n input_ontology=input_ontology,\n )\n
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.create_ontology_from_pandas","title":"create_ontology_from_pandas(data, objectproperties, annotationproperties, dataproperties, metadata, imports, base_iri='http://emmo.info/emmo/domain/onto#', base_iri_from_metadata=True, catalog=None, force=False, input_ontology=None)
","text":"Create an ontology from a pandas DataFrame.
Check 'create_ontology_from_excel' for complete documentation.
Source code in ontopy/excelparser.py
def create_ontology_from_pandas( # pylint:disable=too-many-locals,too-many-branches,too-many-statements,too-many-arguments\n data: pd.DataFrame,\n objectproperties: pd.DataFrame,\n annotationproperties: pd.DataFrame,\n dataproperties: pd.DataFrame,\n metadata: pd.DataFrame,\n imports: pd.DataFrame,\n base_iri: str = \"http://emmo.info/emmo/domain/onto#\",\n base_iri_from_metadata: bool = True,\n catalog: dict = None,\n force: bool = False,\n input_ontology: Union[ontopy.ontology.Ontology, None] = None,\n) -> Tuple[ontopy.ontology.Ontology, dict]:\n \"\"\"\n Create an ontology from a pandas DataFrame.\n\n Check 'create_ontology_from_excel' for complete documentation.\n \"\"\"\n # Get ontology to which new concepts should be added\n if input_ontology:\n onto = input_ontology\n catalog = {}\n else: # Create new ontology\n onto, catalog = get_metadata_from_dataframe(\n metadata, base_iri, imports=imports\n )\n\n # Set given or default base_iri if base_iri_from_metadata is False.\n if not base_iri_from_metadata:\n onto.base_iri = base_iri\n # onto.sync_python_names()\n # prefLabel, label, and altLabel\n # are default label annotations\n onto.set_default_label_annotations()\n # Add object properties\n if objectproperties is not None:\n objectproperties = _clean_dataframe(objectproperties)\n (\n onto,\n objectproperties_with_errors,\n added_objprop_indices,\n ) = _add_entities(\n onto=onto,\n data=objectproperties,\n entitytype=owlready2.ObjectPropertyClass,\n force=force,\n )\n\n if annotationproperties is not None:\n annotationproperties = _clean_dataframe(annotationproperties)\n (\n onto,\n annotationproperties_with_errors,\n added_annotprop_indices,\n ) = _add_entities(\n onto=onto,\n data=annotationproperties,\n entitytype=owlready2.AnnotationPropertyClass,\n force=force,\n )\n\n if dataproperties is not None:\n dataproperties = _clean_dataframe(dataproperties)\n (\n onto,\n dataproperties_with_errors,\n added_dataprop_indices,\n ) = _add_entities(\n onto=onto,\n data=dataproperties,\n entitytype=owlready2.DataPropertyClass,\n force=force,\n )\n onto.sync_attributes(\n name_policy=\"uuid\", name_prefix=\"EMMO_\", class_docstring=\"elucidation\"\n )\n # Clean up data frame with new concepts\n data = _clean_dataframe(data)\n # Add entities\n onto, entities_with_errors, added_concept_indices = _add_entities(\n onto=onto, data=data, entitytype=owlready2.ThingClass, force=force\n )\n\n # Add entity properties in a second loop\n for index in added_concept_indices:\n row = data.loc[index]\n properties = row[\"Relations\"]\n if properties == \"nan\":\n properties = None\n if isinstance(properties, str):\n try:\n entity = onto.get_by_label(row[\"prefLabel\"].strip())\n except NoSuchLabelError:\n pass\n props = properties.split(\";\")\n for prop in props:\n try:\n entity.is_a.append(evaluate(onto, prop.strip()))\n except pyparsing.ParseException as exc:\n warnings.warn(\n # This is currently not tested\n f\"Error in Property assignment for: '{entity}'. \"\n f\"Property to be Evaluated: '{prop}'. \"\n f\"{exc}\"\n )\n entities_with_errors[\"errors_in_properties\"].append(\n entity.name\n )\n except NoSuchLabelError as exc:\n msg = (\n f\"Error in Property assignment for: {entity}. \"\n f\"Property to be Evaluated: {prop}. 
\"\n f\"{exc}\"\n )\n if force is True:\n warnings.warn(msg)\n entities_with_errors[\"errors_in_properties\"].append(\n entity.name\n )\n else:\n raise ExcelError(msg) from exc\n\n # Add range and domain for object properties\n if objectproperties is not None:\n onto, objectproperties_with_errors = _add_range_domain(\n onto=onto,\n properties=objectproperties,\n added_prop_indices=added_objprop_indices,\n properties_with_errors=objectproperties_with_errors,\n force=force,\n )\n for key, value in objectproperties_with_errors.items():\n entities_with_errors[\"obj_prop_\" + key] = value\n # Add range and domain for annotation properties\n if annotationproperties is not None:\n onto, annotationproperties_with_errors = _add_range_domain(\n onto=onto,\n properties=annotationproperties,\n added_prop_indices=added_annotprop_indices,\n properties_with_errors=annotationproperties_with_errors,\n force=force,\n )\n for key, value in annotationproperties_with_errors.items():\n entities_with_errors[\"annot_prop_\" + key] = value\n\n # Add range and domain for data properties\n if dataproperties is not None:\n onto, dataproperties_with_errors = _add_range_domain(\n onto=onto,\n properties=dataproperties,\n added_prop_indices=added_dataprop_indices,\n properties_with_errors=dataproperties_with_errors,\n force=force,\n )\n for key, value in dataproperties_with_errors.items():\n entities_with_errors[\"data_prop_\" + key] = value\n\n # Synchronise Python attributes to ontology\n onto.sync_attributes(\n name_policy=\"uuid\", name_prefix=\"EMMO_\", class_docstring=\"elucidation\"\n )\n onto.dir_label = False\n entities_with_errors = {\n key: set(value) for key, value in entities_with_errors.items()\n }\n return onto, catalog, entities_with_errors\n
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.get_metadata_from_dataframe","title":"get_metadata_from_dataframe(metadata, base_iri, base_iri_from_metadata=True, imports=None, catalog=None)
","text":"Create ontology with metadata from pd.DataFrame
Source code in ontopy/excelparser.py
def get_metadata_from_dataframe( # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n metadata: pd.DataFrame,\n base_iri: str,\n base_iri_from_metadata: bool = True,\n imports: pd.DataFrame = None,\n catalog: dict = None,\n) -> Tuple[ontopy.ontology.Ontology, dict]:\n \"\"\"Create ontology with metadata from pd.DataFrame\"\"\"\n\n # base_iri from metadata if it exists and base_iri_from_metadata\n if base_iri_from_metadata:\n try:\n base_iris = _parse_literal(metadata, \"Ontology IRI\", metadata=True)\n if len(base_iris) > 1:\n warnings.warn(\n \"More than one Ontology IRI given. The first was chosen.\"\n )\n base_iri = base_iris[0] + \"#\"\n except (TypeError, ValueError, AttributeError, IndexError):\n pass\n\n # Create new ontology\n onto = get_ontology(base_iri)\n\n # Add imported ontologies\n catalog = {} if catalog is None else catalog\n locations = set()\n for _, row in imports.iterrows():\n # for location in imports:\n location = row[\"Imported ontologies\"]\n if not pd.isna(location) and location not in locations:\n imported = onto.world.get_ontology(location).load()\n onto.imported_ontologies.append(imported)\n catalog[imported.base_iri.rstrip(\"#/\")] = location\n try:\n cat = read_catalog(location.rsplit(\"/\", 1)[0])\n catalog.update(cat)\n except ReadCatalogError:\n warnings.warn(f\"Catalog for {imported} not found.\")\n locations.add(location)\n # set defined prefix\n if not pd.isna(row[\"prefix\"]):\n # set prefix for all ontologies with same 'base_iri_root'\n if not pd.isna(row[\"base_iri_root\"]):\n onto.set_common_prefix(\n iri_base=row[\"base_iri_root\"], prefix=row[\"prefix\"]\n )\n # If base_root not given, set prefix only to top ontology\n else:\n imported.prefix = row[\"prefix\"]\n\n with onto:\n # Add title\n try:\n _add_literal(\n metadata,\n onto.metadata.title,\n \"Title\",\n metadata=True,\n only_one=True,\n )\n except AttributeError:\n pass\n\n # Add license\n try:\n _add_literal(\n metadata, onto.metadata.license, \"License\", metadata=True\n )\n except AttributeError:\n pass\n\n # Add authors/creators\n try:\n _add_literal(\n metadata, onto.metadata.creator, \"Author\", metadata=True\n )\n except AttributeError:\n pass\n\n # Add contributors\n try:\n _add_literal(\n metadata,\n onto.metadata.contributor,\n \"Contributor\",\n metadata=True,\n )\n except AttributeError:\n pass\n\n # Add versionInfo\n try:\n _add_literal(\n metadata,\n onto.metadata.versionInfo,\n \"Ontology version Info\",\n metadata=True,\n only_one=True,\n )\n except AttributeError:\n pass\n return onto, catalog\n
"},{"location":"api_reference/ontopy/graph/","title":"graph","text":"A module for visualising ontologies using graphviz.
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph","title":" OntoGraph
","text":"Class for visualising an ontology.
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph--parameters","title":"Parameters","text":"ontology : ontopy.Ontology instance Ontology to visualize. root : None | graph.ALL | string | owlready2.ThingClass instance Name or owlready2 entity of root node to plot subgraph below. If root
is graph.ALL
, all classes will be included in the subgraph. leaves : None | sequence A sequence of leaf node names for generating sub-graphs. entities : None | sequence A sequence of entities to add to the graph. relations : \"all\" | str | None | sequence Sequence of relations to visualise. If \"all\", means to include all relations. style : None | dict | \"default\" A dict mapping the name of the different graphical elements to dicts of dot graph attributes. Supported graphical elements include: - graphtype : \"Digraph\" | \"Graph\" - graph : graph attributes (G) - class : nodes for classes (N) - root : additional attributes for root nodes (N) - leaf : additional attributes for leaf nodes (N) - defined_class : nodes for defined classes (N) - class_construct : nodes for class constructs (N) - individual : nodes for invididuals (N) - object_property : nodes for object properties (N) - data_property : nodes for data properties (N) - annotation_property : nodes for annotation properties (N) - added_node : nodes added because addnodes
is true (N) - isA : edges for isA relations (E) - not : edges for not class constructs (E) - equivalent_to : edges for equivalent_to relations (E) - disjoint_with : edges for disjoint_with relations (E) - inverse_of : edges for inverse_of relations (E) - default_relation : default edges relations and restrictions (E) - relations : dict of styles for different relations (E) - inverse : default edges for inverse relations (E) - default_dataprop : default edges for data properties (E) - nodes : attribute for individual nodes (N) - edges : attribute for individual edges (E) If style is None or \"default\", the default style is used. See https://www.graphviz.org/doc/info/attrs.html edgelabels : None | bool | dict Whether to add labels to the edges of the generated graph. It is also possible to provide a dict mapping the full labels (with cardinality stripped off for restrictions) to some abbreviations. addnodes : bool Whether to add missing target nodes in relations. addconstructs : bool Whether to add nodes representing class constructs. included_namespaces : sequence In combination with root
, only include classes with one of the listed namespaces. If empty (the default), nothing is excluded. included_ontologies : sequence In combination with root
, only include classes defined in one of the listed ontologies. If empty (default), nothing is excluded. parents : int Include parents
levels of parents. excluded_nodes : None | sequence Sequence of labels of nodes to exclude. graph : None | pydot.Dot instance Graphviz Digraph object to plot into. If None, a new graph object is created using the keyword arguments. imported : bool Whether to include imported classes if entities
is None. kwargs : Passed to graphviz.Digraph.
ontopy/graph.py
class OntoGraph: # pylint: disable=too-many-instance-attributes\n \"\"\"Class for visualising an ontology.\n\n Parameters\n ----------\n ontology : ontopy.Ontology instance\n Ontology to visualize.\n root : None | graph.ALL | string | owlready2.ThingClass instance\n Name or owlready2 entity of root node to plot subgraph\n below. If `root` is `graph.ALL`, all classes will be included\n in the subgraph.\n leaves : None | sequence\n A sequence of leaf node names for generating sub-graphs.\n entities : None | sequence\n A sequence of entities to add to the graph.\n relations : \"all\" | str | None | sequence\n Sequence of relations to visualise. If \"all\", means to include\n all relations.\n style : None | dict | \"default\"\n A dict mapping the name of the different graphical elements\n to dicts of dot graph attributes. Supported graphical elements\n include:\n - graphtype : \"Digraph\" | \"Graph\"\n - graph : graph attributes (G)\n - class : nodes for classes (N)\n - root : additional attributes for root nodes (N)\n - leaf : additional attributes for leaf nodes (N)\n - defined_class : nodes for defined classes (N)\n - class_construct : nodes for class constructs (N)\n - individual : nodes for invididuals (N)\n - object_property : nodes for object properties (N)\n - data_property : nodes for data properties (N)\n - annotation_property : nodes for annotation properties (N)\n - added_node : nodes added because `addnodes` is true (N)\n - isA : edges for isA relations (E)\n - not : edges for not class constructs (E)\n - equivalent_to : edges for equivalent_to relations (E)\n - disjoint_with : edges for disjoint_with relations (E)\n - inverse_of : edges for inverse_of relations (E)\n - default_relation : default edges relations and restrictions (E)\n - relations : dict of styles for different relations (E)\n - inverse : default edges for inverse relations (E)\n - default_dataprop : default edges for data properties (E)\n - nodes : attribute for individual nodes (N)\n - edges : attribute for individual edges (E)\n If style is None or \"default\", the default style is used.\n See https://www.graphviz.org/doc/info/attrs.html\n edgelabels : None | bool | dict\n Whether to add labels to the edges of the generated graph.\n It is also possible to provide a dict mapping the\n full labels (with cardinality stripped off for restrictions)\n to some abbreviations.\n addnodes : bool\n Whether to add missing target nodes in relations.\n addconstructs : bool\n Whether to add nodes representing class constructs.\n included_namespaces : sequence\n In combination with `root`, only include classes with one of\n the listed namespaces. If empty (the default), nothing is\n excluded.\n included_ontologies : sequence\n In combination with `root`, only include classes defined in\n one of the listed ontologies. If empty (default), nothing is\n excluded.\n parents : int\n Include `parents` levels of parents.\n excluded_nodes : None | sequence\n Sequence of labels of nodes to exclude.\n graph : None | pydot.Dot instance\n Graphviz Digraph object to plot into. 
If None, a new graph object\n is created using the keyword arguments.\n imported : bool\n Whether to include imported classes if `entities` is None.\n kwargs :\n Passed to graphviz.Digraph.\n \"\"\"\n\n def __init__( # pylint: disable=too-many-arguments,too-many-locals\n self,\n ontology,\n root=None,\n leaves=None,\n entities=None,\n relations=\"isA\",\n style=None,\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n included_namespaces=(),\n included_ontologies=(),\n parents=0,\n excluded_nodes=None,\n graph=None,\n imported=False,\n **kwargs,\n ):\n if style is None or style == \"default\":\n style = _default_style\n\n if graph is None:\n graphtype = style.get(\"graphtype\", \"Digraph\")\n dotcls = getattr(graphviz, graphtype)\n graph_attr = kwargs.pop(\"graph_attr\", {})\n for key, value in style.get(\"graph\", {}).items():\n graph_attr.setdefault(key, value)\n self.dot = dotcls(graph_attr=graph_attr, **kwargs)\n self.nodes = set()\n self.edges = set()\n else:\n if ontology != graph.ontology:\n raise ValueError(\n \"the same ontology must be used when extending a graph\"\n )\n self.dot = graph.dot.copy()\n self.nodes = graph.nodes.copy()\n self.edges = graph.edges.copy()\n\n self.ontology = ontology\n self.relations = set(\n [relations] if isinstance(relations, str) else relations\n )\n self.style = style\n self.edgelabels = edgelabels\n self.addnodes = addnodes\n self.addconstructs = addconstructs\n self.excluded_nodes = set(excluded_nodes) if excluded_nodes else set()\n self.imported = imported\n\n if root == ALL:\n self.add_entities(\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n )\n elif root:\n self.add_branch(\n root,\n leaves,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n if parents:\n self.add_parents(\n root,\n levels=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n )\n\n if entities:\n self.add_entities(\n entities=entities,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n )\n\n def add_entities( # pylint: disable=too-many-arguments\n self,\n entities=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n nodeattrs=None,\n **attrs,\n ):\n \"\"\"Adds a sequence of entities to the graph. If `entities` is None,\n all classes are added to the graph.\n\n `nodeattrs` is a dict mapping node names to are attributes for\n dedicated nodes.\n \"\"\"\n if entities is None:\n entities = self.ontology.classes(imported=self.imported)\n self.add_nodes(entities, nodeattrs=nodeattrs, **attrs)\n self.add_edges(\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n\n def add_branch( # pylint: disable=too-many-arguments,too-many-locals\n self,\n root,\n leaves=None,\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n included_namespaces=(),\n included_ontologies=(),\n include_parents=\"closest\",\n **attrs,\n ):\n \"\"\"Adds branch under `root` ending at any entity included in the\n sequence `leaves`. 
If `include_leaves` is true, leaf classes are\n also included.\"\"\"\n if leaves is None:\n leaves = ()\n classes = self.ontology.get_branch(\n root=root,\n leaves=leaves,\n include_leaves=include_leaves,\n strict_leaves=strict_leaves,\n exclude=exclude,\n )\n\n classes = filter_classes(\n classes,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n nodeattrs = {}\n nodeattrs[get_label(root)] = self.style.get(\"root\", {})\n for leaf in leaves:\n nodeattrs[get_label(leaf)] = self.style.get(\"leaf\", {})\n\n self.add_entities(\n entities=classes,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n closest_ancestors = False\n ancestor_generations = None\n if include_parents == \"closest\":\n closest_ancestors = True\n elif isinstance(include_parents, int):\n ancestor_generations = include_parents\n parents = self.ontology.get_ancestors(\n classes,\n closest=closest_ancestors,\n generations=ancestor_generations,\n strict=True,\n )\n if parents:\n for parent in parents:\n nodeattrs[get_label(parent)] = self.style.get(\"parent_node\", {})\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n\n def add_parents( # pylint: disable=too-many-arguments\n self,\n name,\n levels=1,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n **attrs,\n ):\n \"\"\"Add `levels` levels of strict parents of entity `name`.\"\"\"\n\n def addparents(entity, nodes, parents):\n if nodes > 0:\n for parent in entity.get_parents(strict=True):\n parents.add(parent)\n addparents(parent, nodes - 1, parents)\n\n entity = self.ontology[name] if isinstance(name, str) else name\n parents = set()\n addparents(entity, levels, parents)\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n\n def add_node(self, name, nodeattrs=None, **attrs):\n \"\"\"Add node with given name. `attrs` are graphviz node attributes.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes.union(self.excluded_nodes):\n kwargs = self.get_node_attrs(\n entity, nodeattrs=nodeattrs, attrs=attrs\n )\n if hasattr(entity, \"iri\"):\n kwargs.setdefault(\"URL\", entity.iri)\n self.dot.node(label, label=label, **kwargs)\n self.nodes.add(label)\n\n def add_nodes(self, names, nodeattrs, **attrs):\n \"\"\"Add nodes with given names. 
`attrs` are graphviz node attributes.\"\"\"\n for name in names:\n self.add_node(name, nodeattrs=nodeattrs, **attrs)\n\n def add_edge(self, subject, predicate, obj, edgelabel=None, **attrs):\n \"\"\"Add edge corresponding for ``(subject, predicate, object)``\n triplet.\"\"\"\n subject = subject if isinstance(subject, str) else get_label(subject)\n predicate = (\n predicate if isinstance(predicate, str) else get_label(predicate)\n )\n obj = obj if isinstance(obj, str) else get_label(obj)\n if subject in self.excluded_nodes or obj in self.excluded_nodes:\n return\n if not isinstance(subject, str) or not isinstance(obj, str):\n raise TypeError(\"`subject` and `object` must be strings\")\n if subject not in self.nodes:\n raise RuntimeError(f'`subject` \"{subject}\" must have been added')\n if obj not in self.nodes:\n raise RuntimeError(f'`object` \"{obj}\" must have been added')\n key = (subject, predicate, obj)\n if key not in self.edges:\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n if (edgelabel is None) and (\n (predicate in rels) or (predicate == \"isA\")\n ):\n edgelabel = self.edgelabels\n label = None\n if edgelabel is None:\n tokens = predicate.split()\n if len(tokens) == 2 and tokens[1] in (\"some\", \"only\"):\n label = f\"{tokens[0]} {tokens[1]}\"\n elif len(tokens) == 3 and tokens[1] in (\n \"exactly\",\n \"min\",\n \"max\",\n ):\n label = f\"{tokens[0]} {tokens[1]} {tokens[2]}\"\n elif isinstance(edgelabel, str):\n label = edgelabel\n elif isinstance(edgelabel, dict):\n label = edgelabel.get(predicate, predicate)\n elif edgelabel:\n label = predicate\n kwargs = self.get_edge_attrs(predicate, attrs=attrs)\n self.dot.edge(subject, obj, label=label, **kwargs)\n self.edges.add(key)\n\n def add_source_edges( # pylint: disable=too-many-arguments,too-many-branches\n self,\n source,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n ):\n \"\"\"Adds all relations originating from entity `source` who's type\n are listed in `relations`.\"\"\"\n if relations is None:\n relations = self.relations\n elif isinstance(relations, str):\n relations = set([relations])\n else:\n relations = set(relations)\n\n edgelabels = self.edgelabels if edgelabels is None else edgelabels\n addconstructs = (\n self.addconstructs if addconstructs is None else addconstructs\n )\n\n entity = self.ontology[source] if isinstance(source, str) else source\n label = get_label(entity)\n for relation in entity.is_a:\n # isA\n if isinstance(\n relation, (owlready2.ThingClass, owlready2.ObjectPropertyClass)\n ):\n if \"all\" in relations or \"isA\" in relations:\n rlabel = get_label(relation)\n # FIXME - we actually want to include individuals...\n if isinstance(entity, owlready2.Thing):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n self.add_edge(\n subject=label,\n predicate=\"isA\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n\n # restriction\n elif isinstance(relation, owlready2.Restriction):\n rname = get_label(relation.property)\n if \"all\" in relations or rname in relations:\n rlabel = f\"{rname} {typenames[relation.type]}\"\n if isinstance(relation.value, owlready2.ThingClass):\n obj = get_label(relation.value)\n if not self.add_missing_node(relation.value, addnodes):\n continue\n elif (\n isinstance(relation.value, owlready2.ClassConstruct)\n and self.addconstructs\n ):\n obj = 
self.add_class_construct(relation.value)\n else:\n continue\n pred = asstring(\n relation, exclude_object=True, ontology=self.ontology\n )\n self.add_edge(\n label, pred, obj, edgelabel=edgelabels, **attrs\n )\n\n # inverse\n if isinstance(relation, owlready2.Inverse):\n if \"all\" in relations or \"inverse\" in relations:\n rlabel = get_label(relation)\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n self.add_edge(\n subject=label,\n predicate=\"inverse\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n\n def add_edges( # pylint: disable=too-many-arguments\n self,\n sources=None,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n ):\n \"\"\"Adds all relations originating from entities `sources` who's type\n are listed in `relations`. If `sources` is None, edges are added\n between all current nodes.\"\"\"\n if sources is None:\n sources = self.nodes\n for source in sources.copy():\n self.add_source_edges(\n source,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n\n def add_missing_node(self, name, addnodes=None):\n \"\"\"Checks if `name` corresponds to a missing node and add it if\n `addnodes` is true.\n\n Returns true if the node exists or is added, false otherwise.\"\"\"\n addnodes = self.addnodes if addnodes is None else addnodes\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes:\n if addnodes:\n self.add_node(entity, **self.style.get(\"added_node\", {}))\n else:\n return False\n return True\n\n def add_class_construct(self, construct):\n \"\"\"Adds class construct and return its label.\"\"\"\n self.add_node(construct, **self.style.get(\"class_construct\", {}))\n label = get_label(construct)\n if isinstance(construct, owlready2.Or):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(get_label(cls), \"isA\", label)\n elif isinstance(construct, owlready2.And):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(label, \"isA\", get_label(cls))\n elif isinstance(construct, owlready2.Not):\n clslabel = get_label(construct.Class)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(construct.Class)\n if clslabel in self.nodes:\n self.add_edge(clslabel, \"not\", label)\n # Neither and nor inverse constructs are\n return label\n\n def get_node_attrs(self, name, nodeattrs, attrs):\n \"\"\"Returns attributes for node or edge `name`. 
`attrs` overrides\n the default style.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n # class\n if isinstance(entity, owlready2.ThingClass):\n if entity.is_defined:\n kwargs = self.style.get(\"defined_class\", {})\n else:\n kwargs = self.style.get(\"class\", {})\n # class construct\n elif isinstance(entity, owlready2.ClassConstruct):\n kwargs = self.style.get(\"class_construct\", {})\n # individual\n elif isinstance(entity, owlready2.Thing):\n kwargs = self.style.get(\"individual\", {})\n # object property\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n kwargs = self.style.get(\"object_property\", {})\n # data property\n elif isinstance(entity, owlready2.DataPropertyClass):\n kwargs = self.style.get(\"data_property\", {})\n # annotation property\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n kwargs = self.style.get(\"annotation_property\", {})\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs = kwargs.copy()\n kwargs.update(self.style.get(\"nodes\", {}).get(label, {}))\n if nodeattrs:\n kwargs.update(nodeattrs.get(label, {}))\n kwargs.update(attrs)\n return kwargs\n\n def _relation_styles(\n self, entity: ThingClass, relations: dict, rels: set\n ) -> dict:\n \"\"\"Helper function that returns the styles of the relations\n to be used.\n\n Parameters:\n entity: the entity of the parent relation\n relations: relations with default styles\n rels: relations to be considered that have default styles,\n either for the prefLabel or one of the altLabels\n \"\"\"\n for relation in entity.mro():\n if relation in rels:\n if str(get_label(relation)) in relations:\n rattrs = relations[str(get_label(relation))]\n else:\n for alt_label in relation.get_annotations()[\"altLabel\"]:\n rattrs = relations[str(alt_label)]\n\n break\n else:\n warnings.warn(\n f\"Style not defined for relation {get_label(entity)}. \"\n \"Resorting to default style.\"\n )\n rattrs = self.style.get(\"default_relation\", {})\n return rattrs\n\n def get_edge_attrs(self, predicate: str, attrs: dict) -> dict:\n \"\"\"Returns attributes for node or edge `predicate`. 
`attrs` overrides\n the default style.\n\n Parameters:\n predicate: predicate to get attributes for\n attrs: desired attributes to override default\n \"\"\"\n # given type\n types = (\"isA\", \"equivalent_to\", \"disjoint_with\", \"inverse_of\")\n if predicate in types:\n kwargs = self.style.get(predicate, {}).copy()\n else:\n kwargs = {}\n name = predicate.split(None, 1)[0]\n match = re.match(r\"Inverse\\((.*)\\)\", name)\n if match:\n (name,) = match.groups()\n attrs = attrs.copy()\n for key, value in self.style.get(\"inverse\", {}).items():\n attrs.setdefault(key, value)\n if not isinstance(name, str) or name in self.ontology:\n entity = self.ontology[name] if isinstance(name, str) else name\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n rattrs = self._relation_styles(entity, relations, rels)\n\n # object property\n if isinstance(\n entity,\n (owlready2.ObjectPropertyClass, owlready2.ObjectProperty),\n ):\n kwargs = self.style.get(\"default_relation\", {}).copy()\n kwargs.update(rattrs)\n # data property\n elif isinstance(\n entity,\n (owlready2.DataPropertyClass, owlready2.DataProperty),\n ):\n kwargs = self.style.get(\"default_dataprop\", {}).copy()\n kwargs.update(rattrs)\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs.update(self.style.get(\"edges\", {}).get(predicate, {}))\n kwargs.update(attrs)\n return kwargs\n\n def add_legend(self, relations=None):\n \"\"\"Adds legend for specified relations to the graph.\n\n If `relations` is \"all\", the legend will contain all relations\n that are defined in the style. By default the legend will\n only contain relations that are currently included in the\n graph.\n\n Hence, you usually want to call add_legend() as the last method\n before saving or displaying.\n\n Relations with defined style will be bold in legend.\n Relations that have inherited style from parent relation\n will not be bold.\n \"\"\"\n rels = self.style.get(\"relations\", {})\n if relations is None:\n relations = self.get_relations(sort=True)\n elif relations == \"all\":\n relations = [\"isA\"] + list(rels.keys()) + [\"inverse\"]\n elif isinstance(relations, str):\n relations = relations.split(\",\")\n\n nrelations = len(relations)\n if nrelations == 0:\n return\n\n table = (\n '<<table border=\"0\" cellpadding=\"2\" cellspacing=\"0\" cellborder=\"0\">'\n )\n label1 = [table]\n label2 = [table]\n for index, relation in enumerate(relations):\n if (relation in rels) or (relation == \"isA\"):\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\"><b>{relation}</b></td></tr>'\n )\n else:\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\">{relation}</td></tr>'\n )\n label2.append(f'<tr><td port=\"i{index}\"> </td></tr>')\n label1.append(\"</table>>\")\n label2.append(\"</table>>\")\n self.dot.node(\"key1\", label=\"\\n\".join(label1), shape=\"plaintext\")\n self.dot.node(\"key2\", label=\"\\n\".join(label2), shape=\"plaintext\")\n\n rankdir = self.dot.graph_attr.get(\"rankdir\", \"TB\")\n constraint = \"false\" if rankdir in (\"TB\", \"BT\") else \"true\"\n inv = rankdir in (\"BT\",)\n\n for index in range(nrelations):\n relation = (\n relations[nrelations - 1 - index] if inv else relations[index]\n )\n if relation == \"inverse\":\n kwargs = self.style.get(\"inverse\", {}).copy()\n else:\n kwargs = self.get_edge_attrs(relation, {}).copy()\n kwargs[\"constraint\"] = constraint\n with self.dot.subgraph(name=f\"sub{index}\") as subgraph:\n 
subgraph.attr(rank=\"same\")\n if rankdir in (\"BT\", \"LR\"):\n self.dot.edge(\n f\"key1:i{index}:e\", f\"key2:i{index}:w\", **kwargs\n )\n else:\n self.dot.edge(\n f\"key2:i{index}:w\", f\"key1:i{index}:e\", **kwargs\n )\n\n def get_relations(self, sort=True):\n \"\"\"Returns a set of relations in current graph. If `sort` is true,\n a sorted list is returned.\"\"\"\n relations = set()\n for _, predicate, _ in self.edges:\n if predicate.startswith(\"Inverse\"):\n relations.add(\"inverse\")\n match = re.match(r\"Inverse\\((.+)\\)\", predicate)\n if match is None:\n raise ValueError(\n \"Could unexpectedly not find the inverse relation \"\n f\"just added in: {predicate}\"\n )\n relations.add(match.groups()[0])\n else:\n relations.add(predicate.split(None, 1)[0])\n\n # Sort, but place 'isA' first and 'inverse' last\n if sort:\n start, end = [], []\n if \"isA\" in relations:\n relations.remove(\"isA\")\n start.append(\"isA\")\n if \"inverse\" in relations:\n relations.remove(\"inverse\")\n end.append(\"inverse\")\n relations = start + sorted(relations) + end\n\n return relations\n\n def save(self, filename, fmt=None, **kwargs):\n \"\"\"Saves graph to `filename`. If format is not given, it is\n inferred from `filename`.\"\"\"\n base = os.path.splitext(filename)[0]\n fmt = get_format(filename, default=\"svg\", fmt=fmt)\n kwargs.setdefault(\"cleanup\", True)\n if fmt in (\"graphviz\", \"gv\"):\n if \"dictionary\" in kwargs:\n self.dot.save(filename, dictionary=kwargs[\"dictionary\"])\n else:\n self.dot.save(filename)\n else:\n fmt = kwargs.pop(\"format\", fmt)\n self.dot.render(base, format=fmt, **kwargs)\n\n def view(self):\n \"\"\"Shows the graph in a viewer.\"\"\"\n self.dot.view(cleanup=True)\n\n def get_figsize(self):\n \"\"\"Returns the default figure size (width, height) in points.\"\"\"\n with tempfile.TemporaryDirectory() as tmpdir:\n tmpfile = os.path.join(tmpdir, \"graph.svg\")\n self.save(tmpfile)\n xml = ET.parse(tmpfile)\n svg = xml.getroot()\n width = svg.attrib[\"width\"]\n height = svg.attrib[\"height\"]\n if not width.endswith(\"pt\"):\n # ensure that units are in points\n raise ValueError(\n \"The width attribute should always be given in 'pt', \"\n f\"but it is: {width}\"\n )\n\n def asfloat(string):\n return float(re.match(r\"^[\\d.]+\", string).group())\n\n return asfloat(width), asfloat(height)\n
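A short usage sketch (the EMMO IRI and the 'Atom' label are assumptions; the constructor arguments are described in the parameter list above):

    from ontopy import get_ontology
    from ontopy.graph import OntoGraph

    emmo = get_ontology('https://w3id.org/emmo').load()  # assumed EMMO IRI
    graph = OntoGraph(
        emmo,
        root=emmo.get_by_label('Atom'),
        relations='all',
        addnodes=True,
        parents=1,
    )
    graph.add_legend()
    graph.save('atom.svg')  # format inferred from the file extension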
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_branch","title":"add_branch(self, root, leaves=None, include_leaves=True, strict_leaves=False, exclude=None, relations='isA', edgelabels=None, addnodes=False, addconstructs=False, included_namespaces=(), included_ontologies=(), include_parents='closest', **attrs)
","text":"Adds branch under root
ending at any entity included in the sequence leaves
. If include_leaves
is true, leaf classes are also included.
ontopy/graph.py
def add_branch( # pylint: disable=too-many-arguments,too-many-locals\n self,\n root,\n leaves=None,\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n included_namespaces=(),\n included_ontologies=(),\n include_parents=\"closest\",\n **attrs,\n):\n \"\"\"Adds branch under `root` ending at any entity included in the\n sequence `leaves`. If `include_leaves` is true, leaf classes are\n also included.\"\"\"\n if leaves is None:\n leaves = ()\n classes = self.ontology.get_branch(\n root=root,\n leaves=leaves,\n include_leaves=include_leaves,\n strict_leaves=strict_leaves,\n exclude=exclude,\n )\n\n classes = filter_classes(\n classes,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n nodeattrs = {}\n nodeattrs[get_label(root)] = self.style.get(\"root\", {})\n for leaf in leaves:\n nodeattrs[get_label(leaf)] = self.style.get(\"leaf\", {})\n\n self.add_entities(\n entities=classes,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n closest_ancestors = False\n ancestor_generations = None\n if include_parents == \"closest\":\n closest_ancestors = True\n elif isinstance(include_parents, int):\n ancestor_generations = include_parents\n parents = self.ontology.get_ancestors(\n classes,\n closest=closest_ancestors,\n generations=ancestor_generations,\n strict=True,\n )\n if parents:\n for parent in parents:\n nodeattrs[get_label(parent)] = self.style.get(\"parent_node\", {})\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n
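For example, assuming `graph` is the OntoGraph instance from the class-level example above and that 'Item' and 'Atom' are class labels in the ontology, the branch below 'Item' can be drawn down to, and including, 'Atom':

    graph.add_branch(
        root=emmo.get_by_label('Item'),
        leaves=('Atom',),              # leaf node names, as in the parameter list
        include_parents='closest',
    )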
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_class_construct","title":"add_class_construct(self, construct)
","text":"Adds class construct and return its label.
Source code inontopy/graph.py
def add_class_construct(self, construct):\n \"\"\"Adds class construct and return its label.\"\"\"\n self.add_node(construct, **self.style.get(\"class_construct\", {}))\n label = get_label(construct)\n if isinstance(construct, owlready2.Or):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(get_label(cls), \"isA\", label)\n elif isinstance(construct, owlready2.And):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(label, \"isA\", get_label(cls))\n elif isinstance(construct, owlready2.Not):\n clslabel = get_label(construct.Class)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(construct.Class)\n if clslabel in self.nodes:\n self.add_edge(clslabel, \"not\", label)\n # Neither and nor inverse constructs are\n return label\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_edge","title":"add_edge(self, subject, predicate, obj, edgelabel=None, **attrs)
","text":"Add edge corresponding for (subject, predicate, object)
triplet.
ontopy/graph.py
def add_edge(self, subject, predicate, obj, edgelabel=None, **attrs):\n \"\"\"Add edge corresponding for ``(subject, predicate, object)``\n triplet.\"\"\"\n subject = subject if isinstance(subject, str) else get_label(subject)\n predicate = (\n predicate if isinstance(predicate, str) else get_label(predicate)\n )\n obj = obj if isinstance(obj, str) else get_label(obj)\n if subject in self.excluded_nodes or obj in self.excluded_nodes:\n return\n if not isinstance(subject, str) or not isinstance(obj, str):\n raise TypeError(\"`subject` and `object` must be strings\")\n if subject not in self.nodes:\n raise RuntimeError(f'`subject` \"{subject}\" must have been added')\n if obj not in self.nodes:\n raise RuntimeError(f'`object` \"{obj}\" must have been added')\n key = (subject, predicate, obj)\n if key not in self.edges:\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n if (edgelabel is None) and (\n (predicate in rels) or (predicate == \"isA\")\n ):\n edgelabel = self.edgelabels\n label = None\n if edgelabel is None:\n tokens = predicate.split()\n if len(tokens) == 2 and tokens[1] in (\"some\", \"only\"):\n label = f\"{tokens[0]} {tokens[1]}\"\n elif len(tokens) == 3 and tokens[1] in (\n \"exactly\",\n \"min\",\n \"max\",\n ):\n label = f\"{tokens[0]} {tokens[1]} {tokens[2]}\"\n elif isinstance(edgelabel, str):\n label = edgelabel\n elif isinstance(edgelabel, dict):\n label = edgelabel.get(predicate, predicate)\n elif edgelabel:\n label = predicate\n kwargs = self.get_edge_attrs(predicate, attrs=attrs)\n self.dot.edge(subject, obj, label=label, **kwargs)\n self.edges.add(key)\n
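Both endpoints must already have been added to the graph, and the predicate may be a plain relation label or a restriction string such as 'hasPart some'. A small hedged example (the class and relation labels are assumptions, and `graph` is the OntoGraph instance from the class-level example above):

    graph.add_node('Atom')
    graph.add_node('Electron')
    graph.add_edge('Atom', 'hasPart some', 'Electron')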
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_edges","title":"add_edges(self, sources=None, relations=None, edgelabels=None, addnodes=None, addconstructs=None, **attrs)
","text":"Adds all relations originating from entities sources
who's type are listed in relations
. If sources
is None, edges are added between all current nodes.
ontopy/graph.py
def add_edges( # pylint: disable=too-many-arguments\n self,\n sources=None,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n):\n \"\"\"Adds all relations originating from entities `sources` who's type\n are listed in `relations`. If `sources` is None, edges are added\n between all current nodes.\"\"\"\n if sources is None:\n sources = self.nodes\n for source in sources.copy():\n self.add_source_edges(\n source,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_entities","title":"add_entities(self, entities=None, relations='isA', edgelabels=None, addnodes=False, addconstructs=False, nodeattrs=None, **attrs)
","text":"Adds a sequence of entities to the graph. If entities
is None, all classes are added to the graph.
nodeattrs
is a dict mapping node names to are attributes for dedicated nodes.
ontopy/graph.py
def add_entities( # pylint: disable=too-many-arguments\n self,\n entities=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n nodeattrs=None,\n **attrs,\n):\n \"\"\"Adds a sequence of entities to the graph. If `entities` is None,\n all classes are added to the graph.\n\n `nodeattrs` is a dict mapping node names to are attributes for\n dedicated nodes.\n \"\"\"\n if entities is None:\n entities = self.ontology.classes(imported=self.imported)\n self.add_nodes(entities, nodeattrs=nodeattrs, **attrs)\n self.add_edges(\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_legend","title":"add_legend(self, relations=None)
","text":"Adds legend for specified relations to the graph.
If relations
is \"all\", the legend will contain all relations that are defined in the style. By default the legend will only contain relations that are currently included in the graph.
Hence, you usually want to call add_legend() as the last method before saving or displaying.
Relations with defined style will be bold in legend. Relations that have inherited style from parent relation will not be bold.
Source code in ontopy/graph.py
def add_legend(self, relations=None):\n \"\"\"Adds legend for specified relations to the graph.\n\n If `relations` is \"all\", the legend will contain all relations\n that are defined in the style. By default the legend will\n only contain relations that are currently included in the\n graph.\n\n Hence, you usually want to call add_legend() as the last method\n before saving or displaying.\n\n Relations with defined style will be bold in legend.\n Relations that have inherited style from parent relation\n will not be bold.\n \"\"\"\n rels = self.style.get(\"relations\", {})\n if relations is None:\n relations = self.get_relations(sort=True)\n elif relations == \"all\":\n relations = [\"isA\"] + list(rels.keys()) + [\"inverse\"]\n elif isinstance(relations, str):\n relations = relations.split(\",\")\n\n nrelations = len(relations)\n if nrelations == 0:\n return\n\n table = (\n '<<table border=\"0\" cellpadding=\"2\" cellspacing=\"0\" cellborder=\"0\">'\n )\n label1 = [table]\n label2 = [table]\n for index, relation in enumerate(relations):\n if (relation in rels) or (relation == \"isA\"):\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\"><b>{relation}</b></td></tr>'\n )\n else:\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\">{relation}</td></tr>'\n )\n label2.append(f'<tr><td port=\"i{index}\"> </td></tr>')\n label1.append(\"</table>>\")\n label2.append(\"</table>>\")\n self.dot.node(\"key1\", label=\"\\n\".join(label1), shape=\"plaintext\")\n self.dot.node(\"key2\", label=\"\\n\".join(label2), shape=\"plaintext\")\n\n rankdir = self.dot.graph_attr.get(\"rankdir\", \"TB\")\n constraint = \"false\" if rankdir in (\"TB\", \"BT\") else \"true\"\n inv = rankdir in (\"BT\",)\n\n for index in range(nrelations):\n relation = (\n relations[nrelations - 1 - index] if inv else relations[index]\n )\n if relation == \"inverse\":\n kwargs = self.style.get(\"inverse\", {}).copy()\n else:\n kwargs = self.get_edge_attrs(relation, {}).copy()\n kwargs[\"constraint\"] = constraint\n with self.dot.subgraph(name=f\"sub{index}\") as subgraph:\n subgraph.attr(rank=\"same\")\n if rankdir in (\"BT\", \"LR\"):\n self.dot.edge(\n f\"key1:i{index}:e\", f\"key2:i{index}:w\", **kwargs\n )\n else:\n self.dot.edge(\n f\"key2:i{index}:w\", f\"key1:i{index}:e\", **kwargs\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_missing_node","title":"add_missing_node(self, name, addnodes=None)
","text":"Checks if name
corresponds to a missing node and add it if addnodes
is true.
Returns true if the node exists or is added, false otherwise.
Source code inontopy/graph.py
def add_missing_node(self, name, addnodes=None):\n \"\"\"Checks if `name` corresponds to a missing node and add it if\n `addnodes` is true.\n\n Returns true if the node exists or is added, false otherwise.\"\"\"\n addnodes = self.addnodes if addnodes is None else addnodes\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes:\n if addnodes:\n self.add_node(entity, **self.style.get(\"added_node\", {}))\n else:\n return False\n return True\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_node","title":"add_node(self, name, nodeattrs=None, **attrs)
","text":"Add node with given name. attrs
are graphviz node attributes.
ontopy/graph.py
def add_node(self, name, nodeattrs=None, **attrs):\n \"\"\"Add node with given name. `attrs` are graphviz node attributes.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes.union(self.excluded_nodes):\n kwargs = self.get_node_attrs(\n entity, nodeattrs=nodeattrs, attrs=attrs\n )\n if hasattr(entity, \"iri\"):\n kwargs.setdefault(\"URL\", entity.iri)\n self.dot.node(label, label=label, **kwargs)\n self.nodes.add(label)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_nodes","title":"add_nodes(self, names, nodeattrs, **attrs)
","text":"Add nodes with given names. attrs
are graphviz node attributes.
ontopy/graph.py
def add_nodes(self, names, nodeattrs, **attrs):\n \"\"\"Add nodes with given names. `attrs` are graphviz node attributes.\"\"\"\n for name in names:\n self.add_node(name, nodeattrs=nodeattrs, **attrs)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_parents","title":"add_parents(self, name, levels=1, relations='isA', edgelabels=None, addnodes=False, addconstructs=False, **attrs)
","text":"Add levels
levels of strict parents of entity name
.
ontopy/graph.py
def add_parents( # pylint: disable=too-many-arguments\n self,\n name,\n levels=1,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n **attrs,\n):\n \"\"\"Add `levels` levels of strict parents of entity `name`.\"\"\"\n\n def addparents(entity, nodes, parents):\n if nodes > 0:\n for parent in entity.get_parents(strict=True):\n parents.add(parent)\n addparents(parent, nodes - 1, parents)\n\n entity = self.ontology[name] if isinstance(name, str) else name\n parents = set()\n addparents(entity, levels, parents)\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n
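For example, assuming `graph` is the OntoGraph instance from the class-level example above and 'Atom' is a class label in the ontology, two generations of parents can be added with:

    graph.add_parents('Atom', levels=2)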
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_source_edges","title":"add_source_edges(self, source, relations=None, edgelabels=None, addnodes=None, addconstructs=None, **attrs)
","text":"Adds all relations originating from entity source
who's type are listed in relations
.
ontopy/graph.py
def add_source_edges( # pylint: disable=too-many-arguments,too-many-branches\n self,\n source,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n):\n \"\"\"Adds all relations originating from entity `source` who's type\n are listed in `relations`.\"\"\"\n if relations is None:\n relations = self.relations\n elif isinstance(relations, str):\n relations = set([relations])\n else:\n relations = set(relations)\n\n edgelabels = self.edgelabels if edgelabels is None else edgelabels\n addconstructs = (\n self.addconstructs if addconstructs is None else addconstructs\n )\n\n entity = self.ontology[source] if isinstance(source, str) else source\n label = get_label(entity)\n for relation in entity.is_a:\n # isA\n if isinstance(\n relation, (owlready2.ThingClass, owlready2.ObjectPropertyClass)\n ):\n if \"all\" in relations or \"isA\" in relations:\n rlabel = get_label(relation)\n # FIXME - we actually want to include individuals...\n if isinstance(entity, owlready2.Thing):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n self.add_edge(\n subject=label,\n predicate=\"isA\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n\n # restriction\n elif isinstance(relation, owlready2.Restriction):\n rname = get_label(relation.property)\n if \"all\" in relations or rname in relations:\n rlabel = f\"{rname} {typenames[relation.type]}\"\n if isinstance(relation.value, owlready2.ThingClass):\n obj = get_label(relation.value)\n if not self.add_missing_node(relation.value, addnodes):\n continue\n elif (\n isinstance(relation.value, owlready2.ClassConstruct)\n and self.addconstructs\n ):\n obj = self.add_class_construct(relation.value)\n else:\n continue\n pred = asstring(\n relation, exclude_object=True, ontology=self.ontology\n )\n self.add_edge(\n label, pred, obj, edgelabel=edgelabels, **attrs\n )\n\n # inverse\n if isinstance(relation, owlready2.Inverse):\n if \"all\" in relations or \"inverse\" in relations:\n rlabel = get_label(relation)\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n self.add_edge(\n subject=label,\n predicate=\"inverse\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_edge_attrs","title":"get_edge_attrs(self, predicate, attrs)
","text":"Returns attributes for node or edge predicate
. attrs
overrides the default style.
Parameters:
Name Type Description Defaultpredicate
str
predicate to get attributes for
requiredattrs
dict
desired attributes to override default
required Source code inontopy/graph.py
def get_edge_attrs(self, predicate: str, attrs: dict) -> dict:\n \"\"\"Returns attributes for node or edge `predicate`. `attrs` overrides\n the default style.\n\n Parameters:\n predicate: predicate to get attributes for\n attrs: desired attributes to override default\n \"\"\"\n # given type\n types = (\"isA\", \"equivalent_to\", \"disjoint_with\", \"inverse_of\")\n if predicate in types:\n kwargs = self.style.get(predicate, {}).copy()\n else:\n kwargs = {}\n name = predicate.split(None, 1)[0]\n match = re.match(r\"Inverse\\((.*)\\)\", name)\n if match:\n (name,) = match.groups()\n attrs = attrs.copy()\n for key, value in self.style.get(\"inverse\", {}).items():\n attrs.setdefault(key, value)\n if not isinstance(name, str) or name in self.ontology:\n entity = self.ontology[name] if isinstance(name, str) else name\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n rattrs = self._relation_styles(entity, relations, rels)\n\n # object property\n if isinstance(\n entity,\n (owlready2.ObjectPropertyClass, owlready2.ObjectProperty),\n ):\n kwargs = self.style.get(\"default_relation\", {}).copy()\n kwargs.update(rattrs)\n # data property\n elif isinstance(\n entity,\n (owlready2.DataPropertyClass, owlready2.DataProperty),\n ):\n kwargs = self.style.get(\"default_dataprop\", {}).copy()\n kwargs.update(rattrs)\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs.update(self.style.get(\"edges\", {}).get(predicate, {}))\n kwargs.update(attrs)\n return kwargs\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_figsize","title":"get_figsize(self)
","text":"Returns the default figure size (width, height) in points.
Source code in ontopy/graph.py
def get_figsize(self):\n \"\"\"Returns the default figure size (width, height) in points.\"\"\"\n with tempfile.TemporaryDirectory() as tmpdir:\n tmpfile = os.path.join(tmpdir, \"graph.svg\")\n self.save(tmpfile)\n xml = ET.parse(tmpfile)\n svg = xml.getroot()\n width = svg.attrib[\"width\"]\n height = svg.attrib[\"height\"]\n if not width.endswith(\"pt\"):\n # ensure that units are in points\n raise ValueError(\n \"The width attribute should always be given in 'pt', \"\n f\"but it is: {width}\"\n )\n\n def asfloat(string):\n return float(re.match(r\"^[\\d.]+\", string).group())\n\n return asfloat(width), asfloat(height)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_node_attrs","title":"get_node_attrs(self, name, nodeattrs, attrs)
","text":"Returns attributes for node or edge name
. attrs
overrides the default style.
ontopy/graph.py
def get_node_attrs(self, name, nodeattrs, attrs):\n \"\"\"Returns attributes for node or edge `name`. `attrs` overrides\n the default style.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n # class\n if isinstance(entity, owlready2.ThingClass):\n if entity.is_defined:\n kwargs = self.style.get(\"defined_class\", {})\n else:\n kwargs = self.style.get(\"class\", {})\n # class construct\n elif isinstance(entity, owlready2.ClassConstruct):\n kwargs = self.style.get(\"class_construct\", {})\n # individual\n elif isinstance(entity, owlready2.Thing):\n kwargs = self.style.get(\"individual\", {})\n # object property\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n kwargs = self.style.get(\"object_property\", {})\n # data property\n elif isinstance(entity, owlready2.DataPropertyClass):\n kwargs = self.style.get(\"data_property\", {})\n # annotation property\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n kwargs = self.style.get(\"annotation_property\", {})\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs = kwargs.copy()\n kwargs.update(self.style.get(\"nodes\", {}).get(label, {}))\n if nodeattrs:\n kwargs.update(nodeattrs.get(label, {}))\n kwargs.update(attrs)\n return kwargs\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_relations","title":"get_relations(self, sort=True)
","text":"Returns a set of relations in current graph. If sort
is true, a sorted list is returned.
ontopy/graph.py
def get_relations(self, sort=True):\n \"\"\"Returns a set of relations in current graph. If `sort` is true,\n a sorted list is returned.\"\"\"\n relations = set()\n for _, predicate, _ in self.edges:\n if predicate.startswith(\"Inverse\"):\n relations.add(\"inverse\")\n match = re.match(r\"Inverse\\((.+)\\)\", predicate)\n if match is None:\n raise ValueError(\n \"Could unexpectedly not find the inverse relation \"\n f\"just added in: {predicate}\"\n )\n relations.add(match.groups()[0])\n else:\n relations.add(predicate.split(None, 1)[0])\n\n # Sort, but place 'isA' first and 'inverse' last\n if sort:\n start, end = [], []\n if \"isA\" in relations:\n relations.remove(\"isA\")\n start.append(\"isA\")\n if \"inverse\" in relations:\n relations.remove(\"inverse\")\n end.append(\"inverse\")\n relations = start + sorted(relations) + end\n\n return relations\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.save","title":"save(self, filename, fmt=None, **kwargs)
","text":"Saves graph to filename
. If format is not given, it is inferred from filename
.
Source code in ontopy/graph.py
def save(self, filename, fmt=None, **kwargs):\n \"\"\"Saves graph to `filename`. If format is not given, it is\n inferred from `filename`.\"\"\"\n base = os.path.splitext(filename)[0]\n fmt = get_format(filename, default=\"svg\", fmt=fmt)\n kwargs.setdefault(\"cleanup\", True)\n if fmt in (\"graphviz\", \"gv\"):\n if \"dictionary\" in kwargs:\n self.dot.save(filename, dictionary=kwargs[\"dictionary\"])\n else:\n self.dot.save(filename)\n else:\n fmt = kwargs.pop(\"format\", fmt)\n self.dot.render(base, format=fmt, **kwargs)\n
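A small, hedged sketch (file names are illustrative): the output format is taken from the file extension, or from fmt when no extension is given.
>>> graph.save('atom-branch.svg')         # format inferred from the .svg extension
>>> graph.save('atom-branch', fmt='pdf')  # explicit format when the extension is omitted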
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.view","title":"view(self)
","text":"Shows the graph in a viewer.
Source code in ontopy/graph.py
def view(self):\n \"\"\"Shows the graph in a viewer.\"\"\"\n self.dot.view(cleanup=True)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.check_module_dependencies","title":"check_module_dependencies(modules, verbose=True)
","text":"Check module dependencies and return a copy of modules with redundant dependencies removed.
If verbose
is true, a warning is printed for each module that has redundant dependencies.
If modules
is given, it should be a dict returned by get_module_dependencies().
Source code in ontopy/graph.py
def check_module_dependencies(modules, verbose=True):\n \"\"\"Check module dependencies and return a copy of modules with\n redundant dependencies removed.\n\n If `verbose` is true, warnings are printed for each module that\n\n If `modules` is given, it should be a dict returned by\n get_module_dependencies().\n \"\"\"\n visited = set()\n\n def get_deps(iri, excl=None):\n \"\"\"Returns a set with all dependencies of `iri`, excluding `excl` and\n its dependencies.\"\"\"\n if iri in visited:\n return set()\n visited.add(iri)\n deps = set()\n for dependency in modules[iri]:\n if dependency != excl:\n deps.add(dependency)\n deps.update(get_deps(dependency))\n return deps\n\n mods = {}\n redundant = []\n for iri, deps in modules.items():\n if not deps:\n mods[iri] = set()\n for dep in deps:\n if dep in get_deps(iri, dep):\n redundant.append((iri, dep))\n elif iri in mods:\n mods[iri].add(dep)\n else:\n mods[iri] = set([dep])\n\n if redundant and verbose:\n print(\"** Warning: Redundant module dependency:\")\n for iri, dep in redundant:\n print(f\"{iri} -> {dep}\")\n\n return mods\n
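A minimal sketch of the intended workflow, assuming a loaded ontology with imported modules (get_module_dependencies() is documented further down):
>>> from ontopy import get_ontology
>>> from ontopy.graph import get_module_dependencies, check_module_dependencies
>>> onto = get_ontology().load()                       # an ontology with imports
>>> modules = get_module_dependencies(onto)            # dict: base IRI -> set of imported base IRIs
>>> cleaned = check_module_dependencies(modules, verbose=False)  # redundant dependencies removed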
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.cytoscape_style","title":"cytoscape_style(style=None)
","text":"Get list of color, style and fills.
Source code in ontopy/graph.py
def cytoscape_style(style=None): # pylint: disable=too-many-branches\n \"\"\"Get list of color, style and fills.\"\"\"\n if not style:\n style = _default_style\n colours = {}\n styles = {}\n fill = {}\n for key, value in style.items():\n if isinstance(value, dict):\n if \"color\" in value:\n colours[key] = value[\"color\"]\n else:\n colours[key] = \"black\"\n if \"style\" in value:\n styles[key] = value[\"style\"]\n else:\n styles[key] = \"solid\"\n if \"arrowhead\" in value:\n if value[\"arrowhead\"] == \"empty\":\n fill[key] = \"hollow\"\n else:\n fill[key] = \"filled\"\n\n for key, value in style.get(\"relations\", {}).items():\n if isinstance(value, dict):\n if \"color\" in value:\n colours[key] = value[\"color\"]\n else:\n colours[key] = \"black\"\n if \"style\" in value:\n styles[key] = value[\"style\"]\n else:\n styles[key] = \"solid\"\n if \"arrowhead\" in value:\n if value[\"arrowhead\"] == \"empty\":\n fill[key] = \"hollow\"\n else:\n fill[key] = \"filled\"\n return [colours, styles, fill]\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.cytoscapegraph","title":"cytoscapegraph(graph, onto=None, infobox=None, force=False)
","text":"Returns and instance of icytoscape-figure for an instance Graph of OntoGraph, the accompanying ontology is required for mouse actions.
Parameters:
graph (OntoGraph): graph generated with OntoGraph with edgelabels=True. Required.
onto (Optional[ontopy.ontology.Ontology]): ontology to be used for mouse actions. Default: None.
infobox (str): \"left\" or \"right\". Placement of the infobox with respect to the graph. Default: None.
force (bool): force generation of the graph without correct edgelabels. Default: False.
Returns:
GridspecLayout: CytoscapeWidget with graph and infobox, to be visualised in Jupyter Lab.
Source code in ontopy/graph.py
def cytoscapegraph(\n graph: OntoGraph,\n onto: Optional[Ontology] = None,\n infobox: str = None,\n force: bool = False,\n) -> \"GridspecLayout\":\n # pylint: disable=too-many-locals,too-many-statements\n \"\"\"Returns and instance of icytoscape-figure for an\n instance Graph of OntoGraph, the accompanying ontology\n is required for mouse actions.\n Args:\n graph: graph generated with OntoGraph with edgelabels=True.\n onto: ontology to be used for mouse actions.\n infobox: \"left\" or \"right\". Placement of infbox with\n respect to graph.\n force: force generate graph without correct edgelabels.\n Returns:\n cytoscapewidget with graph and infobox to be visualized\n in jupyter lab.\n\n \"\"\"\n # pylint: disable=import-error,import-outside-toplevel\n from ipywidgets import Output, VBox, GridspecLayout\n from IPython.display import display, Image\n from pathlib import Path\n import networkx as nx\n import pydotplus\n import ipycytoscape\n from networkx.readwrite.json_graph import cytoscape_data\n\n # Define the styles, this has to be aligned with the graphviz values\n dotplus = pydotplus.graph_from_dot_data(graph.dot.source)\n # if graph doesn't have multiedges, use dotplus.set_strict(true)\n pydot_graph = nx.nx_pydot.from_pydot(dotplus)\n\n colours, styles, fill = cytoscape_style()\n\n data = cytoscape_data(pydot_graph)[\"elements\"]\n for datum in data[\"edges\"]:\n try:\n datum[\"data\"][\"label\"] = (\n datum[\"data\"][\"label\"].rsplit(\" \", 1)[0].lstrip('\"')\n )\n except KeyError as err:\n if not force:\n raise EMMOntoPyException(\n \"Edge label is not defined. Are you sure that the OntoGraph\"\n \"instance you provided was generated with \"\n \"\u00b4edgelabels=True\u00b4?\"\n ) from err\n warnings.warn(\n \"ARROWS WILL NOT BE DISPLAYED CORRECTLY. \"\n \"Edge label is not defined. 
Are you sure that the OntoGraph \"\n \"instance you provided was generated with \u00b4edgelabels=True\u00b4?\"\n )\n datum[\"data\"][\"label\"] = \"\"\n\n lab = datum[\"data\"][\"label\"].replace(\"Inverse(\", \"\").rstrip(\")\")\n try:\n datum[\"data\"][\"colour\"] = colours[lab]\n except KeyError:\n datum[\"data\"][\"colour\"] = \"black\"\n try:\n datum[\"data\"][\"style\"] = styles[lab]\n except KeyError:\n datum[\"data\"][\"style\"] = \"solid\"\n if datum[\"data\"][\"label\"].startswith(\"Inverse(\"):\n datum[\"data\"][\"targetarrow\"] = \"diamond\"\n datum[\"data\"][\"sourcearrow\"] = \"none\"\n else:\n datum[\"data\"][\"targetarrow\"] = \"triangle\"\n datum[\"data\"][\"sourcearrow\"] = \"none\"\n try:\n datum[\"data\"][\"fill\"] = fill[lab]\n except KeyError:\n datum[\"data\"][\"fill\"] = \"filled\"\n\n cytofig = ipycytoscape.CytoscapeWidget()\n cytofig.graph.add_graph_from_json(data, directed=True)\n\n cytofig.set_style(\n [\n {\n \"selector\": \"node\",\n \"css\": {\n \"content\": \"data(label)\",\n # \"text-valign\": \"center\",\n # \"color\": \"white\",\n # \"text-outline-width\": 2,\n # \"text-outline-color\": \"red\",\n \"background-color\": \"blue\",\n },\n },\n {\"selector\": \"node:parent\", \"css\": {\"background-opacity\": 0.333}},\n {\n \"selector\": \"edge\",\n \"style\": {\n \"width\": 2,\n \"line-color\": \"data(colour)\",\n # \"content\": \"data(label)\"\",\n \"line-style\": \"data(style)\",\n },\n },\n {\n \"selector\": \"edge.directed\",\n \"style\": {\n \"curve-style\": \"bezier\",\n \"target-arrow-shape\": \"data(targetarrow)\",\n \"target-arrow-color\": \"data(colour)\",\n \"target-arrow-fill\": \"data(fill)\",\n \"mid-source-arrow-shape\": \"data(sourcearrow)\",\n \"mid-source-arrow-color\": \"data(colour)\",\n },\n },\n {\n \"selector\": \"edge.multiple_edges\",\n \"style\": {\"curve-style\": \"bezier\"},\n },\n {\n \"selector\": \":selected\",\n \"css\": {\n \"background-color\": \"black\",\n \"line-color\": \"black\",\n \"target-arrow-color\": \"black\",\n \"source-arrow-color\": \"black\",\n \"text-outline-color\": \"black\",\n },\n },\n ]\n )\n\n if onto is not None:\n out = Output(layout={\"border\": \"1px solid black\"})\n\n def log_clicks(node):\n with out:\n print((onto.get_by_label(node[\"data\"][\"label\"])))\n parent = onto.get_by_label(node[\"data\"][\"label\"]).get_parents()\n print(f\"parents: {parent}\")\n try:\n elucidation = onto.get_by_label(\n node[\"data\"][\"label\"]\n ).elucidation\n print(f\"elucidation: {elucidation[0]}\")\n except (AttributeError, IndexError):\n pass\n\n try:\n annotations = onto.get_by_label(\n node[\"data\"][\"label\"]\n ).annotations\n for _ in annotations:\n print(f\"annotation: {_}\")\n except AttributeError:\n pass\n\n # Try does not work...\n try:\n iri = onto.get_by_label(node[\"data\"][\"label\"]).iri\n print(f\"iri: {iri}\")\n except (AttributeError, IndexError):\n pass\n try:\n fig = node[\"data\"][\"label\"]\n if os.path.exists(Path(fig + \".png\")):\n display(Image(fig + \".png\", width=100))\n elif os.path.exists(Path(fig + \".jpg\")):\n display(Image(fig + \".jpg\", width=100))\n except (AttributeError, IndexError):\n pass\n out.clear_output(wait=True)\n\n def log_mouseovers(node):\n with out:\n print(onto.get_by_label(node[\"data\"][\"label\"]))\n # print(f'mouseover: {pformat(node)}')\n out.clear_output(wait=True)\n\n cytofig.on(\"node\", \"click\", log_clicks)\n cytofig.on(\"node\", \"mouseover\", log_mouseovers) # , remove=True)\n cytofig.on(\"node\", \"mouseout\", out.clear_output(wait=True))\n grid 
= GridspecLayout(1, 3, height=\"400px\")\n if infobox == \"left\":\n grid[0, 0] = out\n grid[0, 1:] = cytofig\n elif infobox == \"right\":\n grid[0, 0:-1] = cytofig\n grid[0, 2] = out\n else:\n return VBox([cytofig, out])\n return grid\n\n return cytofig\n
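A hedged notebook sketch: it assumes ipycytoscape, pydotplus and networkx are installed, that the code runs in Jupyter Lab, and that add_branch() accepts the keyword arguments shown (the root label 'Atom' is illustrative). The graph must be built with edgelabels=True, otherwise force=True is needed.
>>> from ontopy import get_ontology
>>> from ontopy.graph import OntoGraph, cytoscapegraph
>>> onto = get_ontology().load()
>>> graph = OntoGraph(onto)
>>> graph.add_branch(root='Atom', edgelabels=True)     # edge labels are required
>>> cytoscapegraph(graph, onto=onto, infobox='right')  # interactive widget with infobox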
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.filter_classes","title":"filter_classes(classes, included_namespaces=(), included_ontologies=())
","text":"Filter out classes whos namespace is not in included_namespaces
or whos ontology name is not in one of the ontologies in included_ontologies
.
classes
should be a sequence of classes.
Source code in ontopy/graph.py
def filter_classes(classes, included_namespaces=(), included_ontologies=()):\n \"\"\"Filter out classes whos namespace is not in `included_namespaces`\n or whos ontology name is not in one of the ontologies in\n `included_ontologies`.\n\n `classes` should be a sequence of classes.\n \"\"\"\n filtered = set(classes)\n if included_namespaces:\n filtered = set(\n c for c in filtered if c.namespace.name in included_namespaces\n )\n if included_ontologies:\n filtered = set(\n c\n for c in filtered\n if c.namespace.ontology.name in included_ontologies\n )\n return filtered\n
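For illustration, a sketch that keeps only classes defined in an ontology named 'emmo' (the name is an assumption; onto.classes() is used the same way elsewhere in this module):
>>> from ontopy.graph import filter_classes
>>> classes = onto.classes()                            # all classes of a loaded ontology
>>> subset = filter_classes(classes, included_ontologies=('emmo',))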
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.get_module_dependencies","title":"get_module_dependencies(iri_or_onto, strip_base=None)
","text":"Reads iri_or_onto
and returns a dict mapping ontology names to a list of ontologies that they depend on.
If strip_base
is true, the base IRI is stripped from ontology names. If it is a string, it is lstrip'ped from the base IRI.
Source code in ontopy/graph.py
def get_module_dependencies(iri_or_onto, strip_base=None):\n \"\"\"Reads `iri_or_onto` and returns a dict mapping ontology names to a\n list of ontologies that they depends on.\n\n If `strip_base` is true, the base IRI is stripped from ontology\n names. If it is a string, it lstrip'ped from the base iri.\n \"\"\"\n from ontopy.ontology import ( # pylint: disable=import-outside-toplevel\n get_ontology,\n )\n\n if isinstance(iri_or_onto, str):\n onto = get_ontology(iri_or_onto)\n onto.load()\n else:\n onto = iri_or_onto\n\n modules = {onto.base_iri: set()}\n\n def strip(base_iri):\n if isinstance(strip_base, str):\n return base_iri.lstrip(strip_base)\n if strip_base:\n return base_iri.strip(onto.base_iri)\n return base_iri\n\n visited = set()\n\n def setmodules(onto):\n for imported_onto in onto.imported_ontologies:\n if onto.base_iri in modules:\n modules[strip(onto.base_iri)].add(strip(imported_onto.base_iri))\n else:\n modules[strip(onto.base_iri)] = set(\n [strip(imported_onto.base_iri)]\n )\n if imported_onto.base_iri not in modules:\n modules[strip(imported_onto.base_iri)] = set()\n if imported_onto not in visited:\n visited.add(imported_onto)\n setmodules(imported_onto)\n\n setmodules(onto)\n return modules\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.plot_modules","title":"plot_modules(src, filename=None, fmt=None, show=False, strip_base=None, ignore_redundant=True)
","text":"Plot module dependency graph for src
and return a graph object.
Here src
may be an IRI, a path to the ontology or a dict returned by get_module_dependencies().
If filename
is given, write the graph to this file.
If fmt
is None, the output format is inferred from filename
.
If show
is true, the graph is displayed.
strip_base
is passed on to get_module_dependencies() if src
is not a dict.
If ignore_redundant
is true, redundant dependencies are not plotted.
Source code in ontopy/graph.py
def plot_modules( # pylint: disable=too-many-arguments\n src,\n filename=None,\n fmt=None,\n show=False,\n strip_base=None,\n ignore_redundant=True,\n):\n \"\"\"Plot module dependency graph for `src` and return a graph object.\n\n Here `src` may be an IRI, a path the the ontology or a dict returned by\n get_module_dependencies().\n\n If `filename` is given, write the graph to this file.\n\n If `fmt` is None, the output format is inferred from `filename`.\n\n If `show` is true, the graph is displayed.\n\n `strip_base` is passed on to get_module_dependencies() if `src` is not\n a dict.\n\n If `ignore_redundant` is true, redundant dependencies are not plotted.\n \"\"\"\n if isinstance(src, dict):\n modules = src\n else:\n modules = get_module_dependencies(src, strip_base=strip_base)\n\n if ignore_redundant:\n modules = check_module_dependencies(modules, verbose=False)\n\n dot = graphviz.Digraph(comment=\"Module dependencies\")\n dot.attr(rankdir=\"TB\")\n dot.node_attr.update(\n style=\"filled\", fillcolor=\"lightblue\", shape=\"box\", edgecolor=\"blue\"\n )\n dot.edge_attr.update(arrowtail=\"open\", dir=\"back\")\n\n for iri in modules.keys():\n iriname = iri.split(\":\", 1)[1]\n dot.node(iriname, label=iri, URL=iri)\n\n for iri, deps in modules.items():\n for dep in deps:\n iriname = iri.split(\":\", 1)[1]\n depname = dep.split(\":\", 1)[1]\n dot.edge(depname, iriname)\n\n if filename:\n base, ext = os.path.splitext(filename)\n if fmt is None:\n fmt = ext.lstrip(\".\")\n dot.render(base, format=fmt, view=False, cleanup=True)\n\n if show:\n dot.view(cleanup=True)\n\n return dot\n
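A minimal sketch (the file name is illustrative and rendering requires the Graphviz binaries). An already loaded ontology also works as src, since non-dict arguments are forwarded to get_module_dependencies():
>>> from ontopy.graph import plot_modules
>>> dot = plot_modules(onto, filename='modules.png', show=False)  # writes modules.png
>>> # dot is the underlying graphviz.Digraph and can be rendered or inspected further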
"},{"location":"api_reference/ontopy/manchester/","title":"manchester","text":"Evaluate Manchester syntax
This module compiles restrictions and logical constructs in Manchester syntax into Owlready2 classes. The main function in this module is manchester.evaluate()
, see its docstring for a usage example.
Pyparsing is used under the hood for parsing.
"},{"location":"api_reference/ontopy/manchester/#ontopy.manchester.ManchesterError","title":" ManchesterError (EMMOntoPyException)
","text":"Raised on invalid Manchester notation.
Source code in ontopy/manchester.py
class ManchesterError(EMMOntoPyException):\n \"\"\"Raised on invalid Manchester notation.\"\"\"\n
"},{"location":"api_reference/ontopy/manchester/#ontopy.manchester.evaluate","title":"evaluate(ontology, expr)
","text":"Evaluate expression in Manchester syntax.
Parameters:
ontology (Ontology): The ontology within which the expression will be evaluated. Required.
expr (str): Manchester expression to be evaluated. Required.
Returns:
Construct: An Owlready2 construct that corresponds to the expression.
Examples:
>>> from ontopy.manchester import evaluate
>>> from ontopy import get_ontology
>>> emmo = get_ontology().load()
>>> restriction = evaluate(emmo, 'hasPart some Atom')
>>> cls = evaluate(emmo, 'Atom')
>>> expr = evaluate(emmo, 'Atom or Molecule')
Note
Logical expressions (with not
, and
and or
) are supported, as well as object property restrictions. For data properties, only value restrictions are supported so far.
Source code in ontopy/manchester.py
def evaluate(ontology: owlready2.Ontology, expr: str) -> owlready2.Construct:\n \"\"\"Evaluate expression in Manchester syntax.\n\n Args:\n ontology: The ontology within which the expression will be evaluated.\n expr: Manchester expression to be evaluated.\n\n Returns:\n An Owlready2 construct that corresponds to the expression.\n\n Example:\n >>> from ontopy.manchester import evaluate\n >>> from ontopy import get_ontology\n >>> emmo = get_ontology().load()\n\n >>> restriction = evaluate(emmo, 'hasPart some Atom')\n >>> cls = evaluate(emmo, 'Atom')\n >>> expr = evaluate(emmo, 'Atom or Molecule')\n\n Note:\n Logical expressions (with `not`, `and` and `or`) are supported as\n well as object property restrictions. For data properterties are\n only value restrictions supported so far.\n \"\"\"\n\n # pylint: disable=invalid-name\n def _parse_literal(r):\n \"\"\"Compiles literal to Owlready2 type.\"\"\"\n if r.language:\n v = owlready2.locstr(r.string, r.language)\n elif r.number:\n v = r.number\n else:\n v = r.string\n return v\n\n # pylint: disable=invalid-name,no-else-return,too-many-return-statements\n # pylint: disable=too-many-branches\n def _eval(r):\n \"\"\"Recursively evaluate expression produced by pyparsing into an\n Owlready2 construct.\"\"\"\n\n def fneg(x):\n \"\"\"Negates the argument if `neg` is true.\"\"\"\n return owlready2.Not(x) if neg else x\n\n if isinstance(r, str): # r is atomic, returns its owlready2 repr\n return ontology[r]\n neg = False # whether the expression starts with \"not\"\n while r[0] == \"not\":\n r.pop(0) # strip off the \"not\" and proceed\n neg = not neg\n\n if len(r) == 1: # r is either a atomic or a parenthesised\n # subexpression that should be further evaluated\n if isinstance(r[0], str):\n return fneg(ontology[r[0]])\n else:\n return fneg(_eval(r[0]))\n elif r.op: # r contains a logical operator: and/or\n ops = {\"and\": owlready2.And, \"or\": owlready2.Or}\n op = ops[r.op]\n if len(r) == 3:\n return op([fneg(_eval(r[0])), _eval(r[2])])\n else:\n arg1 = fneg(_eval(r[0]))\n r.pop(0)\n r.pop(0)\n return op([arg1, _eval(r)])\n elif r.objProp: # r is a restriction\n if r[0] == \"inverse\":\n r.pop(0)\n prop = owlready2.Inverse(ontology[r[0]])\n else:\n prop = ontology[r[0]]\n rtype = r[1]\n if rtype == \"Self\":\n return fneg(prop.has_self())\n r.pop(0)\n r.pop(0)\n f = getattr(prop, rtype)\n if rtype == \"value\":\n return fneg(f(_eval(r)))\n elif rtype in (\"some\", \"only\"):\n return fneg(f(_eval(r)))\n elif rtype in (\"min\", \"max\", \"exactly\"):\n cardinality = r.pop(0)\n return fneg(f(cardinality, _eval(r)))\n else:\n raise ManchesterError(f\"invalid restriction type: {rtype}\")\n elif r.dataProp: # r is a data property restriction\n prop = ontology[r[0]]\n rtype = r[1]\n r.pop(0)\n r.pop(0)\n f = getattr(prop, rtype)\n if rtype == \"value\":\n return f(_parse_literal(r))\n else:\n raise ManchesterError(\n f\"unimplemented data property restriction: \"\n f\"{prop} {rtype} {r}\"\n )\n else:\n raise ManchesterError(f\"invalid expression: {r}\")\n\n grammar = manchester_expression()\n return _eval(grammar.parseString(expr, parseAll=True))\n
"},{"location":"api_reference/ontopy/manchester/#ontopy.manchester.manchester_expression","title":"manchester_expression()
","text":"Returns pyparsing grammar for a Manchester expression.
This function is mostly for internal use.
See also: https://www.w3.org/TR/owl2-manchester-syntax/
Source code in ontopy/manchester.py
def manchester_expression():\n \"\"\"Returns pyparsing grammar for a Manchester expression.\n\n This function is mostly for internal use.\n\n See also: https://www.w3.org/TR/owl2-manchester-syntax/\n \"\"\"\n # pylint: disable=global-statement,invalid-name,too-many-locals\n global GRAMMAR\n if GRAMMAR:\n return GRAMMAR\n\n # Subset of the Manchester grammar for expressions\n # It is based on https://www.w3.org/TR/owl2-manchester-syntax/\n # but allows logical constructs within restrictions (like Protege)\n ident = pp.Word(pp.alphas + \"_:-\", pp.alphanums + \"_:-\", asKeyword=True)\n uint = pp.Word(pp.nums)\n alphas = pp.Word(pp.alphas)\n string = pp.Word(pp.alphanums + \":\")\n quotedString = (\n pp.QuotedString('\"\"\"', multiline=True) | pp.QuotedString('\"')\n )(\"string\")\n typedLiteral = pp.Combine(quotedString + \"^^\" + string(\"datatype\"))\n stringLanguageLiteral = pp.Combine(quotedString + \"@\" + alphas(\"language\"))\n stringLiteral = quotedString\n numberLiteral = pp.pyparsing_common.number(\"number\")\n literal = (\n typedLiteral | stringLanguageLiteral | stringLiteral | numberLiteral\n )\n logOp = pp.one_of([\"and\", \"or\"], asKeyword=True)\n expr = pp.Forward()\n restriction = pp.Forward()\n primary = pp.Keyword(\"not\")[...] + (\n restriction | ident(\"cls\") | pp.nested_expr(\"(\", \")\", expr)\n )\n objPropExpr = (\n pp.Literal(\"inverse\")\n + pp.Suppress(\"(\")\n + ident(\"objProp\")\n + pp.Suppress(\")\")\n | pp.Literal(\"inverse\") + ident(\"objProp\")\n | ident(\"objProp\")\n )\n dataPropExpr = ident(\"dataProp\")\n restriction <<= (\n objPropExpr + pp.Keyword(\"some\") + expr\n | objPropExpr + pp.Keyword(\"only\") + expr\n | objPropExpr + pp.Keyword(\"Self\")\n | objPropExpr + pp.Keyword(\"value\") + ident(\"individual\")\n | objPropExpr + pp.Keyword(\"min\") + uint + expr\n | objPropExpr + pp.Keyword(\"max\") + uint + expr\n | objPropExpr + pp.Keyword(\"exactly\") + uint + expr\n | dataPropExpr + pp.Keyword(\"value\") + literal\n )\n expr <<= primary + (logOp(\"op\") + expr)[...]\n\n GRAMMAR = expr\n return expr\n
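Normally evaluate() is used instead, but the grammar can also be applied directly; a small sketch:
>>> from ontopy.manchester import manchester_expression
>>> grammar = manchester_expression()
>>> result = grammar.parseString('hasPart some Atom', parseAll=True)  # pyparsing ParseResults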
"},{"location":"api_reference/ontopy/nadict/","title":"nadict","text":"A nested dict with both attribute and item access.
NA stands for Nested and Attribute.
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict","title":" NADict
","text":"A nested dict with both attribute and item access.
It is intended to be used with keys that are valid Python identifiers. However, except for string keys containing a dot, there are actually no hard limitations. If a key equals an existing attribute name, attribute access is of course not possible.
Nested items can be accessed via a dot notation, as shown in the example below.
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict--examples","title":"Examples","text":"n = NADict(a=1, b=NADict(c=3, d=4)) n['a'] 1 n.a 1 n['b.c'] 3 n.b.c 3 n['b.e'] = 5 n.b.e 5
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict--attributes","title":"Attributes","text":"_dict : dict Dictionary holding the actial items.
Source code in ontopy/nadict.py
class NADict:\n \"\"\"A nested dict with both attribute and item access.\n\n It is intended to be used with keys that are valid Python\n identifiers. However, except for string keys containing a dot,\n there are actually no hard limitations. If a key equals an existing\n attribute name, attribute access is of cause not possible.\n\n Nested items can be accessed via a dot notation, as shown in the\n example below.\n\n Examples\n --------\n >>> n = NADict(a=1, b=NADict(c=3, d=4))\n >>> n['a']\n 1\n >>> n.a\n 1\n >>> n['b.c']\n 3\n >>> n.b.c\n 3\n >>> n['b.e'] = 5\n >>> n.b.e\n 5\n\n Attributes\n ----------\n _dict : dict\n Dictionary holding the actial items.\n \"\"\"\n\n def __init__(self, *args, **kw):\n object.__setattr__(self, \"_dict\", {})\n self.update(*args, **kw)\n\n def __getitem__(self, key):\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1][key2]\n return self._dict[key]\n\n def __setitem__(self, key, value):\n if key in (\n \"clear\",\n \"copy\",\n \"fromkeys\",\n \"get\",\n \"items\",\n \"keys\",\n \"pop\",\n \"popitem\",\n \"setdefault\",\n \"update\",\n \"values\",\n ):\n raise ValueError(\n f\"invalid key {key!r}: must not override supported dict method\"\n \" names\"\n )\n\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n if key1 not in self._dict:\n self._dict[key1] = NADict()\n self._dict[key1][key2] = value\n elif key in self._dict:\n if isinstance(self._dict[key], NADict):\n self._dict[key].update(value)\n else:\n self._dict[key] = value\n else:\n if isinstance(value, Mapping):\n self._dict[key] = NADict(value)\n else:\n self._dict[key] = value\n\n def __delitem__(self, key):\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n del self._dict[key1][key2]\n else:\n del self._dict[key]\n\n def __getattr__(self, key):\n if key not in self._dict:\n raise AttributeError(f\"No such key: {key}\")\n return self._dict[key]\n\n def __setattr__(self, key, value):\n if key in self._dict:\n self._dict[key] = value\n else:\n object.__setattr__(self, key, value)\n\n def __delattr__(self, key):\n if key in self._dict:\n del self._dict[key]\n else:\n object.__delattr__(self, key)\n\n def __len__(self):\n return len(self._dict)\n\n def __contains__(self, key):\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return key2 in self._dict[key1]\n return key in self._dict\n\n def __iter__(self, prefix=\"\"):\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.__iter__(key)\n else:\n yield key\n\n def __repr__(self):\n return (\n f\"{self.__class__.__name__}(\"\n f\"{', '.join(f'{key}={value!r}' for key, value in self._dict.items())})\" # pylint: disable=line-too-long\n )\n\n def clear(self):\n \"\"\"Clear all keys.\"\"\"\n self._dict.clear()\n\n def copy(self):\n \"\"\"Returns a deep copy of self.\"\"\"\n return copy.deepcopy(self)\n\n @staticmethod\n def fromkeys(iterable, value=None):\n \"\"\"Returns a new NADict with keys from `iterable` and values\n set to `value`.\"\"\"\n res = NADict()\n for key in iterable:\n res[key] = value\n return res\n\n def get(self, key, default=None):\n \"\"\"Returns the value for `key` if `key` is in self, else return\n `default`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].get(key2, default)\n return self._dict.get(key, default)\n\n def items(self, prefix=\"\"):\n \"\"\"Returns an iterator over all items as (key, value) pairs.\"\"\"\n for key, value in self._dict.items():\n key = 
f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.items(key)\n else:\n yield (key, value)\n\n def keys(self, prefix=\"\"):\n \"\"\"Returns an iterator over all keys.\"\"\"\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.keys(key)\n else:\n yield key\n\n def pop(self, key, default=None):\n \"\"\"Removed `key` and returns corresponding value. If `key` is not\n found, `default` is returned if given, otherwise KeyError is\n raised.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].pop(key2, default)\n return self._dict.pop(key, default)\n\n def popitem(self, prefix=\"\"):\n \"\"\"Removes and returns some (key, value). Raises KeyError if empty.\"\"\"\n item = self._dict.popitem()\n if isinstance(item, NADict):\n key, value = item\n item2 = item.popitem(key)\n self._dict[key] = value\n return item2\n key, value = self._dict.popitem()\n key = f\"{prefix}.{key}\" if prefix else key\n return (key, value)\n\n def setdefault(self, key, value=None):\n \"\"\"Inserts `key` and `value` pair if key is not found.\n\n Returns the new value for `key`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].setdefault(key2, value)\n return self._dict.setdefault(key, value)\n\n def update(self, *args, **kwargs):\n \"\"\"Updates self with dict/iterable from `args` and keyword arguments\n from `kw`.\"\"\"\n for arg in args:\n if hasattr(arg, \"keys\"):\n for _ in arg:\n self[_] = arg[_]\n else:\n for key, value in arg:\n self[key] = value\n for key, value in kwargs.items():\n self[key] = value\n\n def values(self):\n \"\"\"Returns a set-like providing a view of all style values.\"\"\"\n return self._dict.values()\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.clear","title":"clear(self)
","text":"Clear all keys.
Source code in ontopy/nadict.py
def clear(self):\n \"\"\"Clear all keys.\"\"\"\n self._dict.clear()\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.copy","title":"copy(self)
","text":"Returns a deep copy of self.
Source code in ontopy/nadict.py
def copy(self):\n \"\"\"Returns a deep copy of self.\"\"\"\n return copy.deepcopy(self)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.fromkeys","title":"fromkeys(iterable, value=None)
staticmethod
","text":"Returns a new NADict with keys from iterable
and values set to value
.
Source code in ontopy/nadict.py
@staticmethod\ndef fromkeys(iterable, value=None):\n \"\"\"Returns a new NADict with keys from `iterable` and values\n set to `value`.\"\"\"\n res = NADict()\n for key in iterable:\n res[key] = value\n return res\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.get","title":"get(self, key, default=None)
","text":"Returns the value for key
if key
is in self, else return default
.
Source code in ontopy/nadict.py
def get(self, key, default=None):\n \"\"\"Returns the value for `key` if `key` is in self, else return\n `default`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].get(key2, default)\n return self._dict.get(key, default)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.items","title":"items(self, prefix='')
","text":"Returns an iterator over all items as (key, value) pairs.
Source code in ontopy/nadict.py
def items(self, prefix=\"\"):\n \"\"\"Returns an iterator over all items as (key, value) pairs.\"\"\"\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.items(key)\n else:\n yield (key, value)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.keys","title":"keys(self, prefix='')
","text":"Returns an iterator over all keys.
Source code in ontopy/nadict.py
def keys(self, prefix=\"\"):\n \"\"\"Returns an iterator over all keys.\"\"\"\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.keys(key)\n else:\n yield key\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.pop","title":"pop(self, key, default=None)
","text":"Removed key
and returns corresponding value. If key
is not found, default
is returned if given, otherwise KeyError is raised.
ontopy/nadict.py
def pop(self, key, default=None):\n \"\"\"Removed `key` and returns corresponding value. If `key` is not\n found, `default` is returned if given, otherwise KeyError is\n raised.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].pop(key2, default)\n return self._dict.pop(key, default)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.popitem","title":"popitem(self, prefix='')
","text":"Removes and returns some (key, value). Raises KeyError if empty.
Source code in ontopy/nadict.py
def popitem(self, prefix=\"\"):\n \"\"\"Removes and returns some (key, value). Raises KeyError if empty.\"\"\"\n item = self._dict.popitem()\n if isinstance(item, NADict):\n key, value = item\n item2 = item.popitem(key)\n self._dict[key] = value\n return item2\n key, value = self._dict.popitem()\n key = f\"{prefix}.{key}\" if prefix else key\n return (key, value)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.setdefault","title":"setdefault(self, key, value=None)
","text":"Inserts key
and value
pair if key is not found.
Returns the new value for key
.
Source code in ontopy/nadict.py
def setdefault(self, key, value=None):\n \"\"\"Inserts `key` and `value` pair if key is not found.\n\n Returns the new value for `key`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].setdefault(key2, value)\n return self._dict.setdefault(key, value)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.update","title":"update(self, *args, **kwargs)
","text":"Updates self with dict/iterable from args
and keyword arguments from kwargs
.
Source code in ontopy/nadict.py
def update(self, *args, **kwargs):\n \"\"\"Updates self with dict/iterable from `args` and keyword arguments\n from `kw`.\"\"\"\n for arg in args:\n if hasattr(arg, \"keys\"):\n for _ in arg:\n self[_] = arg[_]\n else:\n for key, value in arg:\n self[key] = value\n for key, value in kwargs.items():\n self[key] = value\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.values","title":"values(self)
","text":"Returns a set-like providing a view of all style values.
Source code in ontopy/nadict.py
def values(self):\n \"\"\"Returns a set-like providing a view of all style values.\"\"\"\n return self._dict.values()\n
"},{"location":"api_reference/ontopy/ontodoc/","title":"ontodoc","text":"A module for documenting ontologies.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.AttributeDict","title":" AttributeDict (dict)
","text":"A dict with attribute access.
Note that methods like keys() and update() may be overridden.
Source code in ontopy/ontodoc.py
class AttributeDict(dict):\n \"\"\"A dict with attribute access.\n\n Note that methods like key() and update() may be overridden.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.__dict__ = self\n
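A tiny sketch of the attribute access (the keys are illustrative):
>>> from ontopy.ontodoc import AttributeDict
>>> d = AttributeDict(label='Atom', level=2)
>>> d.label
'Atom'
>>> d['level']
2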
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP","title":" DocPP
","text":"Documentation pre-processor.
It supports the following features:
Comment lines
%% Comment line...\n
Insert header with given level
%HEADER label [level=1]\n
Insert figure with optional caption and width. filepath
should be relative to basedir
. If width is 0, no width will be specified.
%FIGURE filepath [caption='' width=0px]\n
Include other markdown files. Header levels may be shifted up or down with shift
%INCLUDE filepath [shift=0]\n
Insert generated documentation for ontology entity. The header level may be set with header_level
.
%ENTITY name [header_level=3]\n
Insert generated documentation for ontology branch name
. Options:
header_level: Header level.
terminated: Whether the branch should be terminated at all branch names in the final document.
include_leaves: Whether to include leaves as end points to the branch.
%BRANCH name [header_level=3 terminated=1 include_leaves=0 namespaces='' ontologies='']
Insert generated figure of ontology branch name
. The figure is written to path
. The default path is figdir
/name
, where figdir
is given at class instantiation. It is recommended to exclude the file extension from path
. In this case, the default figformat will be used (and easily adjusted to the correct format required by the backend). leaves
may be a comma-separated list of leaf node names.
%BRANCHFIG name [path='' caption='' terminated=1 include_leaves=1\n strict_leaves=1, width=0px leaves='' relations=all\n edgelabels=0 namespaces='' ontologies='']\n
This is a combination of the %HEADER and %BRANCHFIG directives.
%BRANCHHEAD name [level=2 path='' caption='' terminated=1\n include_leaves=1 width=0px leaves='']\n
This is a combination of the %HEADER, %BRANCHFIG and %BRANCH directives. It inserts documentation of branch name
, with a header followed by a figure and then documentation of each element.
%BRANCHDOC name [level=2 path='' title='' caption='' terminated=1\n strict_leaves=1 width=0px leaves='' relations='all'\n rankdir='BT' legend=1 namespaces='' ontologies='']\n
Insert generated documentation for all entities of the given type. Valid values of type
are: \"classes\", \"individuals\", \"object_properties\", \"data_properties\", \"annotations_properties\"
%ALL type [header_level=3, namespaces='', ontologies='']\n
Insert generated figure of all entities of the given type. Valid values of type
are: \"classes\", \"object_properties\" and \"data_properties\".
%ALLFIG type\n
Parameters:
template (str): Input template.
ontodoc (OntoDoc instance): Instance of OntoDoc.
basedir (str): Base directory for including relative file paths.
figdir (str): Default directory to store generated figures.
figformat (str): Default format for generated figures.
figscale (float): Default scaling of generated figures.
maxwidth (float): Maximum figure width. Figures larger than this will be rescaled.
imported (bool): Whether to include imported entities.
Source code in ontopy/ontodoc.py
class DocPP: # pylint: disable=too-many-instance-attributes\n \"\"\"Documentation pre-processor.\n\n It supports the following features:\n\n * Comment lines\n\n %% Comment line...\n\n * Insert header with given level\n\n %HEADER label [level=1]\n\n * Insert figure with optional caption and width. `filepath`\n should be relative to `basedir`. If width is 0, no width will\n be specified.\n\n %FIGURE filepath [caption='' width=0px]\n\n * Include other markdown files. Header levels may be up or down with\n `shift`\n\n %INCLUDE filepath [shift=0]\n\n * Insert generated documentation for ontology entity. The header\n level may be set with `header_level`.\n\n %ENTITY name [header_level=3]\n\n * Insert generated documentation for ontology branch `name`. Options:\n - header_level: Header level.\n - terminated: Whether to branch should be terminated at all branch\n names in the final document.\n - include_leaves: Whether to include leaves as end points\n to the branch.\n\n %BRANCH name [header_level=3 terminated=1 include_leaves=0\n namespaces='' ontologies='']\n\n * Insert generated figure of ontology branch `name`. The figure\n is written to `path`. The default path is `figdir`/`name`,\n where `figdir` is given at class initiation. It is recommended\n to exclude the file extension from `path`. In this case, the\n default figformat will be used (and easily adjusted to the\n correct format required by the backend). `leaves` may be a comma-\n separated list of leaf node names.\n\n %BRANCHFIG name [path='' caption='' terminated=1 include_leaves=1\n strict_leaves=1, width=0px leaves='' relations=all\n edgelabels=0 namespaces='' ontologies='']\n\n * This is a combination of the %HEADER and %BRANCHFIG directives.\n\n %BRANCHHEAD name [level=2 path='' caption='' terminated=1\n include_leaves=1 width=0px leaves='']\n\n * This is a combination of the %HEADER, %BRANCHFIG and %BRANCH\n directives. It inserts documentation of branch `name`, with a\n header followed by a figure and then documentation of each\n element.\n\n %BRANCHDOC name [level=2 path='' title='' caption='' terminated=1\n strict_leaves=1 width=0px leaves='' relations='all'\n rankdir='BT' legend=1 namespaces='' ontologies='']\n\n * Insert generated documentation for all entities of the given type.\n Valid values of `type` are: \"classes\", \"individuals\",\n \"object_properties\", \"data_properties\", \"annotations_properties\"\n\n %ALL type [header_level=3, namespaces='', ontologies='']\n\n * Insert generated figure of all entities of the given type.\n Valid values of `type` are: \"classes\", \"object_properties\" and\n \"data_properties\".\n\n %ALLFIG type\n\n Parameters\n ----------\n template : str\n Input template.\n ontodoc : OntoDoc instance\n Instance of OntoDoc\n basedir : str\n Base directory for including relative file paths.\n figdir : str\n Default directory to store generated figures.\n figformat : str\n Default format for generated figures.\n figscale : float\n Default scaling of generated figures.\n maxwidth : float\n Maximum figure width. Figures larger than this will be rescaled.\n imported : bool\n Whether to include imported entities.\n \"\"\"\n\n # FIXME - this class should be refractured:\n # * Instead of rescan the entire document for each pre-processer\n # directive, we should scan the source like by line and handle\n # each directive as they occour.\n # * The current implementation has a lot of dublicated code.\n # * Instead of modifying the source in-place, we should copy to a\n # result list. 
This will make good error reporting much easier.\n # * Branch leaves are only looked up in the file witht the %BRANCH\n # directive, not in all included files as expedted.\n\n def __init__( # pylint: disable=too-many-arguments\n self,\n template,\n ontodoc,\n basedir=\".\",\n figdir=\"genfigs\",\n figformat=\"png\",\n figscale=1.0,\n maxwidth=None,\n imported=False,\n ):\n self.lines = template.split(\"\\n\")\n self.ontodoc = ontodoc\n self.basedir = basedir\n self.figdir = os.path.join(basedir, figdir)\n self.figformat = figformat\n self.figscale = figscale\n self.maxwidth = maxwidth\n self.imported = imported\n self._branch_cache = None\n self._processed = False # Whether process() has been called\n\n def __str__(self):\n return self.get_buffer()\n\n def get_buffer(self):\n \"\"\"Returns the current buffer.\"\"\"\n return \"\\n\".join(self.lines)\n\n def copy(self):\n \"\"\"Returns a copy of self.\"\"\"\n docpp = DocPP(\n \"\",\n self.ontodoc,\n self.basedir,\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.lines[:] = self.lines\n docpp.figdir = self.figdir\n return docpp\n\n def get_branches(self):\n \"\"\"Returns a list with all branch names as specified with %BRANCH\n (in current and all included documents). The returned value is\n cached for efficiency purposes and so that it is not lost after\n processing branches.\"\"\"\n if self._branch_cache is None:\n names = []\n docpp = self.copy()\n docpp.process_includes()\n for line in docpp.lines:\n if line.startswith(\"%BRANCH\"):\n names.append(shlex.split(line)[1])\n self._branch_cache = names\n return self._branch_cache\n\n def shift_header_levels(self, shift):\n \"\"\"Shift header level of all hashtag-headers in buffer. Underline\n headers are ignored.\"\"\"\n if not shift:\n return\n pat = re.compile(\"^#+ \")\n for i, line in enumerate(self.lines):\n match = pat.match(line)\n if match:\n if shift > 0:\n self.lines[i] = \"#\" * shift + line\n elif shift < 0:\n counter = match.end()\n if shift > counter:\n self.lines[i] = line.lstrip(\"# \")\n else:\n self.lines[i] = line[counter:]\n\n def process_comments(self):\n \"\"\"Strips out comment lines starting with \"%%\".\"\"\"\n self.lines = [line for line in self.lines if not line.startswith(\"%%\")]\n\n def process_headers(self):\n \"\"\"Expand all %HEADER specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%HEADER \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], level=1)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_header(\n name, int(opts.level) # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_figures(self):\n \"\"\"Expand all %FIGURE specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%FIGURE \"):\n tokens = shlex.split(line)\n path = tokens[1]\n opts = get_options(tokens[2:], caption=\"\", width=0)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n os.path.join(self.basedir, path),\n caption=opts.caption, # pylint: disable=no-member\n width=opts.width, # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_entities(self):\n \"\"\"Expand all %ENTITY specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ENTITY \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemdoc(\n name, int(opts.header_level) # pylint: 
disable=no-member\n ).split(\"\\n\")\n\n def process_branches(self):\n \"\"\"Expand all %BRANCH specifications.\"\"\"\n onto = self.ontodoc.onto\n\n # Get all branch names in final document\n names = self.get_branches()\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCH \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n header_level=3,\n terminated=1,\n include_leaves=0,\n namespaces=\"\",\n ontologies=\"\",\n )\n leaves = (\n names if opts.terminated else ()\n ) # pylint: disable=no-member\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n branch = filter_classes(\n onto.get_branch(\n name, leaves, opts.include_leaves\n ), # pylint: disable=no-member\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n branch, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n\n def _make_branchfig( # pylint: disable=too-many-arguments,too-many-locals\n self,\n name: str,\n path: \"Union[Path, str]\",\n terminated: bool,\n include_leaves: bool,\n strict_leaves: bool,\n width: float,\n leaves: \"Union[str, list[str]]\",\n relations: str,\n edgelabels: str,\n rankdir: str,\n legend: bool,\n included_namespaces: \"Iterable[str]\",\n included_ontologies: \"Iterable[str]\",\n ) -> \"tuple[str, list[str], float]\":\n \"\"\"Help method for process_branchfig().\n\n Args:\n name: name of branch root\n path: optional figure path name\n include_leaves: whether to include leaves as end points\n to the branch.\n strict_leaves: whether to strictly exclude leave descendants\n terminated: whether the graph should be terminated at leaf nodes\n width: optional figure width\n leaves: optional leaf node names for graph termination\n relations: comma-separated list of relations to include\n edgelabels: whether to include edgelabels\n rankdir: graph direction (BT, TB, RL, LR)\n legend: whether to add legend\n included_namespaces: sequence of names of namespaces to be included\n included_ontologies: sequence of names of ontologies to be included\n\n Returns:\n filepath: path to generated figure\n leaves: used list of leaf node names\n width: actual figure width\n\n \"\"\"\n onto = self.ontodoc.onto\n if leaves:\n if isinstance(leaves, str):\n leaves = leaves.split(\",\")\n elif terminated:\n leaves = set(self.get_branches())\n leaves.discard(name)\n else:\n leaves = None\n if path:\n figdir = os.path.dirname(path)\n formatext = os.path.splitext(path)[1]\n if formatext:\n fmt = formatext.lstrip(\".\")\n else:\n fmt = self.figformat\n path += f\".{fmt}\"\n else:\n figdir = self.figdir\n fmt = self.figformat\n term = \"T\" if terminated else \"\"\n path = os.path.join(figdir, name + term) + f\".{fmt}\"\n\n # Create graph\n graph = OntoGraph(onto, graph_attr={\"rankdir\": rankdir})\n graph.add_branch(\n root=name,\n leaves=leaves,\n include_leaves=include_leaves,\n strict_leaves=strict_leaves,\n relations=relations,\n edgelabels=edgelabels,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n if legend:\n graph.add_legend()\n\n if not width:\n figwidth, _ = graph.get_figsize()\n width = self.figscale * figwidth\n if self.maxwidth and width > self.maxwidth:\n width = self.maxwidth\n\n filepath = 
os.path.join(self.basedir, path)\n destdir = os.path.dirname(filepath)\n if not os.path.exists(destdir):\n os.makedirs(destdir)\n graph.save(filepath, fmt=fmt)\n return filepath, leaves, width\n\n def process_branchfigs(self):\n \"\"\"Process all %BRANCHFIG directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHFIG \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n caption=\"\",\n terminated=1,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_branchdocs(self): # pylint: disable=too-many-locals\n \"\"\"Process all %BRANCHDOC and %BRANCHEAD directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHDOC \") or line.startswith(\n \"%BRANCHHEAD \"\n ):\n with_branch = bool(line.startswith(\"%BRANCHDOC \"))\n tokens = shlex.split(line)\n name = tokens[1]\n title = camelsplit(name)\n title = title[0].upper() + title[1:] + \" branch\"\n opts = get_options(\n tokens[2:],\n level=2,\n path=\"\",\n title=title,\n caption=title + \".\",\n terminated=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n include_leaves = 1\n filepath, leaves, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n include_leaves,\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n sec = []\n sec.append(\n self.ontodoc.get_header(opts.title, int(opts.level))\n ) # pylint: disable=no-member\n sec.append(\n self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n )\n )\n if with_branch:\n include_leaves = 0\n branch = filter_classes(\n onto.get_branch(name, leaves, include_leaves),\n 
included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n sec.append(\n self.ontodoc.itemsdoc(\n branch, int(opts.level + 1)\n ) # pylint: disable=no-member\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n\n def process_alls(self):\n \"\"\"Expand all %ALL specifications.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALL \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n if token == \"classes\": # nosec\n items = onto.classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n items = onto.object_properties(imported=self.imported)\n elif token == \"data_properties\": # nosec\n items = onto.data_properties(imported=self.imported)\n elif token == \"annotation_properties\": # nosec\n items = onto.annotation_properties(imported=self.imported)\n elif token == \"individuals\": # nosec\n items = onto.individuals(imported=self.imported)\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALL: {token}\"\n )\n items = sorted(items, key=get_label)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n items, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_allfig(self): # pylint: disable=too-many-locals\n \"\"\"Process all %ALLFIG directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALLFIG \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n level=3,\n terminated=0,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"isA\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n if token == \"classes\": # nosec\n roots = onto.get_root_classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n roots = onto.get_root_object_properties(\n imported=self.imported\n )\n elif token == \"data_properties\": # nosec\n roots = onto.get_root_data_properties(\n imported=self.imported\n )\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALLFIG: {token}\"\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n sec = []\n for root in roots:\n name = asstring(root, link=\"{label}\", ontology=onto)\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n title = f\"Taxonomy of {name}.\"\n sec.append(\n self.ontodoc.get_header(title, int(opts.level))\n ) # pylint: disable=no-member\n sec.extend(\n self.ontodoc.get_figure(\n filepath, caption=title, width=width\n ).split(\"\\n\")\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n\n def process_includes(self):\n \"\"\"Process all %INCLUDE directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if 
line.startswith(\"%INCLUDE \"):\n tokens = shlex.split(line)\n filepath = tokens[1]\n opts = get_options(tokens[2:], shift=0)\n with open(\n os.path.join(self.basedir, filepath), \"rt\", encoding=\"utf8\"\n ) as handle:\n docpp = DocPP(\n handle.read(),\n self.ontodoc,\n basedir=os.path.dirname(filepath),\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.figdir = self.figdir\n if opts.shift: # pylint: disable=no-member\n docpp.shift_header_levels(\n int(opts.shift)\n ) # pylint: disable=no-member\n docpp.process()\n del self.lines[i]\n self.lines[i:i] = docpp.lines\n\n def process(self):\n \"\"\"Perform all pre-processing steps.\"\"\"\n if not self._processed:\n self.process_comments()\n self.process_headers()\n self.process_figures()\n self.process_entities()\n self.process_branches()\n self.process_branchfigs()\n self.process_branchdocs()\n self.process_alls()\n self.process_allfig()\n self.process_includes()\n self._processed = True\n\n def write( # pylint: disable=too-many-arguments\n self,\n outfile,\n fmt=None,\n pandoc_option_files=(),\n pandoc_options=(),\n genfile=None,\n verbose=True,\n ):\n \"\"\"Writes documentation to `outfile`.\n\n Parameters\n ----------\n outfile : str\n File that the documentation is written to.\n fmt : str\n Output format. If it is \"md\" or \"simple-html\",\n the built-in template generator is used. Otherwise\n pandoc is used. If not given, the format is inferred\n from the `outfile` name extension.\n pandoc_option_files : sequence\n Sequence with command line arguments provided to pandoc.\n pandoc_options : sequence\n Additional pandoc options overriding options read from\n `pandoc_option_files`.\n genfile : str\n Store temporary generated markdown input file to pandoc\n to this file (for debugging).\n verbose : bool\n Whether to show some messages when running pandoc.\n \"\"\"\n self.process()\n content = self.get_buffer()\n\n substitutions = self.ontodoc.style.get(\"substitutions\", [])\n for reg, sub in substitutions:\n content = re.sub(reg, sub, content)\n\n fmt = get_format(outfile, default=\"html\", fmt=fmt)\n if fmt not in (\"simple-html\", \"markdown\", \"md\"): # Run pandoc\n if not genfile:\n with NamedTemporaryFile(mode=\"w+t\", suffix=\".md\") as temp_file:\n temp_file.write(content)\n temp_file.flush()\n genfile = temp_file.name\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n with open(genfile, \"wt\") as handle:\n handle.write(content)\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n if verbose:\n print(\"Writing:\", outfile)\n with open(outfile, \"wt\") as handle:\n handle.write(content)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.copy","title":"copy(self)
","text":"Returns a copy of self.
Source code inontopy/ontodoc.py
def copy(self):\n \"\"\"Returns a copy of self.\"\"\"\n docpp = DocPP(\n \"\",\n self.ontodoc,\n self.basedir,\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.lines[:] = self.lines\n docpp.figdir = self.figdir\n return docpp\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.get_branches","title":"get_branches(self)
","text":"Returns a list with all branch names as specified with %BRANCH (in current and all included documents). The returned value is cached for efficiency purposes and so that it is not lost after processing branches.
Source code inontopy/ontodoc.py
def get_branches(self):\n \"\"\"Returns a list with all branch names as specified with %BRANCH\n (in current and all included documents). The returned value is\n cached for efficiency purposes and so that it is not lost after\n processing branches.\"\"\"\n if self._branch_cache is None:\n names = []\n docpp = self.copy()\n docpp.process_includes()\n for line in docpp.lines:\n if line.startswith(\"%BRANCH\"):\n names.append(shlex.split(line)[1])\n self._branch_cache = names\n return self._branch_cache\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.get_buffer","title":"get_buffer(self)
","text":"Returns the current buffer.
Source code inontopy/ontodoc.py
def get_buffer(self):
    """Returns the current buffer."""
    return "\n".join(self.lines)
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process","title":"process(self)
","text":"Perform all pre-processing steps.
Source code inontopy/ontodoc.py
def process(self):\n \"\"\"Perform all pre-processing steps.\"\"\"\n if not self._processed:\n self.process_comments()\n self.process_headers()\n self.process_figures()\n self.process_entities()\n self.process_branches()\n self.process_branchfigs()\n self.process_branchdocs()\n self.process_alls()\n self.process_allfig()\n self.process_includes()\n self._processed = True\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_allfig","title":"process_allfig(self)
","text":"Process all %ALLFIG directives.
Source code inontopy/ontodoc.py
def process_allfig(self): # pylint: disable=too-many-locals\n \"\"\"Process all %ALLFIG directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALLFIG \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n level=3,\n terminated=0,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"isA\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n if token == \"classes\": # nosec\n roots = onto.get_root_classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n roots = onto.get_root_object_properties(\n imported=self.imported\n )\n elif token == \"data_properties\": # nosec\n roots = onto.get_root_data_properties(\n imported=self.imported\n )\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALLFIG: {token}\"\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n sec = []\n for root in roots:\n name = asstring(root, link=\"{label}\", ontology=onto)\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n title = f\"Taxonomy of {name}.\"\n sec.append(\n self.ontodoc.get_header(title, int(opts.level))\n ) # pylint: disable=no-member\n sec.extend(\n self.ontodoc.get_figure(\n filepath, caption=title, width=width\n ).split(\"\\n\")\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_alls","title":"process_alls(self)
","text":"Expand all %ALL specifications.
Source code inontopy/ontodoc.py
def process_alls(self):\n \"\"\"Expand all %ALL specifications.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALL \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n if token == \"classes\": # nosec\n items = onto.classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n items = onto.object_properties(imported=self.imported)\n elif token == \"data_properties\": # nosec\n items = onto.data_properties(imported=self.imported)\n elif token == \"annotation_properties\": # nosec\n items = onto.annotation_properties(imported=self.imported)\n elif token == \"individuals\": # nosec\n items = onto.individuals(imported=self.imported)\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALL: {token}\"\n )\n items = sorted(items, key=get_label)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n items, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n
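In a documentation template, the %ALL directive therefore expands to the reference documentation of all entities of the given kind. A minimal sketch of how this is typically driven (assuming EMMOntoPy is installed; the ontology file name and the template are made-up examples):

from ontopy import get_ontology
from ontopy.ontodoc import DocPP, OntoDoc

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file
ontodoc = OntoDoc(onto, style="markdown")

template = """%HEADER "My ontology" level=1
%HEADER Classes level=2
%ALL classes header_level=3
"""
docpp = DocPP(template, ontodoc, basedir=".")
docpp.process_headers()  # expand %HEADER directives
docpp.process_alls()     # expand %ALL directives
print(docpp.get_buffer())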
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_branchdocs","title":"process_branchdocs(self)
","text":"Process all %BRANCHDOC and %BRANCHEAD directives.
Source code inontopy/ontodoc.py
def process_branchdocs(self): # pylint: disable=too-many-locals\n \"\"\"Process all %BRANCHDOC and %BRANCHEAD directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHDOC \") or line.startswith(\n \"%BRANCHHEAD \"\n ):\n with_branch = bool(line.startswith(\"%BRANCHDOC \"))\n tokens = shlex.split(line)\n name = tokens[1]\n title = camelsplit(name)\n title = title[0].upper() + title[1:] + \" branch\"\n opts = get_options(\n tokens[2:],\n level=2,\n path=\"\",\n title=title,\n caption=title + \".\",\n terminated=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n include_leaves = 1\n filepath, leaves, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n include_leaves,\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n sec = []\n sec.append(\n self.ontodoc.get_header(opts.title, int(opts.level))\n ) # pylint: disable=no-member\n sec.append(\n self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n )\n )\n if with_branch:\n include_leaves = 0\n branch = filter_classes(\n onto.get_branch(name, leaves, include_leaves),\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n sec.append(\n self.ontodoc.itemsdoc(\n branch, int(opts.level + 1)\n ) # pylint: disable=no-member\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_branches","title":"process_branches(self)
","text":"Expand all %BRANCH specifications.
Source code inontopy/ontodoc.py
def process_branches(self):\n \"\"\"Expand all %BRANCH specifications.\"\"\"\n onto = self.ontodoc.onto\n\n # Get all branch names in final document\n names = self.get_branches()\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCH \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n header_level=3,\n terminated=1,\n include_leaves=0,\n namespaces=\"\",\n ontologies=\"\",\n )\n leaves = (\n names if opts.terminated else ()\n ) # pylint: disable=no-member\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n branch = filter_classes(\n onto.get_branch(\n name, leaves, opts.include_leaves\n ), # pylint: disable=no-member\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n branch, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_branchfigs","title":"process_branchfigs(self)
","text":"Process all %BRANCHFIG directives.
Source code inontopy/ontodoc.py
def process_branchfigs(self):\n \"\"\"Process all %BRANCHFIG directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHFIG \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n caption=\"\",\n terminated=1,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_comments","title":"process_comments(self)
","text":"Strips out comment lines starting with \"%%\".
Source code inontopy/ontodoc.py
def process_comments(self):
    """Strips out comment lines starting with "%%"."""
    self.lines = [line for line in self.lines if not line.startswith("%%")]
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_entities","title":"process_entities(self)
","text":"Expand all %ENTITY specifications.
Source code inontopy/ontodoc.py
def process_entities(self):\n \"\"\"Expand all %ENTITY specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ENTITY \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemdoc(\n name, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_figures","title":"process_figures(self)
","text":"Expand all %FIGURE specifications.
Source code inontopy/ontodoc.py
def process_figures(self):\n \"\"\"Expand all %FIGURE specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%FIGURE \"):\n tokens = shlex.split(line)\n path = tokens[1]\n opts = get_options(tokens[2:], caption=\"\", width=0)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n os.path.join(self.basedir, path),\n caption=opts.caption, # pylint: disable=no-member\n width=opts.width, # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_headers","title":"process_headers(self)
","text":"Expand all %HEADER specifications.
Source code inontopy/ontodoc.py
def process_headers(self):\n \"\"\"Expand all %HEADER specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%HEADER \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], level=1)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_header(\n name, int(opts.level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_includes","title":"process_includes(self)
","text":"Process all %INCLUDE directives.
Source code inontopy/ontodoc.py
def process_includes(self):\n \"\"\"Process all %INCLUDE directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%INCLUDE \"):\n tokens = shlex.split(line)\n filepath = tokens[1]\n opts = get_options(tokens[2:], shift=0)\n with open(\n os.path.join(self.basedir, filepath), \"rt\", encoding=\"utf8\"\n ) as handle:\n docpp = DocPP(\n handle.read(),\n self.ontodoc,\n basedir=os.path.dirname(filepath),\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.figdir = self.figdir\n if opts.shift: # pylint: disable=no-member\n docpp.shift_header_levels(\n int(opts.shift)\n ) # pylint: disable=no-member\n docpp.process()\n del self.lines[i]\n self.lines[i:i] = docpp.lines\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.shift_header_levels","title":"shift_header_levels(self, shift)
","text":"Shift header level of all hashtag-headers in buffer. Underline headers are ignored.
Source code inontopy/ontodoc.py
def shift_header_levels(self, shift):\n \"\"\"Shift header level of all hashtag-headers in buffer. Underline\n headers are ignored.\"\"\"\n if not shift:\n return\n pat = re.compile(\"^#+ \")\n for i, line in enumerate(self.lines):\n match = pat.match(line)\n if match:\n if shift > 0:\n self.lines[i] = \"#\" * shift + line\n elif shift < 0:\n counter = match.end()\n if shift > counter:\n self.lines[i] = line.lstrip(\"# \")\n else:\n self.lines[i] = line[counter:]\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.write","title":"write(self, outfile, fmt=None, pandoc_option_files=(), pandoc_options=(), genfile=None, verbose=True)
","text":"Writes documentation to outfile
.
outfile : str File that the documentation is written to. fmt : str Output format. If it is \"md\" or \"simple-html\", the built-in template generator is used. Otherwise pandoc is used. If not given, the format is inferred from the outfile
name extension. pandoc_option_files : sequence Sequence with command line arguments provided to pandoc. pandoc_options : sequence Additional pandoc options overriding options read from pandoc_option_files
. genfile : str Store temporary generated markdown input file to pandoc to this file (for debugging). verbose : bool Whether to show some messages when running pandoc.
ontopy/ontodoc.py
def write( # pylint: disable=too-many-arguments\n self,\n outfile,\n fmt=None,\n pandoc_option_files=(),\n pandoc_options=(),\n genfile=None,\n verbose=True,\n):\n \"\"\"Writes documentation to `outfile`.\n\n Parameters\n ----------\n outfile : str\n File that the documentation is written to.\n fmt : str\n Output format. If it is \"md\" or \"simple-html\",\n the built-in template generator is used. Otherwise\n pandoc is used. If not given, the format is inferred\n from the `outfile` name extension.\n pandoc_option_files : sequence\n Sequence with command line arguments provided to pandoc.\n pandoc_options : sequence\n Additional pandoc options overriding options read from\n `pandoc_option_files`.\n genfile : str\n Store temporary generated markdown input file to pandoc\n to this file (for debugging).\n verbose : bool\n Whether to show some messages when running pandoc.\n \"\"\"\n self.process()\n content = self.get_buffer()\n\n substitutions = self.ontodoc.style.get(\"substitutions\", [])\n for reg, sub in substitutions:\n content = re.sub(reg, sub, content)\n\n fmt = get_format(outfile, default=\"html\", fmt=fmt)\n if fmt not in (\"simple-html\", \"markdown\", \"md\"): # Run pandoc\n if not genfile:\n with NamedTemporaryFile(mode=\"w+t\", suffix=\".md\") as temp_file:\n temp_file.write(content)\n temp_file.flush()\n genfile = temp_file.name\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n with open(genfile, \"wt\") as handle:\n handle.write(content)\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n if verbose:\n print(\"Writing:\", outfile)\n with open(outfile, \"wt\") as handle:\n handle.write(content)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.InvalidTemplateError","title":" InvalidTemplateError (NameError)
","text":"Raised on errors in template files.
Source code inontopy/ontodoc.py
class InvalidTemplateError(NameError):
    """Raised on errors in template files."""
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc","title":" OntoDoc
","text":"A class for helping documentating ontologies.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc--parameters","title":"Parameters","text":"onto : Ontology instance The ontology that should be documented. style : dict | \"html\" | \"markdown\" | \"markdown_tex\" A dict defining the following template strings (and substitutions):
:header: Formats an header.\n Substitutions: {level}, {label}\n:link: Formats a link.\n Substitutions: {name}\n:point: Formats a point (list item).\n Substitutions: {point}, {ontology}\n:points: Formats a list of points. Used within annotations.\n Substitutions: {points}, {ontology}\n:annotation: Formats an annotation.\n Substitutions: {key}, {value}, {ontology}\n:substitutions: list of ``(regex, sub)`` pairs for substituting\n annotation values.\n
Source code in ontopy/ontodoc.py
class OntoDoc:\n \"\"\"A class for helping documentating ontologies.\n\n Parameters\n ----------\n onto : Ontology instance\n The ontology that should be documented.\n style : dict | \"html\" | \"markdown\" | \"markdown_tex\"\n A dict defining the following template strings (and substitutions):\n\n :header: Formats an header.\n Substitutions: {level}, {label}\n :link: Formats a link.\n Substitutions: {name}\n :point: Formats a point (list item).\n Substitutions: {point}, {ontology}\n :points: Formats a list of points. Used within annotations.\n Substitutions: {points}, {ontology}\n :annotation: Formats an annotation.\n Substitutions: {key}, {value}, {ontology}\n :substitutions: list of ``(regex, sub)`` pairs for substituting\n annotation values.\n \"\"\"\n\n _markdown_style = {\n \"sep\": \"\\n\",\n \"figwidth\": \"{{ width={width:.0f}px }}\",\n \"figure\": \"![{caption}]({path}){figwidth}\\n\",\n \"header\": \"\\n{:#<{level}} {label} {{#{anchor}}}\",\n # Use ref instead of iri for local references in links\n \"link\": \"[{label}]({ref})\",\n \"point\": \" - {point}\\n\",\n \"points\": \"\\n\\n{points}\\n\",\n \"annotation\": \"**{key}:** {value}\\n\",\n \"substitutions\": [],\n }\n # Extra style settings for markdown+tex (e.g. pdf generation with pandoc)\n _markdown_tex_extra_style = {\n \"substitutions\": [\n # logic/math symbols\n (\"\\u2200\", r\"$\\\\forall$\"),\n (\"\\u2203\", r\"$\\\\exists$\"),\n (\"\\u2206\", r\"$\\\\nabla$\"),\n (\"\\u2227\", r\"$\\\\land$\"),\n (\"\\u2228\", r\"$\\\\lor$\"),\n (\"\\u2207\", r\"$\\\\nabla$\"),\n (\"\\u2212\", r\"-\"),\n (\"->\", r\"$\\\\rightarrow$\"),\n # uppercase greek letters\n (\"\\u0391\", r\"$\\\\Upalpha$\"),\n (\"\\u0392\", r\"$\\\\Upbeta$\"),\n (\"\\u0393\", r\"$\\\\Upgamma$\"),\n (\"\\u0394\", r\"$\\\\Updelta$\"),\n (\"\\u0395\", r\"$\\\\Upepsilon$\"),\n (\"\\u0396\", r\"$\\\\Upzeta$\"),\n (\"\\u0397\", r\"$\\\\Upeta$\"),\n (\"\\u0398\", r\"$\\\\Uptheta$\"),\n (\"\\u0399\", r\"$\\\\Upiota$\"),\n (\"\\u039a\", r\"$\\\\Upkappa$\"),\n (\"\\u039b\", r\"$\\\\Uplambda$\"),\n (\"\\u039c\", r\"$\\\\Upmu$\"),\n (\"\\u039d\", r\"$\\\\Upnu$\"),\n (\"\\u039e\", r\"$\\\\Upxi$\"),\n (\"\\u039f\", r\"$\\\\Upomekron$\"),\n (\"\\u03a0\", r\"$\\\\Uppi$\"),\n (\"\\u03a1\", r\"$\\\\Uprho$\"),\n (\"\\u03a3\", r\"$\\\\Upsigma$\"), # no \\u0302\n (\"\\u03a4\", r\"$\\\\Uptau$\"),\n (\"\\u03a5\", r\"$\\\\Upupsilon$\"),\n (\"\\u03a6\", r\"$\\\\Upvarphi$\"),\n (\"\\u03a7\", r\"$\\\\Upchi$\"),\n (\"\\u03a8\", r\"$\\\\Uppsi$\"),\n (\"\\u03a9\", r\"$\\\\Upomega$\"),\n # lowercase greek letters\n (\"\\u03b1\", r\"$\\\\upalpha$\"),\n (\"\\u03b2\", r\"$\\\\upbeta$\"),\n (\"\\u03b3\", r\"$\\\\upgamma$\"),\n (\"\\u03b4\", r\"$\\\\updelta$\"),\n (\"\\u03b5\", r\"$\\\\upepsilon$\"),\n (\"\\u03b6\", r\"$\\\\upzeta$\"),\n (\"\\u03b7\", r\"$\\\\upeta$\"),\n (\"\\u03b8\", r\"$\\\\uptheta$\"),\n (\"\\u03b9\", r\"$\\\\upiota$\"),\n (\"\\u03ba\", r\"$\\\\upkappa$\"),\n (\"\\u03bb\", r\"$\\\\uplambda$\"),\n (\"\\u03bc\", r\"$\\\\upmu$\"),\n (\"\\u03bd\", r\"$\\\\upnu$\"),\n (\"\\u03be\", r\"$\\\\upxi$\"),\n (\"\\u03bf\", r\"o\"), # no \\upomicron\n (\"\\u03c0\", r\"$\\\\uppi$\"),\n (\"\\u03c1\", r\"$\\\\uprho$\"),\n (\"\\u03c2\", r\"$\\\\upvarsigma$\"),\n (\"\\u03c3\", r\"$\\\\upsigma$\"),\n (\"\\u03c4\", r\"$\\\\uptau$\"),\n (\"\\u03c5\", r\"$\\\\upupsilon$\"),\n (\"\\u03c6\", r\"$\\\\upvarphi$\"),\n (\"\\u03c7\", r\"$\\\\upchi$\"),\n (\"\\u03c8\", r\"$\\\\uppsi$\"),\n (\"\\u03c9\", r\"$\\\\upomega$\"),\n # acutes, accents, etc...\n (\"\\u03ae\", r\"$\\\\acute{\\\\upeta}$\"),\n 
(\"\\u1e17\", r\"$\\\\acute{\\\\bar{\\\\mathrm{e}}}$\"),\n (\"\\u03ac\", r\"$\\\\acute{\\\\upalpha}$\"),\n (\"\\u00e1\", r\"$\\\\acute{\\\\mathrm{a}}$\"),\n (\"\\u03cc\", r\"$\\\\acute{o}$\"), # no \\upomicron\n (\"\\u014d\", r\"$\\\\bar{\\\\mathrm{o}}$\"),\n (\"\\u1f45\", r\"$\\\\acute{o}$\"), # no \\omicron\n ],\n }\n _html_style = {\n \"sep\": \"<p>\\n\",\n \"figwidth\": 'width=\"{width:.0f}\"',\n \"figure\": '<img src=\"{path}\" alt=\"{caption}\"{figwidth}>',\n \"header\": '<h{level} id=\"{anchor}\">{label}</h{level}>',\n \"link\": '<a href=\"{ref}\">{label}</a>',\n \"point\": \" <li>{point}</li>\\n\",\n \"points\": \" <ul>\\n {points}\\n </ul>\\n\",\n \"annotation\": \" <dd><strong>{key}:</strong>\\n{value} </dd>\\n\",\n \"substitutions\": [\n (r\"&\", r\"‒\"),\n (r\"<p>\", r\"<p>\\n\\n\"),\n (r\"\\u2018([^\\u2019]*)\\u2019\", r\"<q>\\1</q>\"),\n (r\"\\u2019\", r\"'\"),\n (r\"\\u2260\", r\"≠\"),\n (r\"\\u2264\", r\"≤\"),\n (r\"\\u2265\", r\"≥\"),\n (r\"\\u226A\", r\"&x226A;\"),\n (r\"\\u226B\", r\"&x226B;\"),\n (r'\"Y$', r\"\"), # strange noice added by owlready2\n ],\n }\n\n def __init__(self, onto, style=\"markdown\"):\n if isinstance(style, str):\n if style == \"markdown_tex\":\n style = self._markdown_style.copy()\n style.update(self._markdown_tex_extra_style)\n else:\n style = getattr(self, f\"_{style}_style\")\n self.onto = onto\n self.style = style\n self.url_regex = re.compile(r\"https?:\\/\\/[^\\s ]+\")\n\n def get_default_template(self):\n \"\"\"Returns default template.\"\"\"\n title = os.path.splitext(\n os.path.basename(self.onto.base_iri.rstrip(\"/#\"))\n )[0]\n irilink = self.style.get(\"link\", \"{name}\").format(\n iri=self.onto.base_iri,\n name=self.onto.base_iri,\n ref=self.onto.base_iri,\n label=self.onto.base_iri,\n lowerlabel=self.onto.base_iri,\n )\n template = dedent(\n \"\"\"\\\n %HEADER {title}\n Documentation of {irilink}\n\n %HEADER Relations level=2\n %ALL object_properties\n\n %HEADER Classes level=2\n %ALL classes\n\n %HEADER Individuals level=2\n %ALL individuals\n\n %HEADER Appendix level=1\n %HEADER \"Relation taxonomies\" level=2\n %ALLFIG object_properties\n\n %HEADER \"Class taxonomies\" level=2\n %ALLFIG classes\n \"\"\"\n ).format(ontology=self.onto, title=title, irilink=irilink)\n return template\n\n def get_header(self, label, header_level=1, anchor=None):\n \"\"\"Returns `label` formatted as a header of given level.\"\"\"\n header_style = self.style.get(\"header\", \"{label}\\n\")\n return header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor if anchor else label.lower().replace(\" \", \"-\"),\n )\n\n def get_figure(self, path, caption=\"\", width=None):\n \"\"\"Returns a formatted insert-figure-directive.\"\"\"\n figwidth_style = self.style.get(\"figwidth\", \"\")\n figure_style = self.style.get(\"figure\", \"\")\n figwidth = figwidth_style.format(width=width) if width else \"\"\n return figure_style.format(\n path=path, caption=caption, figwidth=figwidth\n )\n\n def itemdoc(\n self, item, header_level=3, show_disjoints=False\n ): # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n \"\"\"Returns documentation of `item`.\n\n Parameters\n ----------\n item : obj | label\n The class, individual or relation to document.\n header_level : int\n Header level. 
Defaults to 3.\n show_disjoints : Bool\n Whether to show `disjoint_with` relations.\n \"\"\"\n onto = self.onto\n if isinstance(item, str):\n item = self.onto.get_by_label(item)\n\n header_style = self.style.get(\"header\", \"{label}\\n\")\n link_style = self.style.get(\"link\", \"{name}\")\n point_style = self.style.get(\"point\", \"{point}\")\n points_style = self.style.get(\"points\", \"{points}\")\n annotation_style = self.style.get(\"annotation\", \"{key}: {value}\\n\")\n substitutions = self.style.get(\"substitutions\", [])\n\n # Logical \"sorting\" of annotations\n order = {\n \"definition\": \"00\",\n \"axiom\": \"01\",\n \"theorem\": \"02\",\n \"elucidation\": \"03\",\n \"domain\": \"04\",\n \"range\": \"05\",\n \"example\": \"06\",\n }\n\n doc = []\n\n # Header\n label = get_label(item)\n iriname = item.iri.partition(\"#\")[2]\n anchor = iriname if iriname else label.lower()\n doc.append(\n header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor,\n )\n )\n\n # Add warning about missing prefLabel\n if not hasattr(item, \"prefLabel\") or not item.prefLabel.first():\n doc.append(\n annotation_style.format(\n key=\"Warning\", value=\"Missing prefLabel\"\n )\n )\n\n # Add iri\n doc.append(\n annotation_style.format(\n key=\"IRI\",\n value=asstring(item.iri, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # Add annotations\n if isinstance(item, owlready2.Thing):\n annotations = item.get_individual_annotations()\n else:\n annotations = item.get_annotations()\n\n for key in sorted(\n annotations.keys(), key=lambda key: order.get(key, key)\n ):\n for value in annotations[key]:\n value = str(value)\n if self.url_regex.match(value):\n doc.append(\n annotation_style.format(\n key=key,\n value=asstring(value, link_style, ontology=onto),\n )\n )\n else:\n for reg, sub in substitutions:\n value = re.sub(reg, sub, value)\n doc.append(annotation_style.format(key=key, value=value))\n\n # ...add relations from is_a\n points = []\n non_prop = (\n owlready2.ThingClass, # owlready2.Restriction,\n owlready2.And,\n owlready2.Or,\n owlready2.Not,\n )\n for prop in item.is_a:\n if isinstance(prop, non_prop) or (\n isinstance(item, owlready2.PropertyClass)\n and isinstance(prop, owlready2.PropertyClass)\n ):\n points.append(\n point_style.format(\n point=\"is_a \"\n + asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n else:\n points.append(\n point_style.format(\n point=asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # ...add equivalent_to relations\n for entity in item.equivalent_to:\n points.append(\n point_style.format(\n point=\"equivalent_to \"\n + asstring(entity, link_style, ontology=onto)\n )\n )\n\n # ...add disjoint_with relations\n if show_disjoints and hasattr(item, \"disjoint_with\"):\n subjects = set(item.disjoint_with(reduce=True))\n points.append(\n point_style.format(\n point=\"disjoint_with \"\n + \", \".join(\n asstring(s, link_style, ontology=onto) for s in subjects\n ),\n ontology=onto,\n )\n )\n\n # ...add disjoint_unions\n if hasattr(item, \"disjoint_unions\"):\n for unions in item.disjoint_unions:\n string = \", \".join(\n asstring(u, link_style, ontology=onto) for u in unions\n )\n points.append(\n point_style.format(\n point=f\"disjoint_union_of {string}\", ontology=onto\n )\n )\n\n # ...add inverse_of relations\n if hasattr(item, \"inverse_property\") and item.inverse_property:\n points.append(\n point_style.format(\n point=\"inverse_of \"\n + asstring(item.inverse_property, link_style, ontology=onto)\n 
)\n )\n\n # ...add domain restrictions\n for domain in getattr(item, \"domain\", ()):\n points.append(\n point_style.format(\n point=\"domain \"\n + asstring(domain, link_style, ontology=onto)\n )\n )\n\n # ...add range restrictions\n for restriction in getattr(item, \"range\", ()):\n points.append(\n point_style.format(\n point=\"range \"\n + asstring(restriction, link_style, ontology=onto)\n )\n )\n\n # Add points (from is_a)\n if points:\n value = points_style.format(points=\"\".join(points), ontology=onto)\n doc.append(\n annotation_style.format(\n key=\"Subclass of\", value=value, ontology=onto\n )\n )\n\n # Instances (individuals)\n if hasattr(item, \"instances\"):\n points = []\n\n for instance in item.instances():\n if isinstance(instance.is_instance_of, property):\n warnings.warn(\n f'Ignoring instance \"{instance}\" which is both and '\n \"indivudual and class. Ontodoc does not support \"\n \"punning at the present moment.\"\n )\n continue\n if item in instance.is_instance_of:\n points.append(\n point_style.format(\n point=asstring(instance, link_style, ontology=onto),\n ontology=onto,\n )\n )\n if points:\n value = points_style.format(\n points=\"\".join(points), ontology=onto\n )\n doc.append(\n annotation_style.format(\n key=\"Individuals\", value=value, ontology=onto\n )\n )\n\n return \"\\n\".join(doc)\n\n def itemsdoc(self, items, header_level=3):\n \"\"\"Returns documentation of `items`.\"\"\"\n sep_style = self.style.get(\"sep\", \"\\n\")\n doc = []\n for item in items:\n doc.append(self.itemdoc(item, header_level))\n doc.append(sep_style.format(ontology=self.onto))\n return \"\\n\".join(doc)\n
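A minimal usage sketch (assuming EMMOntoPy is installed; the ontology file name and the class label "MyClass" are placeholders):

from ontopy import get_ontology
from ontopy.ontodoc import OntoDoc

onto = get_ontology("myonto.ttl").load()   # hypothetical ontology file
ontodoc = OntoDoc(onto, style="markdown")

# Markdown documentation of a single entity, looked up by label
print(ontodoc.itemdoc("MyClass", header_level=2))

# Default documentation template covering relations, classes and individuals
print(ontodoc.get_default_template())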
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.get_default_template","title":"get_default_template(self)
","text":"Returns default template.
Source code inontopy/ontodoc.py
def get_default_template(self):\n \"\"\"Returns default template.\"\"\"\n title = os.path.splitext(\n os.path.basename(self.onto.base_iri.rstrip(\"/#\"))\n )[0]\n irilink = self.style.get(\"link\", \"{name}\").format(\n iri=self.onto.base_iri,\n name=self.onto.base_iri,\n ref=self.onto.base_iri,\n label=self.onto.base_iri,\n lowerlabel=self.onto.base_iri,\n )\n template = dedent(\n \"\"\"\\\n %HEADER {title}\n Documentation of {irilink}\n\n %HEADER Relations level=2\n %ALL object_properties\n\n %HEADER Classes level=2\n %ALL classes\n\n %HEADER Individuals level=2\n %ALL individuals\n\n %HEADER Appendix level=1\n %HEADER \"Relation taxonomies\" level=2\n %ALLFIG object_properties\n\n %HEADER \"Class taxonomies\" level=2\n %ALLFIG classes\n \"\"\"\n ).format(ontology=self.onto, title=title, irilink=irilink)\n return template\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.get_figure","title":"get_figure(self, path, caption='', width=None)
","text":"Returns a formatted insert-figure-directive.
Source code inontopy/ontodoc.py
def get_figure(self, path, caption=\"\", width=None):\n \"\"\"Returns a formatted insert-figure-directive.\"\"\"\n figwidth_style = self.style.get(\"figwidth\", \"\")\n figure_style = self.style.get(\"figure\", \"\")\n figwidth = figwidth_style.format(width=width) if width else \"\"\n return figure_style.format(\n path=path, caption=caption, figwidth=figwidth\n )\n
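Continuing the OntoDoc sketch above, the markdown style formats the figure from its figure and figwidth template strings (the path and caption here are made-up examples):

fig = ontodoc.get_figure("genfigs/atom.png", caption="Taxonomy of Atom.", width=500)
# -> "![Taxonomy of Atom.](genfigs/atom.png){ width=500px }\n"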
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.get_header","title":"get_header(self, label, header_level=1, anchor=None)
","text":"Returns label
formatted as a header of given level.
ontopy/ontodoc.py
def get_header(self, label, header_level=1, anchor=None):\n \"\"\"Returns `label` formatted as a header of given level.\"\"\"\n header_style = self.style.get(\"header\", \"{label}\\n\")\n return header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor if anchor else label.lower().replace(\" \", \"-\"),\n )\n
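Continuing the same sketch, the markdown header template produces an ATX header with an anchor derived from the label:

header = ontodoc.get_header("Class taxonomies", header_level=2)
# -> "\n## Class taxonomies {#class-taxonomies}"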
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.itemdoc","title":"itemdoc(self, item, header_level=3, show_disjoints=False)
","text":"Returns documentation of item
.
item : obj | label The class, individual or relation to document. header_level : int Header level. Defaults to 3. show_disjoints : Bool Whether to show disjoint_with
relations.
ontopy/ontodoc.py
def itemdoc(\n self, item, header_level=3, show_disjoints=False\n): # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n \"\"\"Returns documentation of `item`.\n\n Parameters\n ----------\n item : obj | label\n The class, individual or relation to document.\n header_level : int\n Header level. Defaults to 3.\n show_disjoints : Bool\n Whether to show `disjoint_with` relations.\n \"\"\"\n onto = self.onto\n if isinstance(item, str):\n item = self.onto.get_by_label(item)\n\n header_style = self.style.get(\"header\", \"{label}\\n\")\n link_style = self.style.get(\"link\", \"{name}\")\n point_style = self.style.get(\"point\", \"{point}\")\n points_style = self.style.get(\"points\", \"{points}\")\n annotation_style = self.style.get(\"annotation\", \"{key}: {value}\\n\")\n substitutions = self.style.get(\"substitutions\", [])\n\n # Logical \"sorting\" of annotations\n order = {\n \"definition\": \"00\",\n \"axiom\": \"01\",\n \"theorem\": \"02\",\n \"elucidation\": \"03\",\n \"domain\": \"04\",\n \"range\": \"05\",\n \"example\": \"06\",\n }\n\n doc = []\n\n # Header\n label = get_label(item)\n iriname = item.iri.partition(\"#\")[2]\n anchor = iriname if iriname else label.lower()\n doc.append(\n header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor,\n )\n )\n\n # Add warning about missing prefLabel\n if not hasattr(item, \"prefLabel\") or not item.prefLabel.first():\n doc.append(\n annotation_style.format(\n key=\"Warning\", value=\"Missing prefLabel\"\n )\n )\n\n # Add iri\n doc.append(\n annotation_style.format(\n key=\"IRI\",\n value=asstring(item.iri, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # Add annotations\n if isinstance(item, owlready2.Thing):\n annotations = item.get_individual_annotations()\n else:\n annotations = item.get_annotations()\n\n for key in sorted(\n annotations.keys(), key=lambda key: order.get(key, key)\n ):\n for value in annotations[key]:\n value = str(value)\n if self.url_regex.match(value):\n doc.append(\n annotation_style.format(\n key=key,\n value=asstring(value, link_style, ontology=onto),\n )\n )\n else:\n for reg, sub in substitutions:\n value = re.sub(reg, sub, value)\n doc.append(annotation_style.format(key=key, value=value))\n\n # ...add relations from is_a\n points = []\n non_prop = (\n owlready2.ThingClass, # owlready2.Restriction,\n owlready2.And,\n owlready2.Or,\n owlready2.Not,\n )\n for prop in item.is_a:\n if isinstance(prop, non_prop) or (\n isinstance(item, owlready2.PropertyClass)\n and isinstance(prop, owlready2.PropertyClass)\n ):\n points.append(\n point_style.format(\n point=\"is_a \"\n + asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n else:\n points.append(\n point_style.format(\n point=asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # ...add equivalent_to relations\n for entity in item.equivalent_to:\n points.append(\n point_style.format(\n point=\"equivalent_to \"\n + asstring(entity, link_style, ontology=onto)\n )\n )\n\n # ...add disjoint_with relations\n if show_disjoints and hasattr(item, \"disjoint_with\"):\n subjects = set(item.disjoint_with(reduce=True))\n points.append(\n point_style.format(\n point=\"disjoint_with \"\n + \", \".join(\n asstring(s, link_style, ontology=onto) for s in subjects\n ),\n ontology=onto,\n )\n )\n\n # ...add disjoint_unions\n if hasattr(item, \"disjoint_unions\"):\n for unions in item.disjoint_unions:\n string = \", \".join(\n asstring(u, link_style, ontology=onto) for u in unions\n )\n 
points.append(\n point_style.format(\n point=f\"disjoint_union_of {string}\", ontology=onto\n )\n )\n\n # ...add inverse_of relations\n if hasattr(item, \"inverse_property\") and item.inverse_property:\n points.append(\n point_style.format(\n point=\"inverse_of \"\n + asstring(item.inverse_property, link_style, ontology=onto)\n )\n )\n\n # ...add domain restrictions\n for domain in getattr(item, \"domain\", ()):\n points.append(\n point_style.format(\n point=\"domain \"\n + asstring(domain, link_style, ontology=onto)\n )\n )\n\n # ...add range restrictions\n for restriction in getattr(item, \"range\", ()):\n points.append(\n point_style.format(\n point=\"range \"\n + asstring(restriction, link_style, ontology=onto)\n )\n )\n\n # Add points (from is_a)\n if points:\n value = points_style.format(points=\"\".join(points), ontology=onto)\n doc.append(\n annotation_style.format(\n key=\"Subclass of\", value=value, ontology=onto\n )\n )\n\n # Instances (individuals)\n if hasattr(item, \"instances\"):\n points = []\n\n for instance in item.instances():\n if isinstance(instance.is_instance_of, property):\n warnings.warn(\n f'Ignoring instance \"{instance}\" which is both and '\n \"indivudual and class. Ontodoc does not support \"\n \"punning at the present moment.\"\n )\n continue\n if item in instance.is_instance_of:\n points.append(\n point_style.format(\n point=asstring(instance, link_style, ontology=onto),\n ontology=onto,\n )\n )\n if points:\n value = points_style.format(\n points=\"\".join(points), ontology=onto\n )\n doc.append(\n annotation_style.format(\n key=\"Individuals\", value=value, ontology=onto\n )\n )\n\n return \"\\n\".join(doc)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.itemsdoc","title":"itemsdoc(self, items, header_level=3)
","text":"Returns documentation of items
.
ontopy/ontodoc.py
def itemsdoc(self, items, header_level=3):\n \"\"\"Returns documentation of `items`.\"\"\"\n sep_style = self.style.get(\"sep\", \"\\n\")\n doc = []\n for item in items:\n doc.append(self.itemdoc(item, header_level))\n doc.append(sep_style.format(ontology=self.onto))\n return \"\\n\".join(doc)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.append_pandoc_options","title":"append_pandoc_options(options, updates)
","text":"Append updates
to pandoc options options
.
options : sequence Sequence with initial Pandoc options. updates : sequence of str Sequence of strings of the form \"--longoption=value\", where longoption
is a valid pandoc long option and value
is the new value. The \"=value\" part is optional.
Strings of the form \"no-longoption\" will filter out \"--longoption\"\nfrom `options`.\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.append_pandoc_options--returns","title":"Returns","text":"new_options : list Updated pandoc options.
Source code inontopy/ontodoc.py
def append_pandoc_options(options, updates):\n \"\"\"Append `updates` to pandoc options `options`.\n\n Parameters\n ----------\n options : sequence\n Sequence with initial Pandoc options.\n updates : sequence of str\n Sequence of strings of the form \"--longoption=value\", where\n ``longoption`` is a valid pandoc long option and ``value`` is the\n new value. The \"=value\" part is optional.\n\n Strings of the form \"no-longoption\" will filter out \"--longoption\"\n from `options`.\n\n Returns\n -------\n new_options : list\n Updated pandoc options.\n \"\"\"\n # Valid pandoc options starting with \"--no-XXX\"\n no_options = set(\"no-highlight\")\n\n if not updates:\n return list(options)\n\n curated_updates = {}\n for update in updates:\n key, sep, value = update.partition(\"=\")\n curated_updates[key.lstrip(\"-\")] = value if sep else None\n filter_out = set(\n _\n for _ in curated_updates\n if _.startswith(\"no-\") and _ not in no_options\n )\n _filter_out = set(f\"--{_[3:]}\" for _ in filter_out)\n new_options = [\n opt for opt in options if opt.partition(\"=\")[0] not in _filter_out\n ]\n new_options.extend(\n [\n f\"--{key}\" if value is None else f\"--{key}={value}\"\n for key, value in curated_updates.items()\n if key not in filter_out\n ]\n )\n return new_options\n
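A small self-contained sketch (the option values are arbitrary examples):

from ontopy.ontodoc import append_pandoc_options

options = ["--standalone", "--toc", "--pdf-engine=pdflatex"]
updates = ["no-toc", "--number-sections"]
print(append_pandoc_options(options, updates))
# ['--standalone', '--pdf-engine=pdflatex', '--number-sections']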
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_docpp","title":"get_docpp(ontodoc, infile, figdir='genfigs', figformat='png', maxwidth=None, imported=False)
","text":"Read infile
and return a new docpp instance.
ontopy/ontodoc.py
def get_docpp( # pylint: disable=too-many-arguments\n ontodoc,\n infile,\n figdir=\"genfigs\",\n figformat=\"png\",\n maxwidth=None,\n imported=False,\n):\n \"\"\"Read `infile` and return a new docpp instance.\"\"\"\n if infile:\n with open(infile, \"rt\") as handle:\n template = handle.read()\n basedir = os.path.dirname(infile)\n else:\n template = ontodoc.get_default_template()\n basedir = \".\"\n\n docpp = DocPP(\n template,\n ontodoc,\n basedir=basedir,\n figdir=figdir,\n figformat=figformat,\n maxwidth=maxwidth,\n imported=imported,\n )\n\n return docpp\n
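A minimal end-to-end sketch (assuming EMMOntoPy is installed; the file names are hypothetical):

from ontopy import get_ontology
from ontopy.ontodoc import OntoDoc, get_docpp

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file
ontodoc = OntoDoc(onto, style="markdown")
docpp = get_docpp(ontodoc, infile=None)   # no input template -> use the default template
docpp.write("myonto.md", fmt="md")        # "md" uses the built-in generator, so pandoc is not needed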
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_figformat","title":"get_figformat(fmt)
","text":"Infer preferred figure format from output format.
Source code inontopy/ontodoc.py
def get_figformat(fmt):
    """Infer preferred figure format from output format."""
    if fmt == "pdf":
        figformat = "pdf"  # XXX
    elif "html" in fmt:
        figformat = "svg"
    else:
        figformat = "png"
    return figformat
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_maxwidth","title":"get_maxwidth(fmt)
","text":"Infer preferred max figure width from output format.
Source code inontopy/ontodoc.py
def get_maxwidth(fmt):
    """Infer preferred max figure width from output format."""
    if fmt == "pdf":
        maxwidth = 668
    else:
        maxwidth = 1024
    return maxwidth
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_options","title":"get_options(opts, **kwargs)
","text":"Returns a dict with options from the sequence opts
with \"name=value\" pairs. Valid option names and default values are provided with the keyword arguments.
ontopy/ontodoc.py
def get_options(opts, **kwargs):\n \"\"\"Returns a dict with options from the sequence `opts` with\n \"name=value\" pairs. Valid option names and default values are\n provided with the keyword arguments.\"\"\"\n res = AttributeDict(kwargs)\n for opt in opts:\n if \"=\" not in opt:\n raise InvalidTemplateError(\n f'Missing \"=\" in template option: {opt!r}'\n )\n name, value = opt.split(\"=\", 1)\n if name not in res:\n raise InvalidTemplateError(f\"Invalid template option: {name!r}\")\n res_type = type(res[name])\n res[name] = res_type(value)\n return res\n
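A small self-contained sketch of the option parsing; note that each value is cast to the type of its default:

from ontopy.ontodoc import get_options

opts = get_options(["level=2", "caption=Taxonomy of Atom."], level=3, caption="", width=0)
print(opts.level)    # 2 (int, like its default)
print(opts.caption)  # Taxonomy of Atom.
print(opts.width)    # 0 (default kept)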
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_style","title":"get_style(fmt)
","text":"Infer style from output format.
Source code inontopy/ontodoc.py
def get_style(fmt):
    """Infer style from output format."""
    if fmt == "simple-html":
        style = "html"
    elif fmt in ("tex", "latex", "pdf"):
        style = "markdown_tex"
    else:
        style = "markdown"
    return style
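Together with get_figformat() and get_maxwidth() above, this helper maps the requested output format to sensible defaults. A small sketch:

from ontopy.ontodoc import get_figformat, get_maxwidth, get_style

for fmt in ("pdf", "html", "md"):
    print(fmt, get_style(fmt), get_figformat(fmt), get_maxwidth(fmt))
# pdf  markdown_tex pdf 668
# html markdown     svg 1024
# md   markdown     png 1024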
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.load_pandoc_option_file","title":"load_pandoc_option_file(yamlfile)
","text":"Loads pandoc options from yamlfile
and return a list with corresponding pandoc command line arguments.
ontopy/ontodoc.py
def load_pandoc_option_file(yamlfile):\n \"\"\"Loads pandoc options from `yamlfile` and return a list with\n corresponding pandoc command line arguments.\"\"\"\n with open(yamlfile) as handle:\n pandoc_options = yaml.safe_load(handle)\n options = pandoc_options.pop(\"input-files\", [])\n variables = pandoc_options.pop(\"variables\", {})\n\n for key, value in pandoc_options.items():\n if isinstance(value, bool):\n if value:\n options.append(f\"--{key}\")\n else:\n options.append(f\"--{key}={value}\")\n\n for key, value in variables.items():\n if key == \"date\" and value == \"now\":\n value = time.strftime(\"%B %d, %Y\")\n options.append(f\"--variable={key}:{value}\")\n\n return options\n
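A minimal sketch, assuming a hypothetical option file pandoc-html-options.yaml:

from ontopy.ontodoc import load_pandoc_option_file

# Hypothetical content of pandoc-html-options.yaml:
#   standalone: true
#   toc: false
#   css: style.css
#   variables:
#     title: My ontology
options = load_pandoc_option_file("pandoc-html-options.yaml")
# -> ['--standalone', '--css=style.css', '--variable=title:My ontology']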
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.run_pandoc","title":"run_pandoc(genfile, outfile, fmt, pandoc_option_files=(), pandoc_options=(), verbose=True)
","text":"Runs pandoc.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.run_pandoc--parameters","title":"Parameters","text":"genfile : str Name of markdown input file. outfile : str Output file name. fmt : str Output format. pandoc_option_files : sequence List of files with additional pandoc options. Default is to read \"pandoc-options.yaml\" and \"pandoc-FORMAT-options.yml\", where FORMAT
is the output format. pandoc_options : sequence Additional pandoc options overriding options read from pandoc_option_files
. verbose : bool Whether to print the pandoc command before execution.
subprocess.CalledProcessError If the pandoc process returns with non-zero status. The returncode
attribute will hold the exit code.
ontopy/ontodoc.py
def run_pandoc( # pylint: disable=too-many-arguments\n genfile,\n outfile,\n fmt,\n pandoc_option_files=(),\n pandoc_options=(),\n verbose=True,\n):\n \"\"\"Runs pandoc.\n\n Parameters\n ----------\n genfile : str\n Name of markdown input file.\n outfile : str\n Output file name.\n fmt : str\n Output format.\n pandoc_option_files : sequence\n List of files with additional pandoc options. Default is to read\n \"pandoc-options.yaml\" and \"pandoc-FORMAT-options.yml\", where\n `FORMAT` is the output format.\n pandoc_options : sequence\n Additional pandoc options overriding options read from\n `pandoc_option_files`.\n verbose : bool\n Whether to print the pandoc command before execution.\n\n Raises\n ------\n subprocess.CalledProcessError\n If the pandoc process returns with non-zero status. The `returncode`\n attribute will hold the exit code.\n \"\"\"\n # Create pandoc argument list\n args = [genfile]\n files = [\"pandoc-options.yaml\", f\"pandoc-{fmt}-options.yaml\"]\n if pandoc_option_files:\n files = pandoc_option_files\n for fname in files:\n if os.path.exists(fname):\n args.extend(load_pandoc_option_file(fname))\n else:\n warnings.warn(f\"missing pandoc option file: {fname}\")\n\n # Update pandoc argument list\n args = append_pandoc_options(args, pandoc_options)\n\n # pdf output requires a special attention...\n if fmt == \"pdf\":\n pdf_engine = \"pdflatex\"\n for arg in args:\n if arg.startswith(\"--pdf-engine\"):\n pdf_engine = arg.split(\"=\", 1)[1]\n break\n with TemporaryDirectory() as tmpdir:\n run_pandoc_pdf(tmpdir, pdf_engine, outfile, args, verbose=verbose)\n else:\n args.append(f\"--output={outfile}\")\n cmd = [\"pandoc\"] + args\n if verbose:\n print()\n print(\"* Executing command:\")\n print(\" \".join(shlex.quote(_) for _ in cmd))\n subprocess.check_call(cmd) # nosec\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.run_pandoc_pdf","title":"run_pandoc_pdf(latex_dir, pdf_engine, outfile, args, verbose=True)
","text":"Run pandoc for pdf generation.
Source code inontopy/ontodoc.py
def run_pandoc_pdf(latex_dir, pdf_engine, outfile, args, verbose=True):\n \"\"\"Run pandoc for pdf generation.\"\"\"\n basename = os.path.join(\n latex_dir, os.path.splitext(os.path.basename(outfile))[0]\n )\n\n # Run pandoc\n texfile = basename + \".tex\"\n args.append(f\"--output={texfile}\")\n cmd = [\"pandoc\"] + args\n if verbose:\n print()\n print(\"* Executing commands:\")\n print(\" \".join(shlex.quote(s) for s in cmd))\n subprocess.check_call(cmd) # nosec\n\n # Fixing tex output\n texfile2 = basename + \"2.tex\"\n with open(texfile, \"rt\") as handle:\n content = handle.read().replace(r\"\\$\\Uptheta\\$\", r\"$\\Uptheta$\")\n with open(texfile2, \"wt\") as handle:\n handle.write(content)\n\n # Run latex\n pdffile = basename + \"2.pdf\"\n cmd = [\n pdf_engine,\n texfile2,\n \"-halt-on-error\",\n f\"-output-directory={latex_dir}\",\n ]\n if verbose:\n print()\n print(\" \".join(shlex.quote(s) for s in cmd))\n output = subprocess.check_output(cmd, timeout=60) # nosec\n output = subprocess.check_output(cmd, timeout=60) # nosec\n\n # Workaround for non-working \"-output-directory\" latex option\n if not os.path.exists(pdffile):\n if os.path.exists(os.path.basename(pdffile)):\n pdffile = os.path.basename(pdffile)\n for ext in \"aux\", \"out\", \"toc\", \"log\":\n filename = os.path.splitext(pdffile)[0] + \".\" + ext\n if os.path.exists(filename):\n os.remove(filename)\n else:\n print()\n print(output)\n print()\n raise RuntimeError(\"latex did not produce pdf file: \" + pdffile)\n\n # Copy pdffile\n if not os.path.exists(outfile) or not os.path.samefile(pdffile, outfile):\n if verbose:\n print()\n print(f\"move {pdffile} to {outfile}\")\n shutil.move(pdffile, outfile)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/","title":"ontodoc_rst","text":"A module for documenting ontologies.
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation","title":" ModuleDocumentation
","text":"Class for documentating a module in an ontology.
Parameters:
Name Type Description Defaultontology
Optional[Ontology]
Ontology to include in the generated documentation. All entities in this ontology will be included.
None
entities
Optional[Iterable[Entity]]
Explicit listing of entities (classes, properties, individuals, datatypes) to document. Normally not needed.
None
title
Optional[str]
Header title. Be default it is inferred from title of
None
iri_regex
Optional[str]
A regular expression that the IRI of documented entities should match.
None
Source code in ontopy/ontodoc_rst.py
class ModuleDocumentation:\n \"\"\"Class for documentating a module in an ontology.\n\n Arguments:\n ontology: Ontology to include in the generated documentation.\n All entities in this ontology will be included.\n entities: Explicit listing of entities (classes, properties,\n individuals, datatypes) to document. Normally not needed.\n title: Header title. Be default it is inferred from title of\n iri_regex: A regular expression that the IRI of documented entities\n should match.\n \"\"\"\n\n def __init__(\n self,\n ontology: \"Optional[Ontology]\" = None,\n entities: \"Optional[Iterable[Entity]]\" = None,\n title: \"Optional[str]\" = None,\n iri_regex: \"Optional[str]\" = None,\n ) -> None:\n self.ontology = ontology\n self.title = title\n self.iri_regex = iri_regex\n self.graph = (\n ontology.world.as_rdflib_graph() if ontology else rdflib.Graph()\n )\n self.classes = set()\n self.object_properties = set()\n self.data_properties = set()\n self.annotation_properties = set()\n self.individuals = set()\n self.datatypes = set()\n\n if ontology:\n self.add_ontology(ontology)\n\n if entities:\n for entity in entities:\n self.add_entity(entity)\n\n def nonempty(self) -> bool:\n \"\"\"Returns whether the module has any classes, properties, individuals\n or datatypes.\"\"\"\n return (\n self.classes\n or self.object_properties\n or self.data_properties\n or self.annotation_properties\n or self.individuals\n or self.datatypes\n )\n\n def add_entity(self, entity: \"Entity\") -> None:\n \"\"\"Add `entity` (class, property, individual, datatype) to list of\n entities to document.\n \"\"\"\n if self.iri_regex and not re.match(self.iri_regex, entity.iri):\n return\n\n if isinstance(entity, owlready2.ThingClass):\n self.classes.add(entity)\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.DataPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.Thing):\n if (\n hasattr(entity.__class__, \"iri\")\n and entity.__class__.iri\n == \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n ):\n self.datatypes.add(entity)\n else:\n self.individuals.add(entity)\n\n def add_ontology(\n self, ontology: \"Ontology\", imported: bool = False\n ) -> None:\n \"\"\"Add ontology to documentation.\"\"\"\n for entity in ontology.get_entities(imported=imported):\n self.add_entity(entity)\n\n def get_title(self) -> str:\n \"\"\"Return a module title.\"\"\"\n iri = self.ontology.base_iri.rstrip(\"#/\")\n if self.title:\n title = self.title\n elif self.ontology:\n title = self.graph.value(URIRef(iri), DCTERMS.title)\n if not title:\n title = iri.rsplit(\"/\", 1)[-1]\n return title\n\n def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n heading = f\"Module: {self.get_title()}\"\n return f\"\"\"\n\n{heading.title()}\n{'='*len(heading)}\n\n\"\"\"\n\n def get_refdoc(\n self,\n subsections: str = \"all\",\n header: bool = True,\n ) -> str:\n # pylint: disable=too-many-branches,too-many-locals\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n subsections: Comma-separated list of subsections to include in\n the returned documentation. 
Valid subsection names are:\n - classes\n - object_properties\n - data_properties\n - annotation_properties\n - individuals\n - datatypes\n If \"all\", all subsections will be documented.\n header: Whether to also include the header in the returned\n documentation.\n\n Returns:\n String with reference documentation.\n \"\"\"\n # pylint: disable=too-many-nested-blocks\n if subsections == \"all\":\n subsections = (\n \"classes,object_properties,data_properties,\"\n \"annotation_properties,individuals,datatypes\"\n )\n\n maps = {\n \"classes\": self.classes,\n \"object_properties\": self.object_properties,\n \"data_properties\": self.data_properties,\n \"annotation_properties\": self.annotation_properties,\n \"individuals\": self.individuals,\n \"datatypes\": self.datatypes,\n }\n lines = []\n\n if header:\n lines.append(self.get_header())\n\n def add_header(name):\n clsname = f\"element-table-{name.lower().replace(' ', '-')}\"\n lines.extend(\n [\n \" <tr>\",\n f' <th class=\"{clsname}\" colspan=\"2\">{name}</th>',\n \" </tr>\",\n ]\n )\n\n def add_keyvalue(key, value, escape=True, htmllink=True):\n \"\"\"Help function for adding a key-value row to table.\"\"\"\n if escape:\n value = html.escape(str(value))\n if htmllink:\n value = re.sub(\n r\"(https?://[^\\s]+)\", r'<a href=\"\\1\">\\1</a>', value\n )\n value = value.replace(\"\\n\", \"<br>\")\n lines.extend(\n [\n \" <tr>\",\n ' <td class=\"element-table-key\">'\n f'<span class=\"element-table-key\">'\n f\"{key.title()}</span></td>\",\n f' <td class=\"element-table-value\">{value}</td>',\n \" </tr>\",\n ]\n )\n\n for subsection in subsections.split(\",\"):\n if maps[subsection]:\n moduletitle = self.get_title().lower().replace(\" \", \"-\")\n anchor = f\"{moduletitle}-{subsection.replace('_', '-')}\"\n lines.extend(\n [\n \"\",\n f\".. _{anchor}:\",\n \"\",\n subsection.replace(\"_\", \" \").title(),\n \"-\" * len(subsection),\n \"\",\n ]\n )\n for entity in sorted(maps[subsection], key=get_label):\n label = get_label(entity)\n lines.extend(\n [\n \".. raw:: html\",\n \"\",\n f' <div id=\"{entity.name}\"></div>',\n \"\",\n f\"{label}\",\n \"^\" * len(label),\n \"\",\n \".. raw:: html\",\n \"\",\n ' <table class=\"element-table\">',\n ]\n )\n add_keyvalue(\"IRI\", entity.iri)\n if hasattr(entity, \"get_annotations\"):\n add_header(\"Annotations\")\n for key, value in entity.get_annotations().items():\n if isinstance(value, list):\n for val in value:\n add_keyvalue(key, val)\n else:\n add_keyvalue(key, value)\n if entity.is_a or entity.equivalent_to:\n add_header(\"Formal description\")\n for r in entity.equivalent_to:\n\n # FIXME: Skip restrictions with value None to work\n # around bug in Owlready2 that doesn't handle custom\n # datatypes in restrictions correctly...\n if hasattr(r, \"value\") and r.value is None:\n continue\n\n add_keyvalue(\n \"Equivalent To\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n for r in entity.is_a:\n add_keyvalue(\n \"Subclass Of\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n\n lines.extend([\" </table>\", \"\"])\n\n return \"\\n\".join(lines)\n
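A minimal usage sketch (assuming EMMOntoPy is installed; the file names are hypothetical):

from ontopy import get_ontology
from ontopy.ontodoc_rst import ModuleDocumentation

onto = get_ontology("myonto.ttl").load()        # hypothetical ontology file
module_doc = ModuleDocumentation(onto, title="My ontology")
if module_doc.nonempty():
    rst = module_doc.get_refdoc(subsections="classes,object_properties")
    with open("myonto.rst", "wt", encoding="utf8") as handle:
        handle.write(rst)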
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.add_entity","title":"add_entity(self, entity)
","text":"Add entity
(class, property, individual, datatype) to list of entities to document.
Source code in ontopy/ontodoc_rst.py
def add_entity(self, entity: \"Entity\") -> None:\n \"\"\"Add `entity` (class, property, individual, datatype) to list of\n entities to document.\n \"\"\"\n if self.iri_regex and not re.match(self.iri_regex, entity.iri):\n return\n\n if isinstance(entity, owlready2.ThingClass):\n self.classes.add(entity)\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.DataPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.Thing):\n if (\n hasattr(entity.__class__, \"iri\")\n and entity.__class__.iri\n == \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n ):\n self.datatypes.add(entity)\n else:\n self.individuals.add(entity)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.add_ontology","title":"add_ontology(self, ontology, imported=False)
","text":"Add ontology to documentation.
Source code in ontopy/ontodoc_rst.py
def add_ontology(\n self, ontology: \"Ontology\", imported: bool = False\n) -> None:\n \"\"\"Add ontology to documentation.\"\"\"\n for entity in ontology.get_entities(imported=imported):\n self.add_entity(entity)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.get_header","title":"get_header(self)
","text":"Return a the reStructuredText header as a string.
Source code in ontopy/ontodoc_rst.py
def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n heading = f\"Module: {self.get_title()}\"\n return f\"\"\"\n\n{heading.title()}\n{'='*len(heading)}\n\n\"\"\"\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.get_refdoc","title":"get_refdoc(self, subsections='all', header=True)
","text":"Return reference documentation of all module entities.
Parameters:
  subsections (str, default 'all'): Comma-separated list of subsections to include in the returned documentation. Valid subsection names are:
    - classes
    - object_properties
    - data_properties
    - annotation_properties
    - individuals
    - datatypes
    If "all", all subsections will be documented.
  header (bool, default True): Whether to also include the header in the returned documentation.
Returns:
  str: String with reference documentation.
Source code in ontopy/ontodoc_rst.py
def get_refdoc(\n self,\n subsections: str = \"all\",\n header: bool = True,\n) -> str:\n # pylint: disable=too-many-branches,too-many-locals\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n subsections: Comma-separated list of subsections to include in\n the returned documentation. Valid subsection names are:\n - classes\n - object_properties\n - data_properties\n - annotation_properties\n - individuals\n - datatypes\n If \"all\", all subsections will be documented.\n header: Whether to also include the header in the returned\n documentation.\n\n Returns:\n String with reference documentation.\n \"\"\"\n # pylint: disable=too-many-nested-blocks\n if subsections == \"all\":\n subsections = (\n \"classes,object_properties,data_properties,\"\n \"annotation_properties,individuals,datatypes\"\n )\n\n maps = {\n \"classes\": self.classes,\n \"object_properties\": self.object_properties,\n \"data_properties\": self.data_properties,\n \"annotation_properties\": self.annotation_properties,\n \"individuals\": self.individuals,\n \"datatypes\": self.datatypes,\n }\n lines = []\n\n if header:\n lines.append(self.get_header())\n\n def add_header(name):\n clsname = f\"element-table-{name.lower().replace(' ', '-')}\"\n lines.extend(\n [\n \" <tr>\",\n f' <th class=\"{clsname}\" colspan=\"2\">{name}</th>',\n \" </tr>\",\n ]\n )\n\n def add_keyvalue(key, value, escape=True, htmllink=True):\n \"\"\"Help function for adding a key-value row to table.\"\"\"\n if escape:\n value = html.escape(str(value))\n if htmllink:\n value = re.sub(\n r\"(https?://[^\\s]+)\", r'<a href=\"\\1\">\\1</a>', value\n )\n value = value.replace(\"\\n\", \"<br>\")\n lines.extend(\n [\n \" <tr>\",\n ' <td class=\"element-table-key\">'\n f'<span class=\"element-table-key\">'\n f\"{key.title()}</span></td>\",\n f' <td class=\"element-table-value\">{value}</td>',\n \" </tr>\",\n ]\n )\n\n for subsection in subsections.split(\",\"):\n if maps[subsection]:\n moduletitle = self.get_title().lower().replace(\" \", \"-\")\n anchor = f\"{moduletitle}-{subsection.replace('_', '-')}\"\n lines.extend(\n [\n \"\",\n f\".. _{anchor}:\",\n \"\",\n subsection.replace(\"_\", \" \").title(),\n \"-\" * len(subsection),\n \"\",\n ]\n )\n for entity in sorted(maps[subsection], key=get_label):\n label = get_label(entity)\n lines.extend(\n [\n \".. raw:: html\",\n \"\",\n f' <div id=\"{entity.name}\"></div>',\n \"\",\n f\"{label}\",\n \"^\" * len(label),\n \"\",\n \".. 
raw:: html\",\n \"\",\n ' <table class=\"element-table\">',\n ]\n )\n add_keyvalue(\"IRI\", entity.iri)\n if hasattr(entity, \"get_annotations\"):\n add_header(\"Annotations\")\n for key, value in entity.get_annotations().items():\n if isinstance(value, list):\n for val in value:\n add_keyvalue(key, val)\n else:\n add_keyvalue(key, value)\n if entity.is_a or entity.equivalent_to:\n add_header(\"Formal description\")\n for r in entity.equivalent_to:\n\n # FIXME: Skip restrictions with value None to work\n # around bug in Owlready2 that doesn't handle custom\n # datatypes in restrictions correctly...\n if hasattr(r, \"value\") and r.value is None:\n continue\n\n add_keyvalue(\n \"Equivalent To\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n for r in entity.is_a:\n add_keyvalue(\n \"Subclass Of\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n\n lines.extend([\" </table>\", \"\"])\n\n return \"\\n\".join(lines)\n
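For illustration only, a hedged sketch of selecting specific subsections, reusing the module_doc instance from the sketch above:

# Document only classes and object properties, and skip the module header.
rst = module_doc.get_refdoc(
    subsections="classes,object_properties",
    header=False,
)
print("\n".join(rst.splitlines()[:10]))  # preview the first generated lines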
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.get_title","title":"get_title(self)
","text":"Return a module title.
Source code in ontopy/ontodoc_rst.py
def get_title(self) -> str:\n \"\"\"Return a module title.\"\"\"\n iri = self.ontology.base_iri.rstrip(\"#/\")\n if self.title:\n title = self.title\n elif self.ontology:\n title = self.graph.value(URIRef(iri), DCTERMS.title)\n if not title:\n title = iri.rsplit(\"/\", 1)[-1]\n return title\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.nonempty","title":"nonempty(self)
","text":"Returns whether the module has any classes, properties, individuals or datatypes.
Source code in ontopy/ontodoc_rst.py
def nonempty(self) -> bool:\n \"\"\"Returns whether the module has any classes, properties, individuals\n or datatypes.\"\"\"\n return (\n self.classes\n or self.object_properties\n or self.data_properties\n or self.annotation_properties\n or self.individuals\n or self.datatypes\n )\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation","title":" OntologyDocumentation
","text":"Documentation for an ontology with a common namespace.
Parameters:
  ontologies (Iterable[Ontology], required): Ontologies to include in the generated documentation. All entities in these ontologies will be included.
  imported (bool, default True): Whether to include imported ontologies.
  recursive (bool, default False): Whether to recursively import all imported ontologies. Implies imported=True.
  iri_regex (Optional[str], default None): A regular expression that the IRI of documented entities should match.
Source code in ontopy/ontodoc_rst.py
class OntologyDocumentation:\n \"\"\"Documentation for an ontology with a common namespace.\n\n Arguments:\n ontologies: Ontologies to include in the generated documentation.\n All entities in these ontologies will be included.\n imported: Whether to include imported ontologies.\n recursive: Whether to recursively import all imported ontologies.\n Implies `recursive=True`.\n iri_regex: A regular expression that the IRI of documented entities\n should match.\n \"\"\"\n\n def __init__(\n self,\n ontologies: \"Iterable[Ontology]\",\n imported: bool = True,\n recursive: bool = False,\n iri_regex: \"Optional[str]\" = None,\n ) -> None:\n if isinstance(ontologies, (Ontology, str, Path)):\n ontologies = [ontologies]\n\n if recursive:\n imported = True\n\n self.iri_regex = iri_regex\n self.module_documentations = []\n\n # Explicitly included ontologies\n included_ontologies = {}\n for onto in ontologies:\n if isinstance(onto, (str, Path)):\n onto = get_ontology(onto).load()\n elif not isinstance(onto, Ontology):\n raise TypeError(\n \"expected ontology as an IRI, Path or Ontology object, \"\n f\"got: {onto}\"\n )\n if onto.base_iri not in included_ontologies:\n included_ontologies[onto.base_iri] = onto\n\n # Indirectly included ontologies (imported)\n if imported:\n for onto in list(included_ontologies.values()):\n for o in onto.get_imported_ontologies(recursive=recursive):\n if o.base_iri not in included_ontologies:\n included_ontologies[o.base_iri] = o\n\n # Module documentations\n for onto in included_ontologies.values():\n self.module_documentations.append(\n ModuleDocumentation(onto, iri_regex=iri_regex)\n )\n\n def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n return \"\"\"\n==========\nReferences\n==========\n\"\"\"\n\n def get_refdoc(self, header: bool = True, subsections: str = \"all\") -> str:\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n header: Whether to also include the header in the returned\n documentation.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n\n Returns:\n String with reference documentation.\n \"\"\"\n moduledocs = []\n if header:\n moduledocs.append(self.get_header())\n moduledocs.extend(\n md.get_refdoc(subsections=subsections)\n for md in self.module_documentations\n if md.nonempty()\n )\n return \"\\n\".join(moduledocs)\n\n def top_ontology(self) -> Ontology:\n \"\"\"Return the top-level ontology.\"\"\"\n return self.module_documentations[0].ontology\n\n def write_refdoc(self, docfile=None, subsections=\"all\"):\n \"\"\"Write reference documentation to disk.\n\n Arguments:\n docfile: Name of file to write to. Defaults to the name of\n the top ontology with extension `.rst`.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n \"\"\"\n if not docfile:\n docfile = self.top_ontology().name + \".rst\"\n Path(docfile).write_text(\n self.get_refdoc(subsections=subsections), encoding=\"utf8\"\n )\n\n def write_index_template(\n self, indexfile=\"index.rst\", docfile=None, overwrite=False\n ):\n \"\"\"Write a basic template index.rst file to disk.\n\n Arguments:\n indexfile: Name of index file to write.\n docfile: Name of generated documentation file. 
If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n docname = Path(docfile).stem if docfile else self.top_ontology().name\n content = f\"\"\"\n.. toctree::\n :includehidden:\n :hidden:\n\n Reference Index <{docname}>\n\n\"\"\"\n outpath = Path(indexfile)\n if not overwrite and outpath.exists():\n warnings.warn(f\"index.rst file already exists: {outpath}\")\n return\n\n outpath.write_text(content, encoding=\"utf8\")\n\n def write_conf_template(\n self, conffile=\"conf.py\", docfile=None, overwrite=False\n ):\n \"\"\"Write basic template sphinx conf.py file to disk.\n\n Arguments:\n conffile: Name of configuration file to write.\n docfile: Name of generated documentation file. If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n # pylint: disable=redefined-builtin\n md = self.module_documentations[0]\n\n iri = md.ontology.base_iri.rstrip(\"#/\")\n authors = sorted(md.graph.objects(URIRef(iri), DCTERMS.creator))\n license = md.graph.value(URIRef(iri), DCTERMS.license, default=None)\n release = md.graph.value(URIRef(iri), OWL.versionInfo, default=\"1.0\")\n\n author = \", \".join(a.value for a in authors) if authors else \"<AUTHOR>\"\n copyright = license if license else f\"{time.strftime('%Y')}, {author}\"\n\n content = f\"\"\"\n# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n\nproject = '{md.ontology.name}'\ncopyright = '{copyright}'\nauthor = '{author}'\nrelease = '{release}'\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions = []\n\ntemplates_path = ['_templates']\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = 'alabaster'\nhtml_static_path = ['_static']\n\"\"\"\n if not conffile:\n conffile = Path(docfile).with_name(\"conf.py\")\n if overwrite and conffile.exists():\n warnings.warn(f\"conf.py file already exists: {conffile}\")\n return\n\n conffile.write_text(content, encoding=\"utf8\")\n
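A minimal sketch of how OntologyDocumentation might be instantiated; the file name and the namespace regex are hypothetical placeholders for this example.

from ontopy.ontodoc_rst import OntologyDocumentation

# A single IRI, file path or Ontology object is accepted and wrapped in a list.
ontodoc = OntologyDocumentation(
    "domain-ontology.ttl",           # hypothetical local ontology file
    imported=True,                   # also document imported ontologies
    recursive=True,                  # follow imports recursively (implies imported=True)
    iri_regex="^http://emmo.info/",  # only document entities under this namespace
)

# One ModuleDocumentation is created per included (and imported) ontology.
print(len(ontodoc.module_documentations))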
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.get_header","title":"get_header(self)
","text":"Return a the reStructuredText header as a string.
Source code in ontopy/ontodoc_rst.py
def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n return \"\"\"\n==========\nReferences\n==========\n\"\"\"\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.get_refdoc","title":"get_refdoc(self, header=True, subsections='all')
","text":"Return reference documentation of all module entities.
Parameters:
  header (bool, default True): Whether to also include the header in the returned documentation.
  subsections (str, default 'all'): Comma-separated list of subsections to include in the returned documentation. See ModuleDocumentation.get_refdoc() for more info.
Returns:
  str: String with reference documentation.
Source code in ontopy/ontodoc_rst.py
def get_refdoc(self, header: bool = True, subsections: str = \"all\") -> str:\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n header: Whether to also include the header in the returned\n documentation.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n\n Returns:\n String with reference documentation.\n \"\"\"\n moduledocs = []\n if header:\n moduledocs.append(self.get_header())\n moduledocs.extend(\n md.get_refdoc(subsections=subsections)\n for md in self.module_documentations\n if md.nonempty()\n )\n return \"\\n\".join(moduledocs)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.top_ontology","title":"top_ontology(self)
","text":"Return the top-level ontology.
Source code in ontopy/ontodoc_rst.py
def top_ontology(self) -> Ontology:\n \"\"\"Return the top-level ontology.\"\"\"\n return self.module_documentations[0].ontology\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.write_conf_template","title":"write_conf_template(self, conffile='conf.py', docfile=None, overwrite=False)
","text":"Write basic template sphinx conf.py file to disk.
Parameters:
  conffile (default 'conf.py'): Name of configuration file to write.
  docfile (default None): Name of generated documentation file. If not given, the name of the top ontology will be used.
  overwrite (default False): Whether to overwrite an existing file.
Source code in ontopy/ontodoc_rst.py
def write_conf_template(\n self, conffile=\"conf.py\", docfile=None, overwrite=False\n ):\n \"\"\"Write basic template sphinx conf.py file to disk.\n\n Arguments:\n conffile: Name of configuration file to write.\n docfile: Name of generated documentation file. If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n # pylint: disable=redefined-builtin\n md = self.module_documentations[0]\n\n iri = md.ontology.base_iri.rstrip(\"#/\")\n authors = sorted(md.graph.objects(URIRef(iri), DCTERMS.creator))\n license = md.graph.value(URIRef(iri), DCTERMS.license, default=None)\n release = md.graph.value(URIRef(iri), OWL.versionInfo, default=\"1.0\")\n\n author = \", \".join(a.value for a in authors) if authors else \"<AUTHOR>\"\n copyright = license if license else f\"{time.strftime('%Y')}, {author}\"\n\n content = f\"\"\"\n# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n\nproject = '{md.ontology.name}'\ncopyright = '{copyright}'\nauthor = '{author}'\nrelease = '{release}'\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions = []\n\ntemplates_path = ['_templates']\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = 'alabaster'\nhtml_static_path = ['_static']\n\"\"\"\n if not conffile:\n conffile = Path(docfile).with_name(\"conf.py\")\n if overwrite and conffile.exists():\n warnings.warn(f\"conf.py file already exists: {conffile}\")\n return\n\n conffile.write_text(content, encoding=\"utf8\")\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.write_index_template","title":"write_index_template(self, indexfile='index.rst', docfile=None, overwrite=False)
","text":"Write a basic template index.rst file to disk.
Parameters:
  indexfile (default 'index.rst'): Name of index file to write.
  docfile (default None): Name of generated documentation file. If not given, the name of the top ontology will be used.
  overwrite (default False): Whether to overwrite an existing file.
Source code in ontopy/ontodoc_rst.py
def write_index_template(\n self, indexfile=\"index.rst\", docfile=None, overwrite=False\n ):\n \"\"\"Write a basic template index.rst file to disk.\n\n Arguments:\n indexfile: Name of index file to write.\n docfile: Name of generated documentation file. If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n docname = Path(docfile).stem if docfile else self.top_ontology().name\n content = f\"\"\"\n.. toctree::\n :includehidden:\n :hidden:\n\n Reference Index <{docname}>\n\n\"\"\"\n outpath = Path(indexfile)\n if not overwrite and outpath.exists():\n warnings.warn(f\"index.rst file already exists: {outpath}\")\n return\n\n outpath.write_text(content, encoding=\"utf8\")\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.write_refdoc","title":"write_refdoc(self, docfile=None, subsections='all')
","text":"Write reference documentation to disk.
Parameters:
  docfile (default None): Name of file to write to. Defaults to the name of the top ontology with extension .rst.
  subsections (default 'all'): Comma-separated list of subsections to include in the returned documentation. See ModuleDocumentation.get_refdoc() for more info.
Source code in ontopy/ontodoc_rst.py
def write_refdoc(self, docfile=None, subsections=\"all\"):\n \"\"\"Write reference documentation to disk.\n\n Arguments:\n docfile: Name of file to write to. Defaults to the name of\n the top ontology with extension `.rst`.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n \"\"\"\n if not docfile:\n docfile = self.top_ontology().name + \".rst\"\n Path(docfile).write_text(\n self.get_refdoc(subsections=subsections), encoding=\"utf8\"\n )\n
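Taken together, write_refdoc(), write_index_template() and write_conf_template() can scaffold a small Sphinx project. A hedged sketch, reusing the ontodoc instance from the sketch above; conf.py is passed as a pathlib.Path here, matching how the method writes the file:

from pathlib import Path

# Reference documentation for the top ontology, written to <name>.rst by default.
ontodoc.write_refdoc()

# Basic index.rst with a toctree entry pointing at the reference documentation.
ontodoc.write_index_template(indexfile="index.rst")

# Minimal Sphinx conf.py with project metadata taken from the ontology.
ontodoc.write_conf_template(conffile=Path("conf.py"))

# The resulting directory can then be built with, e.g., `sphinx-build . _build/html`.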
"},{"location":"api_reference/ontopy/ontology/","title":"ontology","text":"A module adding additional functionality to owlready2.
If desirable, some of these additions may be moved back into owlready2.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.BlankNode","title":" BlankNode
","text":"Represents a blank node.
A blank node is a node that is not a literal and has no IRI. Resources represented by blank nodes are also called anonymous resources. Only the subject or object in an RDF triple can be a blank node.
Source code in ontopy/ontology.py
class BlankNode:\n \"\"\"Represents a blank node.\n\n A blank node is a node that is not a literal and has no IRI.\n Resources represented by blank nodes are also called anonumous resources.\n Only the subject or object in an RDF triple can be a blank node.\n \"\"\"\n\n def __init__(self, onto: Union[World, Ontology], storid: int):\n \"\"\"Initiate a blank node.\n\n Args:\n onto: Ontology or World instance.\n storid: The storage id of the blank node.\n \"\"\"\n if storid >= 0:\n raise ValueError(\n f\"A BlankNode is supposed to have a negative storid: {storid}\"\n )\n self.onto = onto\n self.storid = storid\n\n def __repr__(self):\n return repr(f\"_:b{-self.storid}\")\n\n def __hash__(self):\n return hash((self.onto, self.storid))\n\n def __eq__(self, other):\n \"\"\"For now blank nodes always compare true against each other.\"\"\"\n return isinstance(other, BlankNode)\n
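A short sketch of constructing a BlankNode directly; the ontology and the storid value are illustrative only (storids of blank nodes are negative, which the constructor enforces, but whether -1 refers to an actual node in the quadstore is not checked here).

from ontopy import get_ontology
from ontopy.ontology import BlankNode

onto = get_ontology("http://emmo.info/emmo").load()  # example ontology, needs network access

bnode = BlankNode(onto, storid=-1)   # a negative storid is required
print(repr(bnode))                   # e.g. '_:b1'
print(bnode == BlankNode(onto, -2))  # True: blank nodes currently always compare equal to each other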
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.BlankNode.__init__","title":"__init__(self, onto, storid)
special
","text":"Initiate a blank node.
Parameters:
  onto (Union[ontopy.ontology.World, ontopy.ontology.Ontology], required): Ontology or World instance.
  storid (int, required): The storage id of the blank node.
Source code in ontopy/ontology.py
def __init__(self, onto: Union[World, Ontology], storid: int):\n \"\"\"Initiate a blank node.\n\n Args:\n onto: Ontology or World instance.\n storid: The storage id of the blank node.\n \"\"\"\n if storid >= 0:\n raise ValueError(\n f\"A BlankNode is supposed to have a negative storid: {storid}\"\n )\n self.onto = onto\n self.storid = storid\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology","title":" Ontology (Ontology)
","text":"A generic class extending owlready2.Ontology.
Additional attributes:
  iri: IRI of this ontology. Currently only used for serialisation with rdflib. Defaults to None, meaning base_iri will be used instead.
  label_annotations: List of label annotations, i.e. annotations that are recognised by the get_by_label() method. Defaults to [skos:prefLabel, rdf:label, skos:altLabel].
  prefix: Prefix for this ontology. Defaults to None.
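As an illustration of these attributes and of label-based access, a minimal sketch (the EMMO IRI is an example and loading it requires network access):

from ontopy import get_ontology

emmo = get_ontology("http://emmo.info/emmo").load()

# Entities can be looked up by their skos:prefLabel instead of the numeric IRI name.
atom = emmo.get_by_label("Atom")
print(atom.iri)

# Attribute access falls back to get_by_label() when the name is not a direct IRI name.
print(emmo.Atom)

# The label annotations used by get_by_label() and the ontology prefix are plain attributes.
print(emmo.label_annotations)
print(emmo.prefix)  # typically "emmo" for EMMO-based ontologies after load()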
Source code in ontopy/ontology.py
class Ontology(owlready2.Ontology): # pylint: disable=too-many-public-methods\n \"\"\"A generic class extending owlready2.Ontology.\n\n Additional attributes:\n iri: IRI of this ontology. Currently only used for serialisation\n with rdflib. Defaults to None, meaning `base_iri` will be used\n instead.\n label_annotations: List of label annotations, i.e. annotations\n that are recognised by the get_by_label() method. Defaults\n to `[skos:prefLabel, rdf:label, skos:altLabel]`.\n prefix: Prefix for this ontology. Defaults to None.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.iri = None\n self.label_annotations = DEFAULT_LABEL_ANNOTATIONS[:]\n self.prefix = None\n\n # Name of special unlabeled entities, like Thing, Nothing, etc...\n _special_labels = None\n\n # Some properties for customising dir() listing - useful in\n # interactive sessions...\n _dir_preflabel = isinteractive()\n _dir_label = isinteractive()\n _dir_name = False\n _dir_imported = isinteractive()\n dir_preflabel = property(\n fget=lambda self: self._dir_preflabel,\n fset=lambda self, v: setattr(self, \"_dir_preflabel\", bool(v)),\n doc=\"Whether to include entity prefLabel in dir() listing.\",\n )\n dir_label = property(\n fget=lambda self: self._dir_label,\n fset=lambda self, v: setattr(self, \"_dir_label\", bool(v)),\n doc=\"Whether to include entity label in dir() listing.\",\n )\n dir_name = property(\n fget=lambda self: self._dir_name,\n fset=lambda self, v: setattr(self, \"_dir_name\", bool(v)),\n doc=\"Whether to include entity name in dir() listing.\",\n )\n dir_imported = property(\n fget=lambda self: self._dir_imported,\n fset=lambda self, v: setattr(self, \"_dir_imported\", bool(v)),\n doc=\"Whether to include imported ontologies in dir() listing.\",\n )\n\n # Other settings\n _colon_in_label = False\n colon_in_label = property(\n fget=lambda self: self._colon_in_label,\n fset=lambda self, v: setattr(self, \"_colon_in_label\", bool(v)),\n doc=\"Whether to accept colon in name-part of IRI. \"\n \"If true, the name cannot be prefixed.\",\n )\n\n def __dir__(self):\n dirset = set(super().__dir__())\n lst = list(self.get_entities(imported=self._dir_imported))\n if self._dir_preflabel:\n dirset.update(\n str(dir.prefLabel.first())\n for dir in lst\n if hasattr(dir, \"prefLabel\")\n )\n if self._dir_label:\n dirset.update(\n str(dir.label.first()) for dir in lst if hasattr(dir, \"label\")\n )\n if self._dir_name:\n dirset.update(dir.name for dir in lst if hasattr(dir, \"name\"))\n dirset.difference_update({None}) # get rid of possible None\n return sorted(dirset)\n\n def __getitem__(self, name):\n item = super().__getitem__(name)\n if not item:\n item = self.get_by_label(name)\n return item\n\n def __getattr__(self, name):\n attr = super().__getattr__(name)\n if not attr:\n attr = self.get_by_label(name)\n return attr\n\n def __contains__(self, other):\n if self.world[other]:\n return True\n try:\n self.get_by_label(other)\n except NoSuchLabelError:\n return False\n return True\n\n def __objclass__(self):\n # Play nice with inspect...\n pass\n\n def __hash__(self):\n \"\"\"Returns a hash based on base_iri.\n This is done to keep Ontology hashable when defining __eq__.\n \"\"\"\n return hash(self.base_iri)\n\n def __eq__(self, other):\n \"\"\"Checks if this ontology is equal to `other`.\n\n This function compares the result of\n ``set(self.get_unabbreviated_triples(label='_:b'))``,\n i.e. 
blank nodes are not distinguished, but relations to blank\n nodes are included.\n \"\"\"\n return set(self.get_unabbreviated_triples(blank=\"_:b\")) == set(\n other.get_unabbreviated_triples(blank=\"_:b\")\n )\n\n def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n ):\n \"\"\"Returns all matching triples unabbreviated.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n # pylint: disable=invalid-name\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n\n def set_default_label_annotations(self):\n \"\"\"Sets the default label annotations.\"\"\"\n warnings.warn(\n \"Ontology.set_default_label_annotations() is deprecated. \"\n \"Default label annotations are set by Ontology.__init__(). \",\n DeprecationWarning,\n stacklevel=2,\n )\n self.label_annotations = DEFAULT_LABEL_ANNOTATIONS[:]\n\n def get_by_label(\n self,\n label: str,\n label_annotations: str = None,\n prefix: str = None,\n imported: bool = True,\n colon_in_label: bool = None,\n ):\n \"\"\"Returns entity with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'.\n get_by_label('prefix:label') ==\n get_by_label('label', prefix='prefix').\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n imported: Whether to also look for `label` in imported ontologies.\n colon_in_label: Whether to accept colon (:) in a label or name-part\n of IRI. Defaults to the `colon_in_label` property of `self`.\n Setting this true cannot be combined with `prefix`.\n\n If several entities have the same label, only the one which is\n found first is returned.Use get_by_label_all() to get all matches.\n\n Note, if different prefixes are provided in the label and via\n the `prefix` argument a warning will be issued and the\n `prefix` argument will take precedence.\n\n A NoSuchLabelError is raised if `label` cannot be found.\n \"\"\"\n # pylint: disable=too-many-arguments,too-many-branches,invalid-name\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, must be a string: '{label}'\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n if colon_in_label is None:\n colon_in_label = self._colon_in_label\n if colon_in_label:\n if prefix:\n raise ValueError(\n \"`prefix` cannot be combined with `colon_in_label`\"\n )\n else:\n splitlabel = label.split(\":\", 1)\n if len(splitlabel) == 2 and not splitlabel[1].startswith(\"//\"):\n label = splitlabel[1]\n if prefix and prefix != splitlabel[0]:\n warnings.warn(\n f\"Prefix given both as argument ({prefix}) \"\n f\"and in label ({splitlabel[0]}). \"\n \"Prefix given in argument takes precedence. 
\"\n )\n if not prefix:\n prefix = splitlabel[0]\n\n if prefix:\n entityset = self.get_by_label_all(\n label,\n label_annotations=label_annotations,\n prefix=prefix,\n )\n if len(entityset) == 1:\n return entityset.pop()\n if len(entityset) > 1:\n raise AmbiguousLabelError(\n f\"Several entities have the same label '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n raise NoSuchLabelError(\n f\"No label annotations matches for '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n\n # Label is a full IRI\n entity = self.world[label]\n if entity:\n return entity\n\n get_triples = (\n self.world._get_data_triples_spod_spod\n if imported\n else self._get_data_triples_spod_spod\n )\n\n for storid in self._to_storids(label_annotations):\n for s, _, _, _ in get_triples(None, storid, label, None):\n return self.world[self._unabbreviate(s)]\n\n # Special labels\n if self._special_labels and label in self._special_labels:\n return self._special_labels[label]\n\n # Check if label is a name under base_iri\n entity = self.world[self.base_iri + label]\n if entity:\n return entity\n\n # Check label is the name of an entity\n for entity in self.get_entities(imported=imported):\n if label == entity.name:\n return entity\n\n raise NoSuchLabelError(f\"No label annotations matches '{label}'\")\n\n def get_by_label_all(\n self,\n label,\n label_annotations=None,\n prefix=None,\n exact_match=False,\n ) -> \"Set[Optional[owlready2.entity.EntityClass]]\":\n \"\"\"Returns set of entities with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'. Wildcard matching\n using glob pattern is also supported if `exact_match` is set to\n false.\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n exact_match: Do not treat \"*\" and brackets as special characters\n when matching. 
May be useful if your ontology has labels\n containing such labels.\n\n Returns:\n Set of all matching entities or an empty set if no matches\n could be found.\n \"\"\"\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, \" f\"must be a string: {label!r}\"\n )\n if \" \" in label:\n raise ValueError(\n f\"Invalid label definition, {label!r} contains spaces.\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n entities = set()\n\n # Check label annotations\n if exact_match:\n for storid in self._to_storids(label_annotations):\n entities.update(\n self.world._get_by_storid(s)\n for s, _, _ in self.world._get_data_triples_spod_spod(\n None, storid, str(label), None\n )\n )\n else:\n for storid in self._to_storids(label_annotations):\n label_entity = self._unabbreviate(storid)\n key = (\n label_entity.name\n if hasattr(label_entity, \"name\")\n else label_entity\n )\n entities.update(self.world.search(**{key: label}))\n\n if self._special_labels and label in self._special_labels:\n entities.update(self._special_labels[label])\n\n # Check name-part of IRI\n if exact_match:\n entities.update(\n ent for ent in self.get_entities() if ent.name == str(label)\n )\n else:\n matches = fnmatch.filter(\n (ent.name for ent in self.get_entities()), label\n )\n entities.update(\n ent for ent in self.get_entities() if ent.name in matches\n )\n\n if prefix:\n return set(\n ent\n for ent in entities\n if ent.namespace.ontology.prefix == prefix\n )\n return entities\n\n def _to_storids(self, sequence, create_if_missing=False):\n \"\"\"Return a list of storid's corresponding to the elements in the\n sequence `sequence`.\n\n The elements may be either be full IRIs (strings) or Owlready2\n entities with an associated storid.\n\n If `create_if_missing` is true, new Owlready2 entities will be\n created for IRIs that not already are associated with an\n entity. Otherwise such IRIs will be skipped in the returned\n list.\n \"\"\"\n if not sequence:\n return []\n storids = []\n for element in sequence:\n if hasattr(element, \"storid\"):\n storids.append(element.storid)\n else:\n storid = self.world._abbreviate(element, create_if_missing)\n if storid:\n storids.append(storid)\n return storids\n\n def add_label_annotation(self, iri):\n \"\"\"Adds label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.add_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n if iri not in self.label_annotations:\n self.label_annotations.append(iri)\n\n def remove_label_annotation(self, iri):\n \"\"\"Removes label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.remove_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n try:\n self.label_annotations.remove(iri)\n except ValueError:\n pass\n\n def set_common_prefix(\n self,\n iri_base: str = \"http://emmo.info/emmo\",\n prefix: str = \"emmo\",\n visited: \"Optional[Set]\" = None,\n ) -> None:\n \"\"\"Set a common prefix for all imported ontologies\n with the same first part of the base_iri.\n\n Args:\n iri_base: The start of the base_iri to look for. Defaults to\n the emmo base_iri http://emmo.info/emmo\n prefix: the desired prefix. Defaults to emmo.\n visited: Ontologies to skip. 
Only intended for internal use.\n \"\"\"\n if visited is None:\n visited = set()\n if self.base_iri.startswith(iri_base):\n self.prefix = prefix\n for onto in self.imported_ontologies:\n if not onto in visited:\n visited.add(onto)\n onto.set_common_prefix(\n iri_base=iri_base, prefix=prefix, visited=visited\n )\n\n def load( # pylint: disable=too-many-arguments,arguments-renamed\n self,\n only_local=False,\n filename=None,\n format=None, # pylint: disable=redefined-builtin\n reload=None,\n reload_if_newer=False,\n url_from_catalog=None,\n catalog_file=\"catalog-v001.xml\",\n emmo_based=True,\n prefix=None,\n prefix_emmo=None,\n **kwargs,\n ):\n \"\"\"Load the ontology.\n\n Arguments\n ---------\n only_local: bool\n Whether to only read local files. This requires that you\n have appended the path to the ontology to owlready2.onto_path.\n filename: str\n Path to file to load the ontology from. Defaults to `base_iri`\n provided to get_ontology().\n format: str\n Format of `filename`. Default is inferred from `filename`\n extension.\n reload: bool\n Whether to reload the ontology if it is already loaded.\n reload_if_newer: bool\n Whether to reload the ontology if the source has changed since\n last time it was loaded.\n url_from_catalog: bool | None\n Whether to use catalog file to resolve the location of `base_iri`.\n If None, the catalog file is used if it exists in the same\n directory as `filename`.\n catalog_file: str\n Name of Prot\u00e8g\u00e8 catalog file in the same folder as the\n ontology. This option is used together with `only_local` and\n defaults to \"catalog-v001.xml\".\n emmo_based: bool\n Whether this is an EMMO-based ontology or not, default `True`.\n prefix: defaults to self.get_namespace.name if\n prefix_emmo: bool, default None. If emmo_based is True it\n defaults to True and sets the prefix of all imported ontologies\n with base_iri starting with 'http://emmo.info/emmo' to emmo\n kwargs:\n Additional keyword arguments are passed on to\n owlready2.Ontology.load().\n \"\"\"\n # TODO: make sure that `only_local` argument is respected...\n\n if self.loaded:\n return self\n self._load(\n only_local=only_local,\n filename=filename,\n format=format,\n reload=reload,\n reload_if_newer=reload_if_newer,\n url_from_catalog=url_from_catalog,\n catalog_file=catalog_file,\n **kwargs,\n )\n\n # Enable optimised search by get_by_label()\n if self._special_labels is None and emmo_based:\n top = self.world[\"http://www.w3.org/2002/07/owl#topObjectProperty\"]\n self._special_labels = {\n \"Thing\": owlready2.Thing,\n \"Nothing\": owlready2.Nothing,\n \"topObjectProperty\": top,\n \"owl:Thing\": owlready2.Thing,\n \"owl:Nothing\": owlready2.Nothing,\n \"owl:topObjectProperty\": top,\n }\n # set prefix if another prefix is desired\n # if we do this, shouldn't we make the name of all\n # entities of the given ontology to the same?\n if prefix:\n self.prefix = prefix\n else:\n self.prefix = self.name\n\n if emmo_based and prefix_emmo is None:\n prefix_emmo = True\n if prefix_emmo:\n self.set_common_prefix()\n\n return self\n\n def _load( # pylint: disable=too-many-arguments,too-many-locals,too-many-branches,too-many-statements\n self,\n only_local=False,\n filename=None,\n format=None, # pylint: disable=redefined-builtin\n reload=None,\n reload_if_newer=False,\n url_from_catalog=None,\n catalog_file=\"catalog-v001.xml\",\n **kwargs,\n ):\n \"\"\"Help function for load().\"\"\"\n web_protocol = \"http://\", \"https://\", \"ftp://\"\n url = str(filename) if filename else 
self.base_iri.rstrip(\"/#\")\n if url.startswith(web_protocol):\n baseurl = os.path.dirname(url)\n catalogurl = baseurl + \"/\" + catalog_file\n else:\n if url.startswith(\"file://\"):\n url = url[7:]\n url = os.path.normpath(os.path.abspath(url))\n baseurl = os.path.dirname(url)\n catalogurl = os.path.join(baseurl, catalog_file)\n\n def getmtime(path):\n if os.path.exists(path):\n return os.path.getmtime(path)\n return 0.0\n\n # Resolve url from catalog file\n iris = {}\n dirs = set()\n if url_from_catalog or url_from_catalog is None:\n not_reload = not reload and (\n not reload_if_newer\n or getmtime(catalogurl)\n > self.world._cached_catalogs[catalogurl][0]\n )\n # get iris from catalog already in cached catalogs\n if catalogurl in self.world._cached_catalogs and not_reload:\n _, iris, dirs = self.world._cached_catalogs[catalogurl]\n # do not update cached_catalogs if url already in _iri_mappings\n # and reload not forced\n elif url in self.world._iri_mappings and not_reload:\n pass\n # update iris from current catalogurl\n else:\n try:\n iris, dirs = read_catalog(\n uri=catalogurl,\n recursive=False,\n return_paths=True,\n catalog_file=catalog_file,\n )\n except ReadCatalogError:\n if url_from_catalog is not None:\n raise\n self.world._cached_catalogs[catalogurl] = (0.0, {}, set())\n else:\n self.world._cached_catalogs[catalogurl] = (\n getmtime(catalogurl),\n iris,\n dirs,\n )\n self.world._iri_mappings.update(iris)\n resolved_url = self.world._iri_mappings.get(url, url)\n # Append paths from catalog file to onto_path\n for path in sorted(dirs, reverse=True):\n if path not in owlready2.onto_path:\n owlready2.onto_path.append(path)\n\n # Use catalog file to update IRIs of imported ontologies\n # in internal store and try to load again...\n if self.world._iri_mappings:\n for abbrev_iri in self.world._get_obj_triples_sp_o(\n self.storid, owlready2.owl_imports\n ):\n iri = self._unabbreviate(abbrev_iri)\n if iri in self.world._iri_mappings:\n self._del_obj_triple_spo(\n self.storid, owlready2.owl_imports, abbrev_iri\n )\n self._add_obj_triple_spo(\n self.storid,\n owlready2.owl_imports,\n self._abbreviate(self.world._iri_mappings[iri]),\n )\n\n # Load ontology\n try:\n self.loaded = False\n fmt = format if format else guess_format(resolved_url, fmap=FMAP)\n if fmt and fmt not in OWLREADY2_FORMATS:\n # Convert filename to rdfxml before passing it to owlready2\n graph = rdflib.Graph()\n try:\n graph.parse(resolved_url, format=fmt)\n except URLError as err:\n raise EMMOntoPyException(\n \"URL error\", err, resolved_url\n ) from err\n\n with tempfile.NamedTemporaryFile() as handle:\n graph.serialize(destination=handle, format=\"xml\")\n handle.seek(0)\n return super().load(\n only_local=True,\n fileobj=handle,\n reload=reload,\n reload_if_newer=reload_if_newer,\n format=\"rdfxml\",\n **kwargs,\n )\n elif resolved_url.startswith(web_protocol):\n return super().load(\n only_local=only_local,\n reload=reload,\n reload_if_newer=reload_if_newer,\n **kwargs,\n )\n\n else:\n with open(resolved_url, \"rb\") as handle:\n return super().load(\n only_local=only_local,\n fileobj=handle,\n reload=reload,\n reload_if_newer=reload_if_newer,\n **kwargs,\n )\n except owlready2.OwlReadyOntologyParsingError:\n # Owlready2 is not able to parse the ontology - most\n # likely because imported ontologies must be resolved\n # using the catalog file.\n\n # Reraise if we don't want to read from the catalog file\n if not url_from_catalog and url_from_catalog is not None:\n raise\n\n warnings.warn(\n \"Recovering from 
Owlready2 parsing error... might be deprecated\"\n )\n\n # Copy the ontology into a local folder and try again\n with tempfile.TemporaryDirectory() as handle:\n output = os.path.join(handle, os.path.basename(resolved_url))\n convert_imported(\n input_ontology=resolved_url,\n output_ontology=output,\n input_format=fmt,\n output_format=\"xml\",\n url_from_catalog=url_from_catalog,\n catalog_file=catalog_file,\n )\n\n self.loaded = False\n with open(output, \"rb\") as handle:\n try:\n return super().load(\n only_local=True,\n fileobj=handle,\n reload=reload,\n reload_if_newer=reload_if_newer,\n format=\"rdfxml\",\n **kwargs,\n )\n except HTTPError as exc: # Add url to HTTPError message\n raise HTTPError(\n url=exc.url,\n code=exc.code,\n msg=f\"{exc.url}: {exc.msg}\",\n hdrs=exc.hdrs,\n fp=exc.fp,\n ).with_traceback(exc.__traceback__)\n\n except HTTPError as exc: # Add url to HTTPError message\n raise HTTPError(\n url=exc.url,\n code=exc.code,\n msg=f\"{exc.url}: {exc.msg}\",\n hdrs=exc.hdrs,\n fp=exc.fp,\n ).with_traceback(exc.__traceback__)\n\n def save(\n self,\n filename=None,\n format=None,\n dir=\".\",\n mkdir=False,\n overwrite=False,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n append_catalog=False,\n catalog_file=\"catalog-v001.xml\",\n **kwargs,\n ) -> Path:\n \"\"\"Writes the ontology to file.\n\n Parameters\n ----------\n filename: None | str | Path\n Name of file to write to. If None, it defaults to the name\n of the ontology with `format` as file extension.\n format: str\n Output format. The default is to infer it from `filename`.\n dir: str | Path\n If `filename` is a relative path, it is a relative path to `dir`.\n mkdir: bool\n Whether to create output directory if it does not exists.\n owerwrite: bool\n If true and `filename` exists, remove the existing file before\n saving. The default is to append to an existing ontology.\n recursive: bool\n Whether to save imported ontologies recursively. This is\n commonly combined with `filename=None`, `dir` and `mkdir`.\n Note that depending on the structure of the ontology and\n all imports the ontology might end up in a subdirectory.\n If filename is given, the ontology is saved to the given\n directory.\n The path to the final location is returned.\n squash: bool\n If true, rdflib will be used to save the current ontology\n together with all its sub-ontologies into `filename`.\n It makes no sense to combine this with `recursive`.\n write_catalog_file: bool\n Whether to also write a catalog file to disk.\n append_catalog: bool\n Whether to append to an existing catalog file.\n catalog_file: str | Path\n Name of catalog file. If not an absolute path, it is prepended\n to `dir`.\n\n Returns\n --------\n The path to the saved ontology.\n \"\"\"\n # pylint: disable=redefined-builtin,too-many-arguments\n # pylint: disable=too-many-statements,too-many-branches\n # pylint: disable=too-many-locals,arguments-renamed,invalid-name\n\n if not _validate_installed_version(\n package=\"rdflib\", min_version=\"6.0.0\"\n ) and format == FMAP.get(\"ttl\", \"\"):\n from rdflib import ( # pylint: disable=import-outside-toplevel\n __version__ as __rdflib_version__,\n )\n\n warnings.warn(\n IncompatibleVersion(\n \"To correctly convert to Turtle format, rdflib must be \"\n \"version 6.0.0 or greater, however, the detected rdflib \"\n \"version used by your Python interpreter is \"\n f\"{__rdflib_version__!r}. 
For more information see the \"\n \"'Known issues' section of the README.\"\n )\n )\n revmap = {value: key for key, value in FMAP.items()}\n if filename is None:\n if format:\n fmt = revmap.get(format, format)\n file = f\"{self.name}.{fmt}\"\n else:\n raise TypeError(\"`filename` and `format` cannot both be None.\")\n else:\n file = filename\n filepath = os.path.join(\n dir, file if isinstance(file, (str, Path)) else file.name\n )\n returnpath = filepath\n\n dir = Path(filepath).resolve().parent\n\n if mkdir:\n outdir = Path(filepath).parent.resolve()\n if not outdir.exists():\n outdir.mkdir(parents=True)\n\n if not format:\n format = guess_format(file, fmap=FMAP)\n fmt = revmap.get(format, format)\n\n if overwrite and os.path.exists(filepath):\n os.remove(filepath)\n\n if recursive:\n if squash:\n raise ValueError(\n \"`recursive` and `squash` should not both be true\"\n )\n layout = directory_layout(self)\n if filename:\n layout[self] = file.rstrip(f\".{fmt}\")\n # Update path to where the ontology is saved\n # Note that filename should include format\n # when given\n returnpath = Path(dir) / f\"{layout[self]}.{fmt}\"\n for onto, path in layout.items():\n fname = Path(dir) / f\"{path}.{fmt}\"\n onto.save(\n filename=fname,\n format=format,\n dir=dir,\n mkdir=mkdir,\n overwrite=overwrite,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n **kwargs,\n )\n\n if write_catalog_file:\n catalog_files = set()\n irimap = {}\n for onto, path in layout.items():\n irimap[onto.get_version(as_iri=True)] = (\n f\"{dir}/{path}.{fmt}\"\n )\n catalog_files.add(Path(path).parent / catalog_file)\n\n for catfile in catalog_files:\n write_catalog(\n irimap.copy(),\n output=catfile,\n directory=dir,\n append=append_catalog,\n )\n elif squash:\n URIRef, RDF, OWL = rdflib.URIRef, rdflib.RDF, rdflib.OWL\n\n # Make a copy of the owlready2 graph object to not mess with\n # owlready2 internals\n graph = rdflib.Graph()\n for triple in self.world.as_rdflib_graph():\n graph.add(triple)\n\n # Add common namespaces unknown to rdflib\n extra_namespaces = [\n (\"\", self.base_iri),\n (\"swrl\", \"http://www.w3.org/2003/11/swrl#\"),\n (\"bibo\", \"http://purl.org/ontology/bibo/\"),\n ]\n for prefix, iri in extra_namespaces:\n graph.namespace_manager.bind(\n prefix, rdflib.Namespace(iri), override=False\n )\n\n # Remove all ontology-declarations in the graph that are\n # not the current ontology.\n for s, _, _ in graph.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n if str(s).rstrip(\"/#\") != self.base_iri.rstrip(\"/#\"):\n for (\n _,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (s, None, None)\n ):\n graph.remove((s, p, o))\n graph.remove((s, OWL.imports, None))\n\n # Insert correct IRI of the ontology\n if self.iri:\n base_iri = URIRef(self.base_iri)\n for s, p, o in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((URIRef(self.iri), p, o))\n\n graph.serialize(destination=filepath, format=format)\n elif format in OWLREADY2_FORMATS:\n super().save(file=filepath, format=fmt, **kwargs)\n else:\n # The try-finally clause is needed for cleanup and because\n # we have to provide delete=False to NamedTemporaryFile\n # since Windows does not allow to reopen an already open\n # file.\n try:\n with tempfile.NamedTemporaryFile(\n suffix=\".owl\", delete=False\n ) as handle:\n tmpfile = handle.name\n super().save(tmpfile, format=\"ntriples\", **kwargs)\n graph = rdflib.Graph()\n 
graph.parse(tmpfile, format=\"ntriples\")\n graph.namespace_manager.bind(\n \"\", rdflib.Namespace(self.base_iri)\n )\n if self.iri:\n base_iri = rdflib.URIRef(self.base_iri)\n for (\n s,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((rdflib.URIRef(self.iri), p, o))\n graph.serialize(destination=filepath, format=format)\n finally:\n os.remove(tmpfile)\n\n if write_catalog_file and not recursive:\n write_catalog(\n {self.get_version(as_iri=True): filepath},\n output=catalog_file,\n directory=dir,\n append=append_catalog,\n )\n return Path(returnpath)\n\n def copy(self):\n \"\"\"Return a copy of the ontology.\"\"\"\n with tempfile.TemporaryDirectory() as dirname:\n filename = self.save(\n dir=dirname,\n format=\"turtle\",\n recursive=True,\n write_catalog_file=True,\n mkdir=True,\n )\n ontology = get_ontology(filename).load()\n ontology.name = self.name\n return ontology\n\n def get_imported_ontologies(self, recursive=False):\n \"\"\"Return a list with imported ontologies.\n\n If `recursive` is `True`, ontologies imported by imported ontologies\n are also returned.\n \"\"\"\n\n def rec_imported(onto):\n for ontology in onto.imported_ontologies:\n # pylint: disable=possibly-used-before-assignment\n if ontology not in imported:\n imported.add(ontology)\n rec_imported(ontology)\n\n if recursive:\n imported = set()\n rec_imported(self)\n return list(imported)\n\n return self.imported_ontologies\n\n def get_entities( # pylint: disable=too-many-arguments\n self,\n imported=True,\n classes=True,\n individuals=True,\n object_properties=True,\n data_properties=True,\n annotation_properties=True,\n ):\n \"\"\"Return a generator over (optionally) all classes, individuals,\n object_properties, data_properties and annotation_properties.\n\n If `imported` is `True`, entities in imported ontologies will also\n be included.\n \"\"\"\n generator = []\n if classes:\n generator.append(self.classes(imported))\n if individuals:\n generator.append(self.individuals(imported))\n if object_properties:\n generator.append(self.object_properties(imported))\n if data_properties:\n generator.append(self.data_properties(imported))\n if annotation_properties:\n generator.append(self.annotation_properties(imported))\n yield from itertools.chain(*generator)\n\n def classes(self, imported=False):\n \"\"\"Returns an generator over all classes.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"classes\", imported=imported)\n\n def _entities(\n self, entity_type, imported=False\n ): # pylint: disable=too-many-branches\n \"\"\"Returns an generator over all entities of the desired type.\n This is a helper function for `classes()`, `individuals()`,\n `object_properties()`, `data_properties()` and\n `annotation_properties()`.\n\n Arguments:\n entity_type: The type of entity desired given as a string.\n Can be any of `classes`, `individuals`,\n `object_properties`, `data_properties` and\n `annotation_properties`.\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n\n generator = []\n if imported:\n ontologies = self.get_imported_ontologies(recursive=True)\n ontologies.append(self)\n for onto in ontologies:\n if entity_type == \"classes\":\n for cls in list(onto.classes()):\n generator.append(cls)\n elif entity_type == \"individuals\":\n for ind in list(onto.individuals()):\n generator.append(ind)\n elif entity_type == 
\"object_properties\":\n for prop in list(onto.object_properties()):\n generator.append(prop)\n elif entity_type == \"data_properties\":\n for prop in list(onto.data_properties()):\n generator.append(prop)\n elif entity_type == \"annotation_properties\":\n for prop in list(onto.annotation_properties()):\n generator.append(prop)\n else:\n if entity_type == \"classes\":\n generator = super().classes()\n elif entity_type == \"individuals\":\n generator = super().individuals()\n elif entity_type == \"object_properties\":\n generator = super().object_properties()\n elif entity_type == \"data_properties\":\n generator = super().data_properties()\n elif entity_type == \"annotation_properties\":\n generator = super().annotation_properties()\n\n yield from generator\n\n def individuals(self, imported=False):\n \"\"\"Returns an generator over all individuals.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"individuals\", imported=imported)\n\n def object_properties(self, imported=False):\n \"\"\"Returns an generator over all object_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"object_properties\", imported=imported)\n\n def data_properties(self, imported=False):\n \"\"\"Returns an generator over all data_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"data_properties\", imported=imported)\n\n def annotation_properties(self, imported=False):\n \"\"\"Returns an generator over all annotation_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n\n \"\"\"\n return self._entities(\"annotation_properties\", imported=imported)\n\n def get_root_classes(self, imported=False):\n \"\"\"Returns a list or root classes.\"\"\"\n return [\n cls\n for cls in self.classes(imported=imported)\n if not cls.ancestors().difference(set([cls, owlready2.Thing]))\n ]\n\n def get_root_object_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.object_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n\n def get_root_data_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.data_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n\n def get_roots(self, imported=False):\n \"\"\"Returns all class, object_property and data_property roots.\"\"\"\n roots = self.get_root_classes(imported=imported)\n roots.extend(self.get_root_object_properties(imported=imported))\n roots.extend(self.get_root_data_properties(imported=imported))\n return roots\n\n def sync_python_names(self, annotations=(\"prefLabel\", \"label\", \"altLabel\")):\n \"\"\"Update the `python_name` attribute of all properties.\n\n The python_name attribute will be set to the first non-empty\n annotation in the sequence of annotations in `annotations` for\n the property.\n \"\"\"\n\n def update(gen):\n for prop in gen:\n for annotation in annotations:\n if hasattr(prop, annotation) and getattr(prop, annotation):\n prop.python_name = getattr(prop, annotation).first()\n break\n\n update(\n self.get_entities(\n classes=False,\n individuals=False,\n object_properties=False,\n data_properties=False,\n )\n )\n update(\n self.get_entities(\n classes=False, individuals=False, 
annotation_properties=False\n )\n )\n\n def rename_entities(\n self,\n annotations=(\"prefLabel\", \"label\", \"altLabel\"),\n ):\n \"\"\"Set `name` of all entities to the first non-empty annotation in\n `annotations`.\n\n Warning, this method changes all IRIs in the ontology. However,\n it may be useful to make the ontology more readable and to work\n with it together with a triple store.\n \"\"\"\n for entity in self.get_entities():\n for annotation in annotations:\n if hasattr(entity, annotation):\n name = getattr(entity, annotation).first()\n if name:\n entity.name = name\n break\n\n def sync_reasoner(\n self, reasoner=\"HermiT\", include_imported=False, **kwargs\n ):\n \"\"\"Update current ontology by running the given reasoner.\n\n Supported values for `reasoner` are 'HermiT' (default), Pellet\n and 'FaCT++'.\n\n If `include_imported` is true, the reasoner will also reason\n over imported ontologies. Note that this may be **very** slow.\n\n Keyword arguments are passed to the underlying owlready2 function.\n \"\"\"\n # pylint: disable=too-many-branches\n\n removed_equivalent = defaultdict(list)\n removed_subclasses = defaultdict(list)\n\n if reasoner == \"FaCT++\":\n sync = sync_reasoner_factpp\n elif reasoner == \"Pellet\":\n sync = owlready2.sync_reasoner_pellet\n elif reasoner == \"HermiT\":\n sync = owlready2.sync_reasoner_hermit\n\n # Remove custom data propertyes, otherwise HermiT will crash\n datatype_iri = \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n\n for cls in self.classes(imported=include_imported):\n remove_eq = []\n for i, r in enumerate(cls.equivalent_to):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_eq.append(i)\n removed_equivalent[cls].append(r)\n for i in reversed(remove_eq):\n del cls.equivalent_to[i]\n\n remove_subcls = []\n for i, r in enumerate(cls.is_a):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_subcls.append(i)\n removed_subclasses[cls].append(r)\n for i in reversed(remove_subcls):\n del cls.is_a[i]\n\n else:\n raise ValueError(\n f\"Unknown reasoner '{reasoner}'. Supported reasoners \"\n \"are 'Pellet', 'HermiT' and 'FaCT++'.\"\n )\n\n # For some reason we must visit all entities once before running\n # the reasoner...\n list(self.get_entities())\n\n with self:\n if include_imported:\n sync(self.world, **kwargs)\n else:\n sync(self, **kwargs)\n\n # Restore removed custom data properties\n for cls, eqs in removed_equivalent.items():\n cls.extend(eqs)\n for cls, subcls in removed_subclasses.items():\n cls.extend(subcls)\n\n def sync_attributes( # pylint: disable=too-many-branches\n self,\n name_policy=None,\n name_prefix=\"\",\n class_docstring=\"comment\",\n sync_imported=False,\n ):\n \"\"\"This method is intended to be called after you have added new\n classes (typically via Python) to make sure that attributes like\n `label` and `comments` are defined.\n\n If a class, object property, data property or annotation\n property in the current ontology has no label, the name of\n the corresponding Python class will be assigned as label.\n\n If a class, object property, data property or annotation\n property has no comment, it will be assigned the docstring of\n the corresponding Python class.\n\n `name_policy` specify wether and how the names in the ontology\n should be updated. 
Valid values are:\n None not changed\n \"uuid\" `name_prefix` followed by a global unique id (UUID).\n If the name is already valid accoridng to this standard\n it will not be regenerated.\n \"sequential\" `name_prefix` followed a sequantial number.\n EMMO conventions imply ``name_policy=='uuid'``.\n\n If `sync_imported` is true, all imported ontologies are also\n updated.\n\n The `class_docstring` argument specifies the annotation that\n class docstrings are mapped to. Defaults to \"comment\".\n \"\"\"\n for cls in itertools.chain(\n self.classes(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n ):\n if not hasattr(cls, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=unused-variable\n class prefLabel(owlready2.label):\n pass\n\n cls.prefLabel = [locstr(cls.__name__, lang=\"en\")]\n elif not cls.prefLabel:\n cls.prefLabel.append(locstr(cls.__name__, lang=\"en\"))\n if class_docstring and hasattr(cls, \"__doc__\") and cls.__doc__:\n getattr(cls, class_docstring).append(\n locstr(inspect.cleandoc(cls.__doc__), lang=\"en\")\n )\n\n for ind in self.individuals():\n if not hasattr(ind, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=function-redefined\n class prefLabel(owlready2.label):\n iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n\n ind.prefLabel = [locstr(ind.name, lang=\"en\")]\n elif not ind.prefLabel:\n ind.prefLabel.append(locstr(ind.name, lang=\"en\"))\n\n chain = itertools.chain(\n self.classes(),\n self.individuals(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n )\n if name_policy == \"uuid\":\n for obj in chain:\n try:\n # Passing the following means that the name is valid\n # and need not be regenerated.\n if not obj.name.startswith(name_prefix):\n raise ValueError\n uuid.UUID(obj.name.lstrip(name_prefix), version=5)\n except ValueError:\n obj.name = name_prefix + str(\n uuid.uuid5(uuid.NAMESPACE_DNS, obj.name)\n )\n elif name_policy == \"sequential\":\n for obj in chain:\n counter = 0\n while f\"{self.base_iri}{name_prefix}{counter}\" in self:\n counter += 1\n obj.name = f\"{name_prefix}{counter}\"\n elif name_policy is not None:\n raise TypeError(f\"invalid name_policy: {name_policy!r}\")\n\n if sync_imported:\n for onto in self.imported_ontologies:\n onto.sync_attributes()\n\n def get_relations(self):\n \"\"\"Returns a generator for all relations.\"\"\"\n warnings.warn(\n \"Ontology.get_relations() is deprecated. Use \"\n \"onto.object_properties() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self.object_properties()\n\n def get_annotations(self, entity):\n \"\"\"Returns a dict with annotations for `entity`. Entity may be given\n either as a ThingClass object or as a label.\"\"\"\n warnings.warn(\n \"Ontology.get_annotations(entity) is deprecated. 
Use \"\n \"entity.get_annotations() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n res = {\"comment\": getattr(entity, \"comment\", \"\")}\n for annotation in self.annotation_properties():\n res[annotation.label.first()] = [\n obj.strip('\"')\n for _, _, obj in self.get_triples(\n entity.storid, annotation.storid, None\n )\n ]\n return res\n\n def get_branch( # pylint: disable=too-many-arguments\n self,\n root,\n leaves=(),\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n sort=False,\n ):\n \"\"\"Returns a set with all direct and indirect subclasses of `root`.\n Any subclass found in the sequence `leaves` will be included in\n the returned list, but its subclasses will not. The elements\n of `leaves` may be ThingClass objects or labels.\n\n Subclasses of any subclass found in the sequence `leaves` will\n be excluded from the returned list, where the elements of `leaves`\n may be ThingClass objects or labels.\n\n If `include_leaves` is true, the leaves are included in the returned\n list, otherwise they are not.\n\n If `strict_leaves` is true, any descendant of a leaf will be excluded\n in the returned set.\n\n If given, `exclude` may be a sequence of classes, including\n their subclasses, to exclude from the output.\n\n If `sort` is True, a list sorted according to depth and label\n will be returned instead of a set.\n \"\"\"\n\n def _branch(root, leaves):\n if root not in leaves:\n branch = {\n root,\n }\n for cls in root.subclasses():\n # Defining a branch is actually quite tricky. Consider\n # the case:\n #\n # L isA R\n # A isA L\n # A isA R\n #\n # where R is the root, L is a leaf and A is a direct\n # child of both. Logically, since A is a child of the\n # leaf we want to skip A. But a strait forward imple-\n # mentation will see that A is a child of the root and\n # include it. Requireing that the R should be a strict\n # parent of A solves this.\n if root in cls.get_parents(strict=True):\n branch.update(_branch(cls, leaves))\n else:\n branch = (\n {\n root,\n }\n if include_leaves\n else set()\n )\n return branch\n\n if isinstance(root, str):\n root = self.get_by_label(root)\n\n leaves = set(\n self.get_by_label(leaf) if isinstance(leaf, str) else leaf\n for leaf in leaves\n )\n leaves.discard(root)\n\n if exclude:\n exclude = set(\n self.get_by_label(e) if isinstance(e, str) else e\n for e in exclude\n )\n leaves.update(exclude)\n\n branch = _branch(root, leaves)\n\n # Exclude all descendants of any leaf\n if strict_leaves:\n descendants = root.descendants()\n for leaf in leaves:\n if leaf in descendants:\n branch.difference_update(\n leaf.descendants(include_self=False)\n )\n\n if exclude:\n branch.difference_update(exclude)\n\n # Sort according to depth, then by label\n if sort:\n branch = sorted(\n sorted(branch, key=asstring),\n key=lambda x: len(x.mro()),\n )\n\n return branch\n\n def is_individual(self, entity):\n \"\"\"Returns true if entity is an individual.\"\"\"\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return isinstance(entity, owlready2.Thing)\n\n # FIXME - deprecate this method as soon the ThingClass property\n # `defined_class` works correct in Owlready2\n def is_defined(self, entity):\n \"\"\"Returns true if the entity is a defined class.\n\n Deprecated, use the `is_defined` property of the classes\n (ThingClass subclasses) instead.\n \"\"\"\n warnings.warn(\n \"This method is deprecated. 
Use the `is_defined` property of \"\n \"the classes instad.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return hasattr(entity, \"equivalent_to\") and bool(entity.equivalent_to)\n\n def get_version(self, as_iri=False) -> str:\n \"\"\"Returns the version number of the ontology as inferred from the\n owl:versionIRI tag or, if owl:versionIRI is not found, from\n owl:versionINFO.\n\n If `as_iri` is True, the full versionIRI is returned.\n \"\"\"\n version_iri_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionIRI\"\n )\n tokens = self.get_triples(s=self.storid, p=version_iri_storid)\n if (not tokens) and (as_iri is True):\n raise TypeError(\n \"No owl:versionIRI \"\n f\"in Ontology {self.base_iri!r}. \"\n \"Search for owl:versionInfo with as_iri=False\"\n )\n if tokens:\n _, _, obj = tokens[0]\n version_iri = self.world._unabbreviate(obj)\n if as_iri:\n return version_iri\n return infer_version(self.base_iri, version_iri)\n\n version_info_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionInfo\"\n )\n tokens = self.get_triples(s=self.storid, p=version_info_storid)\n if not tokens:\n raise TypeError(\n \"No versionIRI or versionInfo \" f\"in Ontology {self.base_iri!r}\"\n )\n _, _, version_info = tokens[0]\n return version_info.split(\"^^\")[0].strip('\"')\n\n def set_version(self, version=None, version_iri=None):\n \"\"\"Assign version to ontology by asigning owl:versionIRI.\n\n If `version` but not `version_iri` is provided, the version\n IRI will be the combination of `base_iri` and `version`.\n \"\"\"\n _version_iri = \"http://www.w3.org/2002/07/owl#versionIRI\"\n version_iri_storid = self.world._abbreviate(_version_iri)\n if self._has_obj_triple_spo( # pylint: disable=unexpected-keyword-arg\n # For some reason _has_obj_triples_spo exists in both\n # owlready2.namespace.Namespace (with arguments subject/predicate)\n # and in owlready2.triplelite._GraphManager (with arguments s/p)\n # owlready2.Ontology inherits from Namespace directly\n # and pylint checks that.\n # It actually accesses the one in triplelite.\n # subject=self.storid, predicate=version_iri_storid\n s=self.storid,\n p=version_iri_storid,\n ):\n self._del_obj_triple_spo(s=self.storid, p=version_iri_storid)\n\n if not version_iri:\n if not version:\n raise TypeError(\n \"Either `version` or `version_iri` must be provided\"\n )\n head, tail = self.base_iri.rstrip(\"#/\").rsplit(\"/\", 1)\n version_iri = \"/\".join([head, version, tail])\n\n self._add_obj_triple_spo(\n s=self.storid,\n p=self.world._abbreviate(_version_iri),\n o=self.world._abbreviate(version_iri),\n )\n\n def get_graph(self, **kwargs):\n \"\"\"Returns a new graph object. 
See emmo.graph.OntoGraph.\n\n Note that this method requires the Python graphviz package.\n \"\"\"\n # pylint: disable=import-outside-toplevel,cyclic-import\n from ontopy.graph import OntoGraph\n\n return OntoGraph(self, **kwargs)\n\n @staticmethod\n def common_ancestors(cls1, cls2):\n \"\"\"Return a list of common ancestors for `cls1` and `cls2`.\"\"\"\n return set(cls1.ancestors()).intersection(cls2.ancestors())\n\n def number_of_generations(self, descendant, ancestor):\n \"\"\"Return shortest distance from ancestor to descendant\"\"\"\n if ancestor not in descendant.ancestors():\n raise ValueError(\"Descendant is not a descendant of ancestor\")\n return self._number_of_generations(descendant, ancestor, 0)\n\n def _number_of_generations(self, descendant, ancestor, counter):\n \"\"\"Recursive help function to number_of_generations(), return\n distance between a ancestor-descendant pair (counter+1).\"\"\"\n if descendant.name == ancestor.name:\n return counter\n try:\n return min(\n self._number_of_generations(parent, ancestor, counter + 1)\n for parent in descendant.get_parents()\n if ancestor in parent.ancestors()\n )\n except ValueError:\n return counter\n\n def closest_common_ancestors(self, cls1, cls2):\n \"\"\"Returns a list with closest_common_ancestor for cls1 and cls2\"\"\"\n distances = {}\n for ancestor in self.common_ancestors(cls1, cls2):\n distances[ancestor] = self.number_of_generations(\n cls1, ancestor\n ) + self.number_of_generations(cls2, ancestor)\n return [\n ancestor\n for ancestor, distance in distances.items()\n if distance == min(distances.values())\n ]\n\n @staticmethod\n def closest_common_ancestor(*classes):\n \"\"\"Returns closest_common_ancestor for the given classes.\"\"\"\n mros = [cls.mro() for cls in classes]\n track = defaultdict(int)\n while mros:\n for mro in mros:\n cur = mro.pop(0)\n track[cur] += 1\n if track[cur] == len(classes):\n return cur\n if len(mro) == 0:\n mros.remove(mro)\n raise EMMOntoPyException(\n \"A closest common ancestor should always exist !\"\n )\n\n def get_ancestors(\n self,\n classes: \"Union[List, ThingClass]\",\n closest: bool = False,\n generations: int = None,\n strict: bool = True,\n ) -> set:\n \"\"\"Return ancestors of all classes in `classes`.\n Args:\n classes: class(es) for which ancestors should be returned.\n generations: Include this number of generations, default is all.\n closest: If True, return all ancestors up to and including the\n closest common ancestor. Return all if False.\n strict: If True returns only real ancestors, i.e. 
`classes` are\n are not included in the returned set.\n Returns:\n Set of ancestors to `classes`.\n \"\"\"\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n ancestors = set()\n if not classes:\n return ancestors\n\n def addancestors(entity, counter, subject):\n if counter > 0:\n for parent in entity.get_parents(strict=True):\n subject.add(parent)\n addancestors(parent, counter - 1, subject)\n\n if closest:\n if generations is not None:\n raise ValueError(\n \"Only one of `generations` or `closest` may be specified.\"\n )\n\n closest_ancestor = self.closest_common_ancestor(*classes)\n for cls in classes:\n ancestors.update(\n anc\n for anc in cls.ancestors()\n if closest_ancestor in anc.ancestors()\n )\n elif isinstance(generations, int):\n for entity in classes:\n addancestors(entity, generations, ancestors)\n else:\n ancestors.update(*(cls.ancestors() for cls in classes))\n\n if strict:\n return ancestors.difference(classes)\n return ancestors\n\n def get_descendants(\n self,\n classes: \"Union[List, ThingClass]\",\n generations: int = None,\n common: bool = False,\n ) -> set:\n \"\"\"Return descendants/subclasses of all classes in `classes`.\n Args:\n classes: class(es) for which descendants are desired.\n common: whether to only return descendants common to all classes.\n generations: Include this number of generations, default is all.\n Returns:\n A set of descendants for given number of generations.\n If 'common'=True, the common descendants are returned\n within the specified number of generations.\n 'generations' defaults to all.\n \"\"\"\n\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n descendants = {name: [] for name in classes}\n\n def _children_recursively(num, newentity, parent, descendants):\n \"\"\"Helper function to get all children up to generation.\"\"\"\n for child in self.get_children_of(newentity):\n descendants[parent].append(child)\n if num < generations:\n _children_recursively(num + 1, child, parent, descendants)\n\n if generations == 0:\n return set()\n\n if not generations:\n for entity in classes:\n descendants[entity] = entity.descendants()\n # only include proper descendants\n descendants[entity].remove(entity)\n else:\n for entity in classes:\n _children_recursively(1, entity, entity, descendants)\n\n results = descendants.values()\n if common is True:\n return set.intersection(*map(set, results))\n return set(flatten(results))\n\n def get_wu_palmer_measure(self, cls1, cls2):\n \"\"\"Return Wu-Palmer measure for semantic similarity.\n\n Returns Wu-Palmer measure for semantic similarity between\n two concepts.\n Wu, Palmer; ACL 94: Proceedings of the 32nd annual meeting on\n Association for Computational Linguistics, June 1994.\n \"\"\"\n cca = self.closest_common_ancestor(cls1, cls2)\n ccadepth = self.number_of_generations(cca, self.Thing)\n generations1 = self.number_of_generations(cls1, cca)\n generations2 = self.number_of_generations(cls2, cca)\n return 2 * ccadepth / (generations1 + generations2 + 2 * ccadepth)\n\n def new_entity(\n self,\n name: str,\n parent: Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n Iterable,\n ],\n entitytype: Optional[\n Union[\n str,\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]\n ] = \"class\",\n preflabel: Optional[str] = None,\n ) -> Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]:\n \"\"\"Create and return new entity\n\n Args:\n name: name of the entity\n 
parent: parent(s) of the entity\n entitytype: type of the entity,\n default is 'class' (str) 'ThingClass' (owlready2 Python class).\n Other options\n are 'data_property', 'object_property',\n 'annotation_property' (strings) or the\n Python classes ObjectPropertyClass,\n DataPropertyClass and AnnotationProperty classes.\n preflabel: if given, add this as a skos:prefLabel annotation\n to the new entity. If None (default), `name` will\n be added as prefLabel if skos:prefLabel is in the ontology\n and listed in `self.label_annotations`. Set `preflabel` to\n False, to avoid assigning a prefLabel.\n\n Returns:\n the new entity.\n\n Throws exception if name consists of more than one word, if type is not\n one of the allowed types, or if parent is not of the correct type.\n By default, the parent is Thing.\n\n \"\"\"\n # pylint: disable=invalid-name\n if \" \" in name:\n raise LabelDefinitionError(\n f\"Error in label name definition '{name}': \"\n f\"Label consists of more than one word.\"\n )\n parents = tuple(parent) if isinstance(parent, Iterable) else (parent,)\n if entitytype == \"class\":\n parenttype = owlready2.ThingClass\n elif entitytype == \"data_property\":\n parenttype = owlready2.DataPropertyClass\n elif entitytype == \"object_property\":\n parenttype = owlready2.ObjectPropertyClass\n elif entitytype == \"annotation_property\":\n parenttype = owlready2.AnnotationPropertyClass\n elif entitytype in [\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]:\n parenttype = entitytype\n else:\n raise EntityClassDefinitionError(\n f\"Error in entity type definition: \"\n f\"'{entitytype}' is not a valid entity type.\"\n )\n for thing in parents:\n if not isinstance(thing, parenttype):\n raise EntityClassDefinitionError(\n f\"Error in parent definition: \"\n f\"'{thing}' is not an {parenttype}.\"\n )\n\n with self:\n entity = types.new_class(name, parents)\n\n preflabel_iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n if preflabel:\n if not self.world[preflabel_iri]:\n pref_label = self.new_annotation_property(\n \"prefLabel\",\n parent=[owlready2.AnnotationProperty],\n )\n pref_label.iri = preflabel_iri\n entity.prefLabel = english(preflabel)\n elif (\n preflabel is None\n and preflabel_iri in self.label_annotations\n and self.world[preflabel_iri]\n ):\n entity.prefLabel = english(name)\n\n return entity\n\n # Method that creates new ThingClass using new_entity\n def new_class(\n self, name: str, parent: Union[ThingClass, Iterable]\n ) -> ThingClass:\n \"\"\"Create and return new class.\n\n Args:\n name: name of the class\n parent: parent(s) of the class\n\n Returns:\n the new class.\n \"\"\"\n return self.new_entity(name, parent, \"class\")\n\n # Method that creates new ObjectPropertyClass using new_entity\n def new_object_property(\n self, name: str, parent: Union[ObjectPropertyClass, Iterable]\n ) -> ObjectPropertyClass:\n \"\"\"Create and return new object property.\n\n Args:\n name: name of the object property\n parent: parent(s) of the object property\n\n Returns:\n the new object property.\n \"\"\"\n return self.new_entity(name, parent, \"object_property\")\n\n # Method that creates new DataPropertyClass using new_entity\n def new_data_property(\n self, name: str, parent: Union[DataPropertyClass, Iterable]\n ) -> DataPropertyClass:\n \"\"\"Create and return new data property.\n\n Args:\n name: name of the data property\n parent: parent(s) of the data property\n\n Returns:\n the new data property.\n \"\"\"\n return self.new_entity(name, 
parent, \"data_property\")\n\n # Method that creates new AnnotationPropertyClass using new_entity\n def new_annotation_property(\n self, name: str, parent: Union[AnnotationPropertyClass, Iterable]\n ) -> AnnotationPropertyClass:\n \"\"\"Create and return new annotation property.\n\n Args:\n name: name of the annotation property\n parent: parent(s) of the annotation property\n\n Returns:\n the new annotation property.\n \"\"\"\n return self.new_entity(name, parent, \"annotation_property\")\n\n def difference(self, other: owlready2.Ontology) -> set:\n \"\"\"Return a set of triples that are in this, but not in the\n `other` ontology.\"\"\"\n # pylint: disable=invalid-name\n s1 = set(self.get_unabbreviated_triples(blank=\"_:b\"))\n s2 = set(other.get_unabbreviated_triples(blank=\"_:b\"))\n return s1.difference(s2)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.colon_in_label","title":"colon_in_label
property
writable
","text":"Whether to accept colon in name-part of IRI. If true, the name cannot be prefixed.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_imported","title":"dir_imported
property
writable
","text":"Whether to include imported ontologies in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_label","title":"dir_label
property
writable
","text":"Whether to include entity label in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_name","title":"dir_name
property
writable
","text":"Whether to include entity name in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_preflabel","title":"dir_preflabel
property
writable
","text":"Whether to include entity prefLabel in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.add_label_annotation","title":"add_label_annotation(self, iri)
","text":"Adds label annotation used by get_by_label().
Source code in ontopy/ontology.py
def add_label_annotation(self, iri):\n \"\"\"Adds label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.add_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n if iri not in self.label_annotations:\n self.label_annotations.append(iri)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.annotation_properties","title":"annotation_properties(self, imported=False)
","text":"Returns an generator over all annotation_properties.
Parameters:
imported (default: False): if True, entities in imported ontologies are also returned.
Source code in ontopy/ontology.py
def annotation_properties(self, imported=False):\n \"\"\"Returns an generator over all annotation_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n\n \"\"\"\n return self._entities(\"annotation_properties\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.classes","title":"classes(self, imported=False)
","text":"Returns an generator over all classes.
Parameters:
imported (default: False): if True, entities in imported ontologies are also returned.
Source code in ontopy/ontology.py
def classes(self, imported=False):\n \"\"\"Returns an generator over all classes.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"classes\", imported=imported)\n
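A minimal usage sketch (the ontology file name is hypothetical): list every class, including classes defined by imported ontologies.

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical local ontology file

# Iterate over all classes, including those from imported ontologies.
for cls in onto.classes(imported=True):
    print(cls.name)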
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.closest_common_ancestor","title":"closest_common_ancestor(*classes)
staticmethod
","text":"Returns closest_common_ancestor for the given classes.
Source code in ontopy/ontology.py
@staticmethod\ndef closest_common_ancestor(*classes):\n \"\"\"Returns closest_common_ancestor for the given classes.\"\"\"\n mros = [cls.mro() for cls in classes]\n track = defaultdict(int)\n while mros:\n for mro in mros:\n cur = mro.pop(0)\n track[cur] += 1\n if track[cur] == len(classes):\n return cur\n if len(mro) == 0:\n mros.remove(mro)\n raise EMMOntoPyException(\n \"A closest common ancestor should always exist !\"\n )\n
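For illustration, a hedged sketch that finds the closest common ancestor of two classes looked up by label (the file name and the labels "Atom" and "Molecule" are hypothetical):

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

a = onto.get_by_label("Atom")       # hypothetical labels
b = onto.get_by_label("Molecule")

# The single nearest class that both a and b descend from.
print(onto.closest_common_ancestor(a, b))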
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.closest_common_ancestors","title":"closest_common_ancestors(self, cls1, cls2)
","text":"Returns a list with closest_common_ancestor for cls1 and cls2
Source code in ontopy/ontology.py
def closest_common_ancestors(self, cls1, cls2):\n \"\"\"Returns a list with closest_common_ancestor for cls1 and cls2\"\"\"\n distances = {}\n for ancestor in self.common_ancestors(cls1, cls2):\n distances[ancestor] = self.number_of_generations(\n cls1, ancestor\n ) + self.number_of_generations(cls2, ancestor)\n return [\n ancestor\n for ancestor, distance in distances.items()\n if distance == min(distances.values())\n ]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.common_ancestors","title":"common_ancestors(cls1, cls2)
staticmethod
","text":"Return a list of common ancestors for cls1
and cls2
.
Source code in ontopy/ontology.py
@staticmethod\ndef common_ancestors(cls1, cls2):\n \"\"\"Return a list of common ancestors for `cls1` and `cls2`.\"\"\"\n return set(cls1.ancestors()).intersection(cls2.ancestors())\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.copy","title":"copy(self)
","text":"Return a copy of the ontology.
Source code in ontopy/ontology.py
def copy(self):\n \"\"\"Return a copy of the ontology.\"\"\"\n with tempfile.TemporaryDirectory() as dirname:\n filename = self.save(\n dir=dirname,\n format=\"turtle\",\n recursive=True,\n write_catalog_file=True,\n mkdir=True,\n )\n ontology = get_ontology(filename).load()\n ontology.name = self.name\n return ontology\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.data_properties","title":"data_properties(self, imported=False)
","text":"Returns an generator over all data_properties.
Parameters:
imported (default: False): if True, entities in imported ontologies are also returned.
Source code in ontopy/ontology.py
def data_properties(self, imported=False):\n \"\"\"Returns an generator over all data_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"data_properties\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.difference","title":"difference(self, other)
","text":"Return a set of triples that are in this, but not in the other
ontology.
Source code in ontopy/ontology.py
def difference(self, other: owlready2.Ontology) -> set:\n \"\"\"Return a set of triples that are in this, but not in the\n `other` ontology.\"\"\"\n # pylint: disable=invalid-name\n s1 = set(self.get_unabbreviated_triples(blank=\"_:b\"))\n s2 = set(other.get_unabbreviated_triples(blank=\"_:b\"))\n return s1.difference(s2)\n
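A sketch of comparing two versions of the same ontology (both file names are hypothetical); the result is the set of unabbreviated triples present only in the first ontology.

from ontopy import get_ontology

onto_new = get_ontology("myonto-v2.ttl").load()  # hypothetical files
onto_old = get_ontology("myonto-v1.ttl").load()

# Triples that are present in the new version but not in the old one.
added = onto_new.difference(onto_old)
print(len(added))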
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_ancestors","title":"get_ancestors(self, classes, closest=False, generations=None, strict=True)
","text":"Return ancestors of all classes in classes
.
Parameters:
classes (Union[List, ThingClass], required): class(es) for which ancestors should be returned.
generations (int, default: None): Include this number of generations, default is all.
closest (bool, default: False): If True, return all ancestors up to and including the closest common ancestor. Return all if False.
strict (bool, default: True): If True, returns only real ancestors, i.e. classes are not included in the returned set.
Returns:
set: Set of ancestors to classes.
Source code in ontopy/ontology.py
def get_ancestors(\n self,\n classes: \"Union[List, ThingClass]\",\n closest: bool = False,\n generations: int = None,\n strict: bool = True,\n) -> set:\n \"\"\"Return ancestors of all classes in `classes`.\n Args:\n classes: class(es) for which ancestors should be returned.\n generations: Include this number of generations, default is all.\n closest: If True, return all ancestors up to and including the\n closest common ancestor. Return all if False.\n strict: If True returns only real ancestors, i.e. `classes` are\n are not included in the returned set.\n Returns:\n Set of ancestors to `classes`.\n \"\"\"\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n ancestors = set()\n if not classes:\n return ancestors\n\n def addancestors(entity, counter, subject):\n if counter > 0:\n for parent in entity.get_parents(strict=True):\n subject.add(parent)\n addancestors(parent, counter - 1, subject)\n\n if closest:\n if generations is not None:\n raise ValueError(\n \"Only one of `generations` or `closest` may be specified.\"\n )\n\n closest_ancestor = self.closest_common_ancestor(*classes)\n for cls in classes:\n ancestors.update(\n anc\n for anc in cls.ancestors()\n if closest_ancestor in anc.ancestors()\n )\n elif isinstance(generations, int):\n for entity in classes:\n addancestors(entity, generations, ancestors)\n else:\n ancestors.update(*(cls.ancestors() for cls in classes))\n\n if strict:\n return ancestors.difference(classes)\n return ancestors\n
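A short usage sketch (file name and label are hypothetical), showing the default behaviour and the generations argument:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file
cls = onto.get_by_label("Molecule")       # hypothetical label

# All ancestors of cls, excluding cls itself (strict=True is the default).
print(onto.get_ancestors(cls))

# Only the two nearest generations of superclasses.
print(onto.get_ancestors(cls, generations=2))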
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_annotations","title":"get_annotations(self, entity)
","text":"Returns a dict with annotations for entity
. Entity may be given either as a ThingClass object or as a label.
Source code in ontopy/ontology.py
def get_annotations(self, entity):\n \"\"\"Returns a dict with annotations for `entity`. Entity may be given\n either as a ThingClass object or as a label.\"\"\"\n warnings.warn(\n \"Ontology.get_annotations(entity) is deprecated. Use \"\n \"entity.get_annotations() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n res = {\"comment\": getattr(entity, \"comment\", \"\")}\n for annotation in self.annotation_properties():\n res[annotation.label.first()] = [\n obj.strip('\"')\n for _, _, obj in self.get_triples(\n entity.storid, annotation.storid, None\n )\n ]\n return res\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_branch","title":"get_branch(self, root, leaves=(), include_leaves=True, strict_leaves=False, exclude=None, sort=False)
","text":"Returns a set with all direct and indirect subclasses of root
. Any subclass found in the sequence leaves
will be included in the returned list, but its subclasses will not. The elements of leaves
may be ThingClass objects or labels.
Subclasses of any subclass found in the sequence leaves
will be excluded from the returned list, where the elements of leaves
may be ThingClass objects or labels.
If include_leaves
is true, the leaves are included in the returned list, otherwise they are not.
If strict_leaves
is true, any descendant of a leaf will be excluded in the returned set.
If given, exclude
may be a sequence of classes, including their subclasses, to exclude from the output.
If sort
is True, a list sorted according to depth and label will be returned instead of a set.
Source code in ontopy/ontology.py
def get_branch( # pylint: disable=too-many-arguments\n self,\n root,\n leaves=(),\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n sort=False,\n):\n \"\"\"Returns a set with all direct and indirect subclasses of `root`.\n Any subclass found in the sequence `leaves` will be included in\n the returned list, but its subclasses will not. The elements\n of `leaves` may be ThingClass objects or labels.\n\n Subclasses of any subclass found in the sequence `leaves` will\n be excluded from the returned list, where the elements of `leaves`\n may be ThingClass objects or labels.\n\n If `include_leaves` is true, the leaves are included in the returned\n list, otherwise they are not.\n\n If `strict_leaves` is true, any descendant of a leaf will be excluded\n in the returned set.\n\n If given, `exclude` may be a sequence of classes, including\n their subclasses, to exclude from the output.\n\n If `sort` is True, a list sorted according to depth and label\n will be returned instead of a set.\n \"\"\"\n\n def _branch(root, leaves):\n if root not in leaves:\n branch = {\n root,\n }\n for cls in root.subclasses():\n # Defining a branch is actually quite tricky. Consider\n # the case:\n #\n # L isA R\n # A isA L\n # A isA R\n #\n # where R is the root, L is a leaf and A is a direct\n # child of both. Logically, since A is a child of the\n # leaf we want to skip A. But a strait forward imple-\n # mentation will see that A is a child of the root and\n # include it. Requireing that the R should be a strict\n # parent of A solves this.\n if root in cls.get_parents(strict=True):\n branch.update(_branch(cls, leaves))\n else:\n branch = (\n {\n root,\n }\n if include_leaves\n else set()\n )\n return branch\n\n if isinstance(root, str):\n root = self.get_by_label(root)\n\n leaves = set(\n self.get_by_label(leaf) if isinstance(leaf, str) else leaf\n for leaf in leaves\n )\n leaves.discard(root)\n\n if exclude:\n exclude = set(\n self.get_by_label(e) if isinstance(e, str) else e\n for e in exclude\n )\n leaves.update(exclude)\n\n branch = _branch(root, leaves)\n\n # Exclude all descendants of any leaf\n if strict_leaves:\n descendants = root.descendants()\n for leaf in leaves:\n if leaf in descendants:\n branch.difference_update(\n leaf.descendants(include_self=False)\n )\n\n if exclude:\n branch.difference_update(exclude)\n\n # Sort according to depth, then by label\n if sort:\n branch = sorted(\n sorted(branch, key=asstring),\n key=lambda x: len(x.mro()),\n )\n\n return branch\n
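A minimal sketch (ontology file and class labels are hypothetical): take the branch below a root class, stop at a leaf, and get a depth-sorted list back.

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

# Root and leaves may be given as labels; sort=True returns a sorted list.
branch = onto.get_branch("Material", leaves=["Molecule"], sort=True)
for cls in branch:
    print(cls.name)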
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_by_label","title":"get_by_label(self, label, label_annotations=None, prefix=None, imported=True, colon_in_label=None)
","text":"Returns entity with label annotation label
.
Parameters:
label (str, required): label to search for. May be written as 'label' or 'prefix:label'. get_by_label('prefix:label') == get_by_label('label', prefix='prefix').
label_annotations (str, default: None): a sequence of label annotation names to look up. Defaults to the label_annotations property.
prefix (str, default: None): if provided, it should be the last component of the base iri of an ontology (with trailing slash (/) or hash (#) stripped off). The search for a matching label will be limited to this namespace.
imported (bool, default: True): Whether to also look for label in imported ontologies.
colon_in_label (bool, default: None): Whether to accept colon (:) in a label or name-part of IRI. Defaults to the colon_in_label property of self. Setting this true cannot be combined with prefix.
If several entities have the same label, only the one which is found first is returned. Use get_by_label_all() to get all matches.
Note, if different prefixes are provided in the label and via the prefix argument, a warning will be issued and the prefix argument will take precedence.
A NoSuchLabelError is raised if label cannot be found.
Source code in ontopy/ontology.py
def get_by_label(\n self,\n label: str,\n label_annotations: str = None,\n prefix: str = None,\n imported: bool = True,\n colon_in_label: bool = None,\n):\n \"\"\"Returns entity with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'.\n get_by_label('prefix:label') ==\n get_by_label('label', prefix='prefix').\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n imported: Whether to also look for `label` in imported ontologies.\n colon_in_label: Whether to accept colon (:) in a label or name-part\n of IRI. Defaults to the `colon_in_label` property of `self`.\n Setting this true cannot be combined with `prefix`.\n\n If several entities have the same label, only the one which is\n found first is returned.Use get_by_label_all() to get all matches.\n\n Note, if different prefixes are provided in the label and via\n the `prefix` argument a warning will be issued and the\n `prefix` argument will take precedence.\n\n A NoSuchLabelError is raised if `label` cannot be found.\n \"\"\"\n # pylint: disable=too-many-arguments,too-many-branches,invalid-name\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, must be a string: '{label}'\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n if colon_in_label is None:\n colon_in_label = self._colon_in_label\n if colon_in_label:\n if prefix:\n raise ValueError(\n \"`prefix` cannot be combined with `colon_in_label`\"\n )\n else:\n splitlabel = label.split(\":\", 1)\n if len(splitlabel) == 2 and not splitlabel[1].startswith(\"//\"):\n label = splitlabel[1]\n if prefix and prefix != splitlabel[0]:\n warnings.warn(\n f\"Prefix given both as argument ({prefix}) \"\n f\"and in label ({splitlabel[0]}). \"\n \"Prefix given in argument takes precedence. \"\n )\n if not prefix:\n prefix = splitlabel[0]\n\n if prefix:\n entityset = self.get_by_label_all(\n label,\n label_annotations=label_annotations,\n prefix=prefix,\n )\n if len(entityset) == 1:\n return entityset.pop()\n if len(entityset) > 1:\n raise AmbiguousLabelError(\n f\"Several entities have the same label '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n raise NoSuchLabelError(\n f\"No label annotations matches for '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n\n # Label is a full IRI\n entity = self.world[label]\n if entity:\n return entity\n\n get_triples = (\n self.world._get_data_triples_spod_spod\n if imported\n else self._get_data_triples_spod_spod\n )\n\n for storid in self._to_storids(label_annotations):\n for s, _, _, _ in get_triples(None, storid, label, None):\n return self.world[self._unabbreviate(s)]\n\n # Special labels\n if self._special_labels and label in self._special_labels:\n return self._special_labels[label]\n\n # Check if label is a name under base_iri\n entity = self.world[self.base_iri + label]\n if entity:\n return entity\n\n # Check label is the name of an entity\n for entity in self.get_entities(imported=imported):\n if label == entity.name:\n return entity\n\n raise NoSuchLabelError(f\"No label annotations matches '{label}'\")\n
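A minimal usage sketch (ontology file, label and prefix are hypothetical):

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

# Look up an entity by its label (e.g. skos:prefLabel).
atom = onto.get_by_label("Atom")

# Restrict the search to a namespace prefix; the two calls are equivalent.
atom = onto.get_by_label("Atom", prefix="emmo")
atom = onto.get_by_label("emmo:Atom")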
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_by_label_all","title":"get_by_label_all(self, label, label_annotations=None, prefix=None, exact_match=False)
","text":"Returns set of entities with label annotation label
.
Parameters:
label (required): label to search for. May be written as 'label' or 'prefix:label'. Wildcard matching using glob pattern is also supported if exact_match is set to false.
label_annotations (default: None): a sequence of label annotation names to look up. Defaults to the label_annotations property.
prefix (default: None): if provided, it should be the last component of the base iri of an ontology (with trailing slash (/) or hash (#) stripped off). The search for a matching label will be limited to this namespace.
exact_match (default: False): Do not treat \"*\" and brackets as special characters when matching. May be useful if your ontology has labels containing such characters.
Returns:
Set[Optional[owlready2.entity.EntityClass]]: Set of all matching entities or an empty set if no matches could be found.
Source code in ontopy/ontology.py
def get_by_label_all(\n self,\n label,\n label_annotations=None,\n prefix=None,\n exact_match=False,\n) -> \"Set[Optional[owlready2.entity.EntityClass]]\":\n \"\"\"Returns set of entities with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'. Wildcard matching\n using glob pattern is also supported if `exact_match` is set to\n false.\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n exact_match: Do not treat \"*\" and brackets as special characters\n when matching. May be useful if your ontology has labels\n containing such labels.\n\n Returns:\n Set of all matching entities or an empty set if no matches\n could be found.\n \"\"\"\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, \" f\"must be a string: {label!r}\"\n )\n if \" \" in label:\n raise ValueError(\n f\"Invalid label definition, {label!r} contains spaces.\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n entities = set()\n\n # Check label annotations\n if exact_match:\n for storid in self._to_storids(label_annotations):\n entities.update(\n self.world._get_by_storid(s)\n for s, _, _ in self.world._get_data_triples_spod_spod(\n None, storid, str(label), None\n )\n )\n else:\n for storid in self._to_storids(label_annotations):\n label_entity = self._unabbreviate(storid)\n key = (\n label_entity.name\n if hasattr(label_entity, \"name\")\n else label_entity\n )\n entities.update(self.world.search(**{key: label}))\n\n if self._special_labels and label in self._special_labels:\n entities.update(self._special_labels[label])\n\n # Check name-part of IRI\n if exact_match:\n entities.update(\n ent for ent in self.get_entities() if ent.name == str(label)\n )\n else:\n matches = fnmatch.filter(\n (ent.name for ent in self.get_entities()), label\n )\n entities.update(\n ent for ent in self.get_entities() if ent.name in matches\n )\n\n if prefix:\n return set(\n ent\n for ent in entities\n if ent.namespace.ontology.prefix == prefix\n )\n return entities\n
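A minimal sketch (ontology file and pattern are hypothetical) using glob-style matching:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

# Returns a (possibly empty) set of all entities whose label matches.
matches = onto.get_by_label_all("*Atom*")
print(matches)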
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_descendants","title":"get_descendants(self, classes, generations=None, common=False)
","text":"Return descendants/subclasses of all classes in classes
.
Parameters:
classes (Union[List, ThingClass], required): class(es) for which descendants are desired.
common (bool, default: False): whether to only return descendants common to all classes.
generations (int, default: None): Include this number of generations, default is all.
Returns:
set: A set of descendants for given number of generations. If 'common'=True, the common descendants are returned within the specified number of generations. 'generations' defaults to all.
Source code in ontopy/ontology.py
def get_descendants(\n self,\n classes: \"Union[List, ThingClass]\",\n generations: int = None,\n common: bool = False,\n) -> set:\n \"\"\"Return descendants/subclasses of all classes in `classes`.\n Args:\n classes: class(es) for which descendants are desired.\n common: whether to only return descendants common to all classes.\n generations: Include this number of generations, default is all.\n Returns:\n A set of descendants for given number of generations.\n If 'common'=True, the common descendants are returned\n within the specified number of generations.\n 'generations' defaults to all.\n \"\"\"\n\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n descendants = {name: [] for name in classes}\n\n def _children_recursively(num, newentity, parent, descendants):\n \"\"\"Helper function to get all children up to generation.\"\"\"\n for child in self.get_children_of(newentity):\n descendants[parent].append(child)\n if num < generations:\n _children_recursively(num + 1, child, parent, descendants)\n\n if generations == 0:\n return set()\n\n if not generations:\n for entity in classes:\n descendants[entity] = entity.descendants()\n # only include proper descendants\n descendants[entity].remove(entity)\n else:\n for entity in classes:\n _children_recursively(1, entity, entity, descendants)\n\n results = descendants.values()\n if common is True:\n return set.intersection(*map(set, results))\n return set(flatten(results))\n
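A short usage sketch (file name and label are hypothetical):

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file
cls = onto.get_by_label("Material")       # hypothetical label

# All subclasses, and only the first generation of subclasses.
print(onto.get_descendants(cls))
print(onto.get_descendants(cls, generations=1))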
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_entities","title":"get_entities(self, imported=True, classes=True, individuals=True, object_properties=True, data_properties=True, annotation_properties=True)
","text":"Return a generator over (optionally) all classes, individuals, object_properties, data_properties and annotation_properties.
If imported
is True
, entities in imported ontologies will also be included.
Source code in ontopy/ontology.py
def get_entities( # pylint: disable=too-many-arguments\n self,\n imported=True,\n classes=True,\n individuals=True,\n object_properties=True,\n data_properties=True,\n annotation_properties=True,\n):\n \"\"\"Return a generator over (optionally) all classes, individuals,\n object_properties, data_properties and annotation_properties.\n\n If `imported` is `True`, entities in imported ontologies will also\n be included.\n \"\"\"\n generator = []\n if classes:\n generator.append(self.classes(imported))\n if individuals:\n generator.append(self.individuals(imported))\n if object_properties:\n generator.append(self.object_properties(imported))\n if data_properties:\n generator.append(self.data_properties(imported))\n if annotation_properties:\n generator.append(self.annotation_properties(imported))\n yield from itertools.chain(*generator)\n
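A hedged sketch (the ontology file is hypothetical) that restricts the generator to object and data properties defined in the ontology itself:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

# Only object and data properties, ignoring imported ontologies.
props = list(
    onto.get_entities(
        imported=False,
        classes=False,
        individuals=False,
        annotation_properties=False,
    )
)
print(len(props))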
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_graph","title":"get_graph(self, **kwargs)
","text":"Returns a new graph object. See emmo.graph.OntoGraph.
Note that this method requires the Python graphviz package.
Source code in ontopy/ontology.py
def get_graph(self, **kwargs):\n \"\"\"Returns a new graph object. See emmo.graph.OntoGraph.\n\n Note that this method requires the Python graphviz package.\n \"\"\"\n # pylint: disable=import-outside-toplevel,cyclic-import\n from ontopy.graph import OntoGraph\n\n return OntoGraph(self, **kwargs)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_imported_ontologies","title":"get_imported_ontologies(self, recursive=False)
","text":"Return a list with imported ontologies.
If recursive
is True
, ontologies imported by imported ontologies are also returned.
Source code in ontopy/ontology.py
def get_imported_ontologies(self, recursive=False):\n \"\"\"Return a list with imported ontologies.\n\n If `recursive` is `True`, ontologies imported by imported ontologies\n are also returned.\n \"\"\"\n\n def rec_imported(onto):\n for ontology in onto.imported_ontologies:\n # pylint: disable=possibly-used-before-assignment\n if ontology not in imported:\n imported.add(ontology)\n rec_imported(ontology)\n\n if recursive:\n imported = set()\n rec_imported(self)\n return list(imported)\n\n return self.imported_ontologies\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_relations","title":"get_relations(self)
","text":"Returns a generator for all relations.
Source code in ontopy/ontology.py
def get_relations(self):\n \"\"\"Returns a generator for all relations.\"\"\"\n warnings.warn(\n \"Ontology.get_relations() is deprecated. Use \"\n \"onto.object_properties() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self.object_properties()\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_root_classes","title":"get_root_classes(self, imported=False)
","text":"Returns a list or root classes.
Source code inontopy/ontology.py
def get_root_classes(self, imported=False):\n \"\"\"Returns a list or root classes.\"\"\"\n return [\n cls\n for cls in self.classes(imported=imported)\n if not cls.ancestors().difference(set([cls, owlready2.Thing]))\n ]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_root_data_properties","title":"get_root_data_properties(self, imported=False)
","text":"Returns a list of root object properties.
Source code inontopy/ontology.py
def get_root_data_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.data_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_root_object_properties","title":"get_root_object_properties(self, imported=False)
","text":"Returns a list of root object properties.
Source code in ontopy/ontology.py
def get_root_object_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.object_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_roots","title":"get_roots(self, imported=False)
","text":"Returns all class, object_property and data_property roots.
Source code in ontopy/ontology.py
def get_roots(self, imported=False):\n \"\"\"Returns all class, object_property and data_property roots.\"\"\"\n roots = self.get_root_classes(imported=imported)\n roots.extend(self.get_root_object_properties(imported=imported))\n roots.extend(self.get_root_data_properties(imported=imported))\n return roots\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_unabbreviated_triples","title":"get_unabbreviated_triples(self, subject=None, predicate=None, obj=None, blank=None)
","text":"Returns all matching triples unabbreviated.
If blank
is given, it will be used to represent blank nodes.
Source code in ontopy/ontology.py
def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n):\n \"\"\"Returns all matching triples unabbreviated.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n # pylint: disable=invalid-name\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_version","title":"get_version(self, as_iri=False)
","text":"Returns the version number of the ontology as inferred from the owl:versionIRI tag or, if owl:versionIRI is not found, from owl:versionINFO.
If as_iri
is True, the full versionIRI is returned.
Source code in ontopy/ontology.py
def get_version(self, as_iri=False) -> str:\n \"\"\"Returns the version number of the ontology as inferred from the\n owl:versionIRI tag or, if owl:versionIRI is not found, from\n owl:versionINFO.\n\n If `as_iri` is True, the full versionIRI is returned.\n \"\"\"\n version_iri_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionIRI\"\n )\n tokens = self.get_triples(s=self.storid, p=version_iri_storid)\n if (not tokens) and (as_iri is True):\n raise TypeError(\n \"No owl:versionIRI \"\n f\"in Ontology {self.base_iri!r}. \"\n \"Search for owl:versionInfo with as_iri=False\"\n )\n if tokens:\n _, _, obj = tokens[0]\n version_iri = self.world._unabbreviate(obj)\n if as_iri:\n return version_iri\n return infer_version(self.base_iri, version_iri)\n\n version_info_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionInfo\"\n )\n tokens = self.get_triples(s=self.storid, p=version_info_storid)\n if not tokens:\n raise TypeError(\n \"No versionIRI or versionInfo \" f\"in Ontology {self.base_iri!r}\"\n )\n _, _, version_info = tokens[0]\n return version_info.split(\"^^\")[0].strip('\"')\n
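A minimal sketch (the ontology file is hypothetical), assuming the loaded ontology declares an owl:versionIRI; the exact version string depends on how that IRI is formed:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

print(onto.get_version())             # version number inferred from owl:versionIRI
print(onto.get_version(as_iri=True))  # the full owl:versionIRI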
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_wu_palmer_measure","title":"get_wu_palmer_measure(self, cls1, cls2)
","text":"Return Wu-Palmer measure for semantic similarity.
Returns Wu-Palmer measure for semantic similarity between two concepts. Wu, Palmer; ACL 94: Proceedings of the 32nd annual meeting on Association for Computational Linguistics, June 1994.
Source code in ontopy/ontology.py
def get_wu_palmer_measure(self, cls1, cls2):\n \"\"\"Return Wu-Palmer measure for semantic similarity.\n\n Returns Wu-Palmer measure for semantic similarity between\n two concepts.\n Wu, Palmer; ACL 94: Proceedings of the 32nd annual meeting on\n Association for Computational Linguistics, June 1994.\n \"\"\"\n cca = self.closest_common_ancestor(cls1, cls2)\n ccadepth = self.number_of_generations(cca, self.Thing)\n generations1 = self.number_of_generations(cls1, cca)\n generations2 = self.number_of_generations(cls2, cca)\n return 2 * ccadepth / (generations1 + generations2 + 2 * ccadepth)\n
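A short sketch (file name and labels are hypothetical); the measure is a value between 0 and 1, and higher values mean the two classes sit closer together in the taxonomy:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # hypothetical ontology file

a = onto.get_by_label("Atom")       # hypothetical labels
b = onto.get_by_label("Molecule")
print(onto.get_wu_palmer_measure(a, b))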
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.individuals","title":"individuals(self, imported=False)
","text":"Returns an generator over all individuals.
Parameters:
imported (default: False): if True, entities in imported ontologies are also returned.
Source code in ontopy/ontology.py
def individuals(self, imported=False):\n \"\"\"Returns an generator over all individuals.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"individuals\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.is_defined","title":"is_defined(self, entity)
","text":"Returns true if the entity is a defined class.
Deprecated, use the is_defined
property of the classes (ThingClass subclasses) instead.
Source code in ontopy/ontology.py
def is_defined(self, entity):\n \"\"\"Returns true if the entity is a defined class.\n\n Deprecated, use the `is_defined` property of the classes\n (ThingClass subclasses) instead.\n \"\"\"\n warnings.warn(\n \"This method is deprecated. Use the `is_defined` property of \"\n \"the classes instad.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return hasattr(entity, \"equivalent_to\") and bool(entity.equivalent_to)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.is_individual","title":"is_individual(self, entity)
","text":"Returns true if entity is an individual.
Source code in ontopy/ontology.py
def is_individual(self, entity):\n \"\"\"Returns true if entity is an individual.\"\"\"\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return isinstance(entity, owlready2.Thing)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.load","title":"load(self, only_local=False, filename=None, format=None, reload=None, reload_if_newer=False, url_from_catalog=None, catalog_file='catalog-v001.xml', emmo_based=True, prefix=None, prefix_emmo=None, **kwargs)
","text":"Load the ontology.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.load--arguments","title":"Arguments","text":"bool
Whether to only read local files. This requires that you have appended the path to the ontology to owlready2.onto_path.
str
Path to file to load the ontology from. Defaults to base_iri
provided to get_ontology().
str
Format of filename
. Default is inferred from filename
extension.
bool
Whether to reload the ontology if it is already loaded.
bool
Whether to reload the ontology if the source has changed since last time it was loaded.
bool | None
Whether to use catalog file to resolve the location of base_iri
. If None, the catalog file is used if it exists in the same directory as filename
.
str
Name of Prot\u00e8g\u00e8 catalog file in the same folder as the ontology. This option is used together with only_local
and defaults to \"catalog-v001.xml\".
bool
Whether this is an EMMO-based ontology or not, default True
.
prefix: defaults to self.get_namespace.name if
bool, default None. If emmo_based is True it
defaults to True and sets the prefix of all imported ontologies with base_iri starting with 'http://emmo.info/emmo' to emmo
Kwargs
Additional keyword arguments are passed on to owlready2.Ontology.load().
Source code inontopy/ontology.py
def load( # pylint: disable=too-many-arguments,arguments-renamed\n self,\n only_local=False,\n filename=None,\n format=None, # pylint: disable=redefined-builtin\n reload=None,\n reload_if_newer=False,\n url_from_catalog=None,\n catalog_file=\"catalog-v001.xml\",\n emmo_based=True,\n prefix=None,\n prefix_emmo=None,\n **kwargs,\n):\n \"\"\"Load the ontology.\n\n Arguments\n ---------\n only_local: bool\n Whether to only read local files. This requires that you\n have appended the path to the ontology to owlready2.onto_path.\n filename: str\n Path to file to load the ontology from. Defaults to `base_iri`\n provided to get_ontology().\n format: str\n Format of `filename`. Default is inferred from `filename`\n extension.\n reload: bool\n Whether to reload the ontology if it is already loaded.\n reload_if_newer: bool\n Whether to reload the ontology if the source has changed since\n last time it was loaded.\n url_from_catalog: bool | None\n Whether to use catalog file to resolve the location of `base_iri`.\n If None, the catalog file is used if it exists in the same\n directory as `filename`.\n catalog_file: str\n Name of Prot\u00e8g\u00e8 catalog file in the same folder as the\n ontology. This option is used together with `only_local` and\n defaults to \"catalog-v001.xml\".\n emmo_based: bool\n Whether this is an EMMO-based ontology or not, default `True`.\n prefix: defaults to self.get_namespace.name if\n prefix_emmo: bool, default None. If emmo_based is True it\n defaults to True and sets the prefix of all imported ontologies\n with base_iri starting with 'http://emmo.info/emmo' to emmo\n kwargs:\n Additional keyword arguments are passed on to\n owlready2.Ontology.load().\n \"\"\"\n # TODO: make sure that `only_local` argument is respected...\n\n if self.loaded:\n return self\n self._load(\n only_local=only_local,\n filename=filename,\n format=format,\n reload=reload,\n reload_if_newer=reload_if_newer,\n url_from_catalog=url_from_catalog,\n catalog_file=catalog_file,\n **kwargs,\n )\n\n # Enable optimised search by get_by_label()\n if self._special_labels is None and emmo_based:\n top = self.world[\"http://www.w3.org/2002/07/owl#topObjectProperty\"]\n self._special_labels = {\n \"Thing\": owlready2.Thing,\n \"Nothing\": owlready2.Nothing,\n \"topObjectProperty\": top,\n \"owl:Thing\": owlready2.Thing,\n \"owl:Nothing\": owlready2.Nothing,\n \"owl:topObjectProperty\": top,\n }\n # set prefix if another prefix is desired\n # if we do this, shouldn't we make the name of all\n # entities of the given ontology to the same?\n if prefix:\n self.prefix = prefix\n else:\n self.prefix = self.name\n\n if emmo_based and prefix_emmo is None:\n prefix_emmo = True\n if prefix_emmo:\n self.set_common_prefix()\n\n return self\n
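A sketch of two typical load() calls (the local file name is a hypothetical placeholder):

from ontopy import get_ontology

# Load inferred EMMO directly from its URL; the format is inferred from the .ttl extension.
onto = get_ontology(
    "https://emmo-repo.github.io/versions/1.0.0-beta4/emmo-inferred.ttl"
).load()

# Load a local ontology and resolve imports via a catalog file in the same folder.
local = get_ontology("myonto.ttl").load(
    url_from_catalog=True,
    catalog_file="catalog-v001.xml",
)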
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_annotation_property","title":"new_annotation_property(self, name, parent)
","text":"Create and return new annotation property.
Parameters:
name (str, required): name of the annotation property.
parent (Union[owlready2.annotation.AnnotationPropertyClass, collections.abc.Iterable], required): parent(s) of the annotation property.
Returns:
AnnotationPropertyClass: the new annotation property.
Source code in ontopy/ontology.py
def new_annotation_property(\n self, name: str, parent: Union[AnnotationPropertyClass, Iterable]\n) -> AnnotationPropertyClass:\n \"\"\"Create and return new annotation property.\n\n Args:\n name: name of the annotation property\n parent: parent(s) of the annotation property\n\n Returns:\n the new annotation property.\n \"\"\"\n return self.new_entity(name, parent, \"annotation_property\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_class","title":"new_class(self, name, parent)
","text":"Create and return new class.
Parameters:
name (str, required): name of the class.
parent (Union[owlready2.entity.ThingClass, collections.abc.Iterable], required): parent(s) of the class.
Returns:
ThingClass: the new class.
Source code in ontopy/ontology.py
def new_class(\n self, name: str, parent: Union[ThingClass, Iterable]\n) -> ThingClass:\n \"\"\"Create and return new class.\n\n Args:\n name: name of the class\n parent: parent(s) of the class\n\n Returns:\n the new class.\n \"\"\"\n return self.new_entity(name, parent, \"class\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_data_property","title":"new_data_property(self, name, parent)
","text":"Create and return new data property.
Parameters:
name (str, required): name of the data property.
parent (Union[owlready2.prop.DataPropertyClass, collections.abc.Iterable], required): parent(s) of the data property.
Returns:
DataPropertyClass: the new data property.
Source code in ontopy/ontology.py
def new_data_property(\n self, name: str, parent: Union[DataPropertyClass, Iterable]\n) -> DataPropertyClass:\n \"\"\"Create and return new data property.\n\n Args:\n name: name of the data property\n parent: parent(s) of the data property\n\n Returns:\n the new data property.\n \"\"\"\n return self.new_entity(name, parent, \"data_property\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_entity","title":"new_entity(self, name, parent, entitytype='class', preflabel=None)
","text":"Create and return new entity
Parameters:
name (str, required): name of the entity.
parent (Union[owlready2.entity.ThingClass, owlready2.prop.ObjectPropertyClass, owlready2.prop.DataPropertyClass, owlready2.annotation.AnnotationPropertyClass, collections.abc.Iterable], required): parent(s) of the entity.
entitytype (Union[str, owlready2.entity.ThingClass, owlready2.prop.ObjectPropertyClass, owlready2.prop.DataPropertyClass, owlready2.annotation.AnnotationPropertyClass], default 'class'): type of the entity. The default is 'class' (str) or ThingClass (owlready2 Python class). Other options are 'data_property', 'object_property' and 'annotation_property' (strings), or the Python classes ObjectPropertyClass, DataPropertyClass and AnnotationPropertyClass.
preflabel (Optional[str], default None): if given, add this as a skos:prefLabel annotation to the new entity. If None (default), name will be added as prefLabel if skos:prefLabel is in the ontology and listed in self.label_annotations. Set preflabel to False to avoid assigning a prefLabel.
Returns:
Union[owlready2.entity.ThingClass, owlready2.prop.ObjectPropertyClass, owlready2.prop.DataPropertyClass, owlready2.annotation.AnnotationPropertyClass]: the new entity.
Throws an exception if name consists of more than one word, if type is not one of the allowed types, or if parent is not of the correct type. By default, the parent is Thing.
Source code in ontopy/ontology.py
def new_entity(\n self,\n name: str,\n parent: Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n Iterable,\n ],\n entitytype: Optional[\n Union[\n str,\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]\n ] = \"class\",\n preflabel: Optional[str] = None,\n) -> Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n]:\n \"\"\"Create and return new entity\n\n Args:\n name: name of the entity\n parent: parent(s) of the entity\n entitytype: type of the entity,\n default is 'class' (str) 'ThingClass' (owlready2 Python class).\n Other options\n are 'data_property', 'object_property',\n 'annotation_property' (strings) or the\n Python classes ObjectPropertyClass,\n DataPropertyClass and AnnotationProperty classes.\n preflabel: if given, add this as a skos:prefLabel annotation\n to the new entity. If None (default), `name` will\n be added as prefLabel if skos:prefLabel is in the ontology\n and listed in `self.label_annotations`. Set `preflabel` to\n False, to avoid assigning a prefLabel.\n\n Returns:\n the new entity.\n\n Throws exception if name consists of more than one word, if type is not\n one of the allowed types, or if parent is not of the correct type.\n By default, the parent is Thing.\n\n \"\"\"\n # pylint: disable=invalid-name\n if \" \" in name:\n raise LabelDefinitionError(\n f\"Error in label name definition '{name}': \"\n f\"Label consists of more than one word.\"\n )\n parents = tuple(parent) if isinstance(parent, Iterable) else (parent,)\n if entitytype == \"class\":\n parenttype = owlready2.ThingClass\n elif entitytype == \"data_property\":\n parenttype = owlready2.DataPropertyClass\n elif entitytype == \"object_property\":\n parenttype = owlready2.ObjectPropertyClass\n elif entitytype == \"annotation_property\":\n parenttype = owlready2.AnnotationPropertyClass\n elif entitytype in [\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]:\n parenttype = entitytype\n else:\n raise EntityClassDefinitionError(\n f\"Error in entity type definition: \"\n f\"'{entitytype}' is not a valid entity type.\"\n )\n for thing in parents:\n if not isinstance(thing, parenttype):\n raise EntityClassDefinitionError(\n f\"Error in parent definition: \"\n f\"'{thing}' is not an {parenttype}.\"\n )\n\n with self:\n entity = types.new_class(name, parents)\n\n preflabel_iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n if preflabel:\n if not self.world[preflabel_iri]:\n pref_label = self.new_annotation_property(\n \"prefLabel\",\n parent=[owlready2.AnnotationProperty],\n )\n pref_label.iri = preflabel_iri\n entity.prefLabel = english(preflabel)\n elif (\n preflabel is None\n and preflabel_iri in self.label_annotations\n and self.world[preflabel_iri]\n ):\n entity.prefLabel = english(name)\n\n return entity\n
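A usage sketch, assuming an EMMO-based ontology where the parent labels Atom and hasPart resolve (they are illustrative assumptions):

from ontopy import get_ontology

onto = get_ontology("emmo-inferred").load()

# New subclass; by default the name is also assigned as skos:prefLabel.
MyAtomType = onto.new_entity("MyAtomType", onto.Atom)

# Other entity types are selected with the entitytype argument.
hasMyRelation = onto.new_entity(
    "hasMyRelation", onto.hasPart, entitytype="object_property"
)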
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_object_property","title":"new_object_property(self, name, parent)
","text":"Create and return new object property.
Parameters:
name (str, required): name of the object property.
parent (Union[owlready2.prop.ObjectPropertyClass, collections.abc.Iterable], required): parent(s) of the object property.
Returns:
ObjectPropertyClass: the new object property.
Source code in ontopy/ontology.py
def new_object_property(\n self, name: str, parent: Union[ObjectPropertyClass, Iterable]\n) -> ObjectPropertyClass:\n \"\"\"Create and return new object property.\n\n Args:\n name: name of the object property\n parent: parent(s) of the object property\n\n Returns:\n the new object property.\n \"\"\"\n return self.new_entity(name, parent, \"object_property\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.number_of_generations","title":"number_of_generations(self, descendant, ancestor)
","text":"Return shortest distance from ancestor to descendant
Source code in ontopy/ontology.py
def number_of_generations(self, descendant, ancestor):\n \"\"\"Return shortest distance from ancestor to descendant\"\"\"\n if ancestor not in descendant.ancestors():\n raise ValueError(\"Descendant is not a descendant of ancestor\")\n return self._number_of_generations(descendant, ancestor, 0)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.object_properties","title":"object_properties(self, imported=False)
","text":"Returns an generator over all object_properties.
Parameters:
Name Type Description Defaultimported
if True
, entities in imported ontologies are also returned.
False
Source code in ontopy/ontology.py
def object_properties(self, imported=False):\n \"\"\"Returns an generator over all object_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"object_properties\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.remove_label_annotation","title":"remove_label_annotation(self, iri)
","text":"Removes label annotation used by get_by_label().
Source code in ontopy/ontology.py
def remove_label_annotation(self, iri):\n \"\"\"Removes label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.remove_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n try:\n self.label_annotations.remove(iri)\n except ValueError:\n pass\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.rename_entities","title":"rename_entities(self, annotations=('prefLabel', 'label', 'altLabel'))
","text":"Set name
of all entities to the first non-empty annotation in annotations
.
Warning, this method changes all IRIs in the ontology. However, it may be useful to make the ontology more readable and to work with it together with a triple store.
Source code inontopy/ontology.py
def rename_entities(\n self,\n annotations=(\"prefLabel\", \"label\", \"altLabel\"),\n):\n \"\"\"Set `name` of all entities to the first non-empty annotation in\n `annotations`.\n\n Warning, this method changes all IRIs in the ontology. However,\n it may be useful to make the ontology more readable and to work\n with it together with a triple store.\n \"\"\"\n for entity in self.get_entities():\n for annotation in annotations:\n if hasattr(entity, annotation):\n name = getattr(entity, annotation).first()\n if name:\n entity.name = name\n break\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.save","title":"save(self, filename=None, format=None, dir='.', mkdir=False, overwrite=False, recursive=False, squash=False, write_catalog_file=False, append_catalog=False, catalog_file='catalog-v001.xml', **kwargs)
","text":"Writes the ontology to file.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.save--parameters","title":"Parameters","text":"None | str | Path
Name of file to write to. If None, it defaults to the name of the ontology with format
as file extension.
str
Output format. The default is to infer it from filename
.
str | Path
If filename
is a relative path, it is a relative path to dir
.
bool
Whether to create output directory if it does not exists.
bool
If true and filename
exists, remove the existing file before saving. The default is to append to an existing ontology.
bool
Whether to save imported ontologies recursively. This is commonly combined with filename=None
, dir
and mkdir
. Note that depending on the structure of the ontology and all imports the ontology might end up in a subdirectory. If filename is given, the ontology is saved to the given directory. The path to the final location is returned.
bool
If true, rdflib will be used to save the current ontology together with all its sub-ontologies into filename
. It makes no sense to combine this with recursive
.
bool
Whether to also write a catalog file to disk.
bool
Whether to append to an existing catalog file.
str | Path
Name of catalog file. If not an absolute path, it is prepended to dir
.
The path to the saved ontology.\n
Source code in ontopy/ontology.py
def save(\n self,\n filename=None,\n format=None,\n dir=\".\",\n mkdir=False,\n overwrite=False,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n append_catalog=False,\n catalog_file=\"catalog-v001.xml\",\n **kwargs,\n) -> Path:\n \"\"\"Writes the ontology to file.\n\n Parameters\n ----------\n filename: None | str | Path\n Name of file to write to. If None, it defaults to the name\n of the ontology with `format` as file extension.\n format: str\n Output format. The default is to infer it from `filename`.\n dir: str | Path\n If `filename` is a relative path, it is a relative path to `dir`.\n mkdir: bool\n Whether to create output directory if it does not exists.\n owerwrite: bool\n If true and `filename` exists, remove the existing file before\n saving. The default is to append to an existing ontology.\n recursive: bool\n Whether to save imported ontologies recursively. This is\n commonly combined with `filename=None`, `dir` and `mkdir`.\n Note that depending on the structure of the ontology and\n all imports the ontology might end up in a subdirectory.\n If filename is given, the ontology is saved to the given\n directory.\n The path to the final location is returned.\n squash: bool\n If true, rdflib will be used to save the current ontology\n together with all its sub-ontologies into `filename`.\n It makes no sense to combine this with `recursive`.\n write_catalog_file: bool\n Whether to also write a catalog file to disk.\n append_catalog: bool\n Whether to append to an existing catalog file.\n catalog_file: str | Path\n Name of catalog file. If not an absolute path, it is prepended\n to `dir`.\n\n Returns\n --------\n The path to the saved ontology.\n \"\"\"\n # pylint: disable=redefined-builtin,too-many-arguments\n # pylint: disable=too-many-statements,too-many-branches\n # pylint: disable=too-many-locals,arguments-renamed,invalid-name\n\n if not _validate_installed_version(\n package=\"rdflib\", min_version=\"6.0.0\"\n ) and format == FMAP.get(\"ttl\", \"\"):\n from rdflib import ( # pylint: disable=import-outside-toplevel\n __version__ as __rdflib_version__,\n )\n\n warnings.warn(\n IncompatibleVersion(\n \"To correctly convert to Turtle format, rdflib must be \"\n \"version 6.0.0 or greater, however, the detected rdflib \"\n \"version used by your Python interpreter is \"\n f\"{__rdflib_version__!r}. 
For more information see the \"\n \"'Known issues' section of the README.\"\n )\n )\n revmap = {value: key for key, value in FMAP.items()}\n if filename is None:\n if format:\n fmt = revmap.get(format, format)\n file = f\"{self.name}.{fmt}\"\n else:\n raise TypeError(\"`filename` and `format` cannot both be None.\")\n else:\n file = filename\n filepath = os.path.join(\n dir, file if isinstance(file, (str, Path)) else file.name\n )\n returnpath = filepath\n\n dir = Path(filepath).resolve().parent\n\n if mkdir:\n outdir = Path(filepath).parent.resolve()\n if not outdir.exists():\n outdir.mkdir(parents=True)\n\n if not format:\n format = guess_format(file, fmap=FMAP)\n fmt = revmap.get(format, format)\n\n if overwrite and os.path.exists(filepath):\n os.remove(filepath)\n\n if recursive:\n if squash:\n raise ValueError(\n \"`recursive` and `squash` should not both be true\"\n )\n layout = directory_layout(self)\n if filename:\n layout[self] = file.rstrip(f\".{fmt}\")\n # Update path to where the ontology is saved\n # Note that filename should include format\n # when given\n returnpath = Path(dir) / f\"{layout[self]}.{fmt}\"\n for onto, path in layout.items():\n fname = Path(dir) / f\"{path}.{fmt}\"\n onto.save(\n filename=fname,\n format=format,\n dir=dir,\n mkdir=mkdir,\n overwrite=overwrite,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n **kwargs,\n )\n\n if write_catalog_file:\n catalog_files = set()\n irimap = {}\n for onto, path in layout.items():\n irimap[onto.get_version(as_iri=True)] = (\n f\"{dir}/{path}.{fmt}\"\n )\n catalog_files.add(Path(path).parent / catalog_file)\n\n for catfile in catalog_files:\n write_catalog(\n irimap.copy(),\n output=catfile,\n directory=dir,\n append=append_catalog,\n )\n elif squash:\n URIRef, RDF, OWL = rdflib.URIRef, rdflib.RDF, rdflib.OWL\n\n # Make a copy of the owlready2 graph object to not mess with\n # owlready2 internals\n graph = rdflib.Graph()\n for triple in self.world.as_rdflib_graph():\n graph.add(triple)\n\n # Add common namespaces unknown to rdflib\n extra_namespaces = [\n (\"\", self.base_iri),\n (\"swrl\", \"http://www.w3.org/2003/11/swrl#\"),\n (\"bibo\", \"http://purl.org/ontology/bibo/\"),\n ]\n for prefix, iri in extra_namespaces:\n graph.namespace_manager.bind(\n prefix, rdflib.Namespace(iri), override=False\n )\n\n # Remove all ontology-declarations in the graph that are\n # not the current ontology.\n for s, _, _ in graph.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n if str(s).rstrip(\"/#\") != self.base_iri.rstrip(\"/#\"):\n for (\n _,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (s, None, None)\n ):\n graph.remove((s, p, o))\n graph.remove((s, OWL.imports, None))\n\n # Insert correct IRI of the ontology\n if self.iri:\n base_iri = URIRef(self.base_iri)\n for s, p, o in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((URIRef(self.iri), p, o))\n\n graph.serialize(destination=filepath, format=format)\n elif format in OWLREADY2_FORMATS:\n super().save(file=filepath, format=fmt, **kwargs)\n else:\n # The try-finally clause is needed for cleanup and because\n # we have to provide delete=False to NamedTemporaryFile\n # since Windows does not allow to reopen an already open\n # file.\n try:\n with tempfile.NamedTemporaryFile(\n suffix=\".owl\", delete=False\n ) as handle:\n tmpfile = handle.name\n super().save(tmpfile, format=\"ntriples\", **kwargs)\n graph = rdflib.Graph()\n 
graph.parse(tmpfile, format=\"ntriples\")\n graph.namespace_manager.bind(\n \"\", rdflib.Namespace(self.base_iri)\n )\n if self.iri:\n base_iri = rdflib.URIRef(self.base_iri)\n for (\n s,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((rdflib.URIRef(self.iri), p, o))\n graph.serialize(destination=filepath, format=format)\n finally:\n os.remove(tmpfile)\n\n if write_catalog_file and not recursive:\n write_catalog(\n {self.get_version(as_iri=True): filepath},\n output=catalog_file,\n directory=dir,\n append=append_catalog,\n )\n return Path(returnpath)\n
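A sketch of two common save() patterns (output file and directory names are hypothetical):

from ontopy import get_ontology

onto = get_ontology("emmo-inferred").load()

# Squash the ontology and all its imports into a single Turtle file.
onto.save("emmo-squashed.ttl", format="turtle", squash=True, overwrite=True)

# Save each imported ontology to its own file and write a catalog file.
onto.save(
    format="turtle",
    dir="output",
    mkdir=True,
    recursive=True,
    write_catalog_file=True,
)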
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.set_common_prefix","title":"set_common_prefix(self, iri_base='http://emmo.info/emmo', prefix='emmo', visited=None)
","text":"Set a common prefix for all imported ontologies with the same first part of the base_iri.
Parameters:
iri_base (str, default 'http://emmo.info/emmo'): The start of the base_iri to look for. Defaults to the EMMO base_iri http://emmo.info/emmo.
prefix (str, default 'emmo'): the desired prefix. Defaults to emmo.
visited (Optional[Set], default None): Ontologies to skip. Only intended for internal use.
Source code in ontopy/ontology.py
def set_common_prefix(\n self,\n iri_base: str = \"http://emmo.info/emmo\",\n prefix: str = \"emmo\",\n visited: \"Optional[Set]\" = None,\n) -> None:\n \"\"\"Set a common prefix for all imported ontologies\n with the same first part of the base_iri.\n\n Args:\n iri_base: The start of the base_iri to look for. Defaults to\n the emmo base_iri http://emmo.info/emmo\n prefix: the desired prefix. Defaults to emmo.\n visited: Ontologies to skip. Only intended for internal use.\n \"\"\"\n if visited is None:\n visited = set()\n if self.base_iri.startswith(iri_base):\n self.prefix = prefix\n for onto in self.imported_ontologies:\n if not onto in visited:\n visited.add(onto)\n onto.set_common_prefix(\n iri_base=iri_base, prefix=prefix, visited=visited\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.set_default_label_annotations","title":"set_default_label_annotations(self)
","text":"Sets the default label annotations.
Source code in ontopy/ontology.py
def set_default_label_annotations(self):\n \"\"\"Sets the default label annotations.\"\"\"\n warnings.warn(\n \"Ontology.set_default_label_annotations() is deprecated. \"\n \"Default label annotations are set by Ontology.__init__(). \",\n DeprecationWarning,\n stacklevel=2,\n )\n self.label_annotations = DEFAULT_LABEL_ANNOTATIONS[:]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.set_version","title":"set_version(self, version=None, version_iri=None)
","text":"Assign version to ontology by asigning owl:versionIRI.
If version
but not version_iri
is provided, the version IRI will be the combination of base_iri
and version
.
ontopy/ontology.py
def set_version(self, version=None, version_iri=None):\n \"\"\"Assign version to ontology by asigning owl:versionIRI.\n\n If `version` but not `version_iri` is provided, the version\n IRI will be the combination of `base_iri` and `version`.\n \"\"\"\n _version_iri = \"http://www.w3.org/2002/07/owl#versionIRI\"\n version_iri_storid = self.world._abbreviate(_version_iri)\n if self._has_obj_triple_spo( # pylint: disable=unexpected-keyword-arg\n # For some reason _has_obj_triples_spo exists in both\n # owlready2.namespace.Namespace (with arguments subject/predicate)\n # and in owlready2.triplelite._GraphManager (with arguments s/p)\n # owlready2.Ontology inherits from Namespace directly\n # and pylint checks that.\n # It actually accesses the one in triplelite.\n # subject=self.storid, predicate=version_iri_storid\n s=self.storid,\n p=version_iri_storid,\n ):\n self._del_obj_triple_spo(s=self.storid, p=version_iri_storid)\n\n if not version_iri:\n if not version:\n raise TypeError(\n \"Either `version` or `version_iri` must be provided\"\n )\n head, tail = self.base_iri.rstrip(\"#/\").rsplit(\"/\", 1)\n version_iri = \"/\".join([head, version, tail])\n\n self._add_obj_triple_spo(\n s=self.storid,\n p=self.world._abbreviate(_version_iri),\n o=self.world._abbreviate(version_iri),\n )\n
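A minimal sketch with a hypothetical base IRI, showing how the version IRI is composed when only version is given:

from ontopy import get_ontology

onto = get_ontology("http://example.com/myonto#")
onto.set_version(version="0.1.0")
# The versionIRI is composed from base_iri and version,
# here roughly http://example.com/0.1.0/myonto
print(onto.get_version(as_iri=True))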
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.sync_attributes","title":"sync_attributes(self, name_policy=None, name_prefix='', class_docstring='comment', sync_imported=False)
","text":"This method is intended to be called after you have added new classes (typically via Python) to make sure that attributes like label
and comments
are defined.
If a class, object property, data property or annotation property in the current ontology has no label, the name of the corresponding Python class will be assigned as label.
If a class, object property, data property or annotation property has no comment, it will be assigned the docstring of the corresponding Python class.
name_policy
specify wether and how the names in the ontology should be updated. Valid values are: None not changed \"uuid\" name_prefix
followed by a global unique id (UUID). If the name is already valid accoridng to this standard it will not be regenerated. \"sequential\" name_prefix
followed a sequantial number. EMMO conventions imply name_policy=='uuid'
.
If sync_imported
is true, all imported ontologies are also updated.
The class_docstring
argument specifies the annotation that class docstrings are mapped to. Defaults to \"comment\".
ontopy/ontology.py
def sync_attributes( # pylint: disable=too-many-branches\n self,\n name_policy=None,\n name_prefix=\"\",\n class_docstring=\"comment\",\n sync_imported=False,\n):\n \"\"\"This method is intended to be called after you have added new\n classes (typically via Python) to make sure that attributes like\n `label` and `comments` are defined.\n\n If a class, object property, data property or annotation\n property in the current ontology has no label, the name of\n the corresponding Python class will be assigned as label.\n\n If a class, object property, data property or annotation\n property has no comment, it will be assigned the docstring of\n the corresponding Python class.\n\n `name_policy` specify wether and how the names in the ontology\n should be updated. Valid values are:\n None not changed\n \"uuid\" `name_prefix` followed by a global unique id (UUID).\n If the name is already valid accoridng to this standard\n it will not be regenerated.\n \"sequential\" `name_prefix` followed a sequantial number.\n EMMO conventions imply ``name_policy=='uuid'``.\n\n If `sync_imported` is true, all imported ontologies are also\n updated.\n\n The `class_docstring` argument specifies the annotation that\n class docstrings are mapped to. Defaults to \"comment\".\n \"\"\"\n for cls in itertools.chain(\n self.classes(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n ):\n if not hasattr(cls, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=unused-variable\n class prefLabel(owlready2.label):\n pass\n\n cls.prefLabel = [locstr(cls.__name__, lang=\"en\")]\n elif not cls.prefLabel:\n cls.prefLabel.append(locstr(cls.__name__, lang=\"en\"))\n if class_docstring and hasattr(cls, \"__doc__\") and cls.__doc__:\n getattr(cls, class_docstring).append(\n locstr(inspect.cleandoc(cls.__doc__), lang=\"en\")\n )\n\n for ind in self.individuals():\n if not hasattr(ind, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=function-redefined\n class prefLabel(owlready2.label):\n iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n\n ind.prefLabel = [locstr(ind.name, lang=\"en\")]\n elif not ind.prefLabel:\n ind.prefLabel.append(locstr(ind.name, lang=\"en\"))\n\n chain = itertools.chain(\n self.classes(),\n self.individuals(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n )\n if name_policy == \"uuid\":\n for obj in chain:\n try:\n # Passing the following means that the name is valid\n # and need not be regenerated.\n if not obj.name.startswith(name_prefix):\n raise ValueError\n uuid.UUID(obj.name.lstrip(name_prefix), version=5)\n except ValueError:\n obj.name = name_prefix + str(\n uuid.uuid5(uuid.NAMESPACE_DNS, obj.name)\n )\n elif name_policy == \"sequential\":\n for obj in chain:\n counter = 0\n while f\"{self.base_iri}{name_prefix}{counter}\" in self:\n counter += 1\n obj.name = f\"{name_prefix}{counter}\"\n elif name_policy is not None:\n raise TypeError(f\"invalid name_policy: {name_policy!r}\")\n\n if sync_imported:\n for onto in self.imported_ontologies:\n onto.sync_attributes()\n
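For illustration, a hedged sketch of adding a class in Python and then synchronising attributes with EMMO-style UUID names (the parent label Atom and the new class are hypothetical):

from ontopy import get_ontology

onto = get_ontology("emmo-inferred").load()

with onto:
    class HypotheticalConcept(onto.Atom):
        """Docstring that sync_attributes() copies into a comment annotation."""

# Add missing prefLabel/comment and rename new entities to EMMO-style UUID names.
onto.sync_attributes(name_policy="uuid", name_prefix="EMMO_")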
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.sync_python_names","title":"sync_python_names(self, annotations=('prefLabel', 'label', 'altLabel'))
","text":"Update the python_name
attribute of all properties.
The python_name attribute will be set to the first non-empty annotation in the sequence of annotations in annotations
for the property.
ontopy/ontology.py
def sync_python_names(self, annotations=(\"prefLabel\", \"label\", \"altLabel\")):\n \"\"\"Update the `python_name` attribute of all properties.\n\n The python_name attribute will be set to the first non-empty\n annotation in the sequence of annotations in `annotations` for\n the property.\n \"\"\"\n\n def update(gen):\n for prop in gen:\n for annotation in annotations:\n if hasattr(prop, annotation) and getattr(prop, annotation):\n prop.python_name = getattr(prop, annotation).first()\n break\n\n update(\n self.get_entities(\n classes=False,\n individuals=False,\n object_properties=False,\n data_properties=False,\n )\n )\n update(\n self.get_entities(\n classes=False, individuals=False, annotation_properties=False\n )\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.sync_reasoner","title":"sync_reasoner(self, reasoner='HermiT', include_imported=False, **kwargs)
","text":"Update current ontology by running the given reasoner.
Supported values for reasoner are 'HermiT' (default), 'Pellet' and 'FaCT++'.
If include_imported is true, the reasoner will also reason over imported ontologies. Note that this may be very slow.
Keyword arguments are passed to the underlying owlready2 function.
Source code in ontopy/ontology.py
def sync_reasoner(\n self, reasoner=\"HermiT\", include_imported=False, **kwargs\n):\n \"\"\"Update current ontology by running the given reasoner.\n\n Supported values for `reasoner` are 'HermiT' (default), Pellet\n and 'FaCT++'.\n\n If `include_imported` is true, the reasoner will also reason\n over imported ontologies. Note that this may be **very** slow.\n\n Keyword arguments are passed to the underlying owlready2 function.\n \"\"\"\n # pylint: disable=too-many-branches\n\n removed_equivalent = defaultdict(list)\n removed_subclasses = defaultdict(list)\n\n if reasoner == \"FaCT++\":\n sync = sync_reasoner_factpp\n elif reasoner == \"Pellet\":\n sync = owlready2.sync_reasoner_pellet\n elif reasoner == \"HermiT\":\n sync = owlready2.sync_reasoner_hermit\n\n # Remove custom data propertyes, otherwise HermiT will crash\n datatype_iri = \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n\n for cls in self.classes(imported=include_imported):\n remove_eq = []\n for i, r in enumerate(cls.equivalent_to):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_eq.append(i)\n removed_equivalent[cls].append(r)\n for i in reversed(remove_eq):\n del cls.equivalent_to[i]\n\n remove_subcls = []\n for i, r in enumerate(cls.is_a):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_subcls.append(i)\n removed_subclasses[cls].append(r)\n for i in reversed(remove_subcls):\n del cls.is_a[i]\n\n else:\n raise ValueError(\n f\"Unknown reasoner '{reasoner}'. Supported reasoners \"\n \"are 'Pellet', 'HermiT' and 'FaCT++'.\"\n )\n\n # For some reason we must visit all entities once before running\n # the reasoner...\n list(self.get_entities())\n\n with self:\n if include_imported:\n sync(self.world, **kwargs)\n else:\n sync(self, **kwargs)\n\n # Restore removed custom data properties\n for cls, eqs in removed_equivalent.items():\n cls.extend(eqs)\n for cls, subcls in removed_subclasses.items():\n cls.extend(subcls)\n
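A minimal usage sketch (the chosen reasoner must of course be available in the environment):

from ontopy import get_ontology

onto = get_ontology("emmo-inferred").load()

# Reason over this ontology only, using FaCT++ ...
onto.sync_reasoner(reasoner="FaCT++")

# ... or over the ontology and all its imports, using HermiT (may be very slow).
onto.sync_reasoner(reasoner="HermiT", include_imported=True)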
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.World","title":" World (World)
","text":"A subclass of owlready2.World.
Source code in ontopy/ontology.py
class World(owlready2.World):\n \"\"\"A subclass of owlready2.World.\"\"\"\n\n def __init__(self, *args, **kwargs):\n # Caches stored in the world\n self._cached_catalogs = {} # maps url to (mtime, iris, dirs)\n self._iri_mappings = {} # all iri mappings loaded so far\n super().__init__(*args, **kwargs)\n\n def get_ontology(\n self,\n base_iri: str = \"emmo-inferred\",\n OntologyClass: \"owlready2.Ontology\" = None,\n label_annotations: \"Sequence\" = None,\n ) -> \"Ontology\":\n # pylint: disable=too-many-branches\n \"\"\"Returns a new Ontology from `base_iri`.\n\n Arguments:\n base_iri: The base IRI of the ontology. May be one of:\n - valid URL (possible excluding final .owl or .ttl)\n - file name (possible excluding final .owl or .ttl)\n - \"emmo\": load latest version of asserted EMMO\n - \"emmo-inferred\": load latest version of inferred EMMO\n (default)\n - \"emmo-development\": load latest inferred development\n version of EMMO. Until first stable release\n emmo-inferred and emmo-development will be the same.\n OntologyClass: If given and `base_iri` doesn't correspond\n to an existing ontology, a new ontology is created of\n this Ontology subclass. Defaults to `ontopy.Ontology`.\n label_annotations: Sequence of label IRIs used for accessing\n entities in the ontology given that they are in the ontology.\n Label IRIs not in the ontology will need to be added to\n ontologies in order to be accessible.\n Defaults to DEFAULT_LABEL_ANNOTATIONS if set to None.\n \"\"\"\n base_iri = base_iri.as_uri() if isinstance(base_iri, Path) else base_iri\n\n if base_iri == \"emmo\":\n base_iri = (\n \"http://emmo-repo.github.io/versions/1.0.0-beta4/emmo.ttl\"\n )\n elif base_iri == \"emmo-inferred\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta4/\"\n \"emmo-inferred.ttl\"\n )\n elif base_iri == \"emmo-development\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta5/\"\n \"emmo-inferred.ttl\"\n )\n\n if base_iri in self.ontologies:\n onto = self.ontologies[base_iri]\n elif base_iri + \"#\" in self.ontologies:\n onto = self.ontologies[base_iri + \"#\"]\n elif base_iri + \"/\" in self.ontologies:\n onto = self.ontologies[base_iri + \"/\"]\n else:\n if os.path.exists(base_iri):\n iri = os.path.abspath(base_iri)\n elif os.path.exists(base_iri + \".ttl\"):\n iri = os.path.abspath(base_iri + \".ttl\")\n elif os.path.exists(base_iri + \".owl\"):\n iri = os.path.abspath(base_iri + \".owl\")\n else:\n iri = base_iri\n\n if iri[-1] not in \"/#\":\n iri += \"#\"\n\n if OntologyClass is None:\n OntologyClass = Ontology\n\n onto = OntologyClass(self, iri)\n\n if label_annotations:\n onto.label_annotations = list(label_annotations)\n\n return onto\n\n def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n ):\n # pylint: disable=invalid-name\n \"\"\"Returns all triples unabbreviated.\n\n If any of the `subject`, `predicate` or `obj` arguments are given,\n only matching triples will be returned.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.World.get_ontology","title":"get_ontology(self, base_iri='emmo-inferred', OntologyClass=None, label_annotations=None)
","text":"Returns a new Ontology from base_iri
.
Parameters:
Name Type Description Defaultbase_iri
str
The base IRI of the ontology. May be one of: - valid URL (possible excluding final .owl or .ttl) - file name (possible excluding final .owl or .ttl) - \"emmo\": load latest version of asserted EMMO - \"emmo-inferred\": load latest version of inferred EMMO (default) - \"emmo-development\": load latest inferred development version of EMMO. Until first stable release emmo-inferred and emmo-development will be the same.
'emmo-inferred'
OntologyClass
owlready2.Ontology
If given and base_iri
doesn't correspond to an existing ontology, a new ontology is created of this Ontology subclass. Defaults to ontopy.Ontology
.
None
label_annotations
Sequence
Sequence of label IRIs used for accessing entities in the ontology given that they are in the ontology. Label IRIs not in the ontology will need to be added to ontologies in order to be accessible. Defaults to DEFAULT_LABEL_ANNOTATIONS if set to None.
None
Source code in ontopy/ontology.py
def get_ontology(\n self,\n base_iri: str = \"emmo-inferred\",\n OntologyClass: \"owlready2.Ontology\" = None,\n label_annotations: \"Sequence\" = None,\n) -> \"Ontology\":\n # pylint: disable=too-many-branches\n \"\"\"Returns a new Ontology from `base_iri`.\n\n Arguments:\n base_iri: The base IRI of the ontology. May be one of:\n - valid URL (possible excluding final .owl or .ttl)\n - file name (possible excluding final .owl or .ttl)\n - \"emmo\": load latest version of asserted EMMO\n - \"emmo-inferred\": load latest version of inferred EMMO\n (default)\n - \"emmo-development\": load latest inferred development\n version of EMMO. Until first stable release\n emmo-inferred and emmo-development will be the same.\n OntologyClass: If given and `base_iri` doesn't correspond\n to an existing ontology, a new ontology is created of\n this Ontology subclass. Defaults to `ontopy.Ontology`.\n label_annotations: Sequence of label IRIs used for accessing\n entities in the ontology given that they are in the ontology.\n Label IRIs not in the ontology will need to be added to\n ontologies in order to be accessible.\n Defaults to DEFAULT_LABEL_ANNOTATIONS if set to None.\n \"\"\"\n base_iri = base_iri.as_uri() if isinstance(base_iri, Path) else base_iri\n\n if base_iri == \"emmo\":\n base_iri = (\n \"http://emmo-repo.github.io/versions/1.0.0-beta4/emmo.ttl\"\n )\n elif base_iri == \"emmo-inferred\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta4/\"\n \"emmo-inferred.ttl\"\n )\n elif base_iri == \"emmo-development\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta5/\"\n \"emmo-inferred.ttl\"\n )\n\n if base_iri in self.ontologies:\n onto = self.ontologies[base_iri]\n elif base_iri + \"#\" in self.ontologies:\n onto = self.ontologies[base_iri + \"#\"]\n elif base_iri + \"/\" in self.ontologies:\n onto = self.ontologies[base_iri + \"/\"]\n else:\n if os.path.exists(base_iri):\n iri = os.path.abspath(base_iri)\n elif os.path.exists(base_iri + \".ttl\"):\n iri = os.path.abspath(base_iri + \".ttl\")\n elif os.path.exists(base_iri + \".owl\"):\n iri = os.path.abspath(base_iri + \".owl\")\n else:\n iri = base_iri\n\n if iri[-1] not in \"/#\":\n iri += \"#\"\n\n if OntologyClass is None:\n OntologyClass = Ontology\n\n onto = OntologyClass(self, iri)\n\n if label_annotations:\n onto.label_annotations = list(label_annotations)\n\n return onto\n
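A sketch of using a dedicated World to keep ontologies isolated from each other (the local file path is a placeholder):

from ontopy.ontology import World

world = World()
emmo = world.get_ontology("emmo-inferred").load()
local = world.get_ontology("path/to/myonto.ttl").load()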
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.World.get_unabbreviated_triples","title":"get_unabbreviated_triples(self, subject=None, predicate=None, obj=None, blank=None)
","text":"Returns all triples unabbreviated.
If any of the subject, predicate or obj arguments are given, only matching triples will be returned.
If blank is given, it will be used to represent blank nodes.
Source code in ontopy/ontology.py
def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n):\n # pylint: disable=invalid-name\n \"\"\"Returns all triples unabbreviated.\n\n If any of the `subject`, `predicate` or `obj` arguments are given,\n only matching triples will be returned.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.flatten","title":"flatten(items)
","text":"Yield items from any nested iterable.
Source code in ontopy/ontology.py
def flatten(items):\n \"\"\"Yield items from any nested iterable.\"\"\"\n for item in items:\n if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):\n yield from flatten(item)\n else:\n yield item\n
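For example (strings and bytes are treated as atomic items and are not descended into):

from ontopy.ontology import flatten

print(list(flatten([1, [2, [3, "abc"]], (4, 5)])))   # [1, 2, 3, 'abc', 4, 5]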
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.get_ontology","title":"get_ontology(*args, **kwargs)
","text":"Returns a new Ontology from base_iri
.
This is a convenient function for calling World.get_ontology().
Source code inontopy/ontology.py
def get_ontology(*args, **kwargs):\n \"\"\"Returns a new Ontology from `base_iri`.\n\n This is a convenient function for calling World.get_ontology().\"\"\"\n return World().get_ontology(*args, **kwargs)\n
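Note that each call creates its own World, so ontologies obtained this way live in separate worlds; a small sketch:

from ontopy import get_ontology

onto1 = get_ontology("emmo-inferred").load()
onto2 = get_ontology("emmo-inferred").load()
assert onto1.world is not onto2.world   # a fresh World per call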
"},{"location":"api_reference/ontopy/patch/","title":"patch","text":"This module injects some additional methods into owlready2 classes.
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.disjoint_with","title":"disjoint_with(self, reduce=False)
","text":"Returns a generator with all classes that are disjoint with self
.
If reduce
is True
, all classes that are a descendant of another class will be excluded.
ontopy/patch.py
def disjoint_with(self, reduce=False):\n \"\"\"Returns a generator with all classes that are disjoint with `self`.\n\n If `reduce` is `True`, all classes that are a descendant of another class\n will be excluded.\n \"\"\"\n if reduce:\n disjoint_set = set(self.disjoint_with())\n for entity in disjoint_set.copy():\n disjoint_set.difference_update(\n entity.descendants(include_self=False)\n )\n yield from disjoint_set\n else:\n for disjoint in self.disjoints():\n for entity in disjoint.entities:\n if entity is not self:\n yield entity\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_annotations","title":"get_annotations(self, all=False, imported=True)
","text":"Returns a dict with non-empty annotations.
If all is True, also annotations with no value are included.
If imported is True, also include annotations defined in imported ontologies.
Source code in ontopy/patch.py
def get_annotations(\n self, all=False, imported=True\n): # pylint: disable=redefined-builtin\n \"\"\"Returns a dict with non-empty annotations.\n\n If `all` is `True`, also annotations with no value are included.\n\n If `imported` is `True`, also include annotations defined in imported\n ontologies.\n \"\"\"\n onto = self.namespace.ontology\n\n def extend(key, values):\n \"\"\"Extend annotations with a sequence of values.\"\"\"\n if key in annotations:\n annotations[key].extend(values)\n else:\n annotations[key] = values\n\n annotations = {\n str(get_preferred_label(a)): a._get_values_for_class(self)\n for a in onto.annotation_properties(imported=imported)\n }\n extend(\"comment\", self.comment)\n extend(\"label\", self.label)\n if all:\n return annotations\n return {key: value for key, value in annotations.items() if value}\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_indirect_is_a","title":"get_indirect_is_a(self, skip_classes=True)
","text":"Returns the set of all isSubclassOf relations of self and its ancestors.
If skip_classes is True, indirect classes are not included in the returned set.
Source code in ontopy/patch.py
def get_indirect_is_a(self, skip_classes=True):\n \"\"\"Returns the set of all isSubclassOf relations of self and its ancestors.\n\n If `skip_classes` is `True`, indirect classes are not included in the\n returned set.\n \"\"\"\n subclass_relations = set()\n for entity in reversed(self.mro()):\n for attr in \"is_a\", \"equivalent_to\":\n if hasattr(entity, attr):\n lst = getattr(entity, attr)\n if skip_classes:\n subclass_relations.update(\n r\n for r in lst\n if not isinstance(r, owlready2.ThingClass)\n )\n else:\n subclass_relations.update(lst)\n\n subclass_relations.update(self.is_a)\n return subclass_relations\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_parents","title":"get_parents(self, strict=False)
","text":"Returns a list of all parents.
If strict is True, parents that are parents of other parents are excluded.
Source code in ontopy/patch.py
def get_parents(self, strict=False):\n \"\"\"Returns a list of all parents.\n\n If `strict` is `True`, parents that are parents of other parents are\n excluded.\n \"\"\"\n if strict:\n parents = self.get_parents()\n for entity in parents.copy():\n parents.difference_update(entity.ancestors(include_self=False))\n return parents\n if isinstance(self, ThingClass):\n return {cls for cls in self.is_a if isinstance(cls, ThingClass)}\n if isinstance(self, owlready2.ObjectPropertyClass):\n return {\n cls\n for cls in self.is_a\n if isinstance(cls, owlready2.ObjectPropertyClass)\n }\n raise EMMOntoPyException(\n \"self has no parents - this should not be possible!\"\n )\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_preferred_label","title":"get_preferred_label(self)
","text":"Returns the preferred label as a string (not list).
The following heuristic is used:
- if the prefLabel annotation property exists, return the first prefLabel
- if the label annotation property exists, return the first label
- otherwise return the name
Source code in ontopy/patch.py
def get_preferred_label(self):\n \"\"\"Returns the preferred label as a string (not list).\n\n The following heuristics is used:\n - if prefLabel annotation property exists, returns the first prefLabel\n - if label annotation property exists, returns the first label\n - otherwise return the name\n \"\"\"\n if hasattr(self, \"prefLabel\") and self.prefLabel:\n return self.prefLabel[0]\n if hasattr(self, \"label\") and self.label:\n return self.label.first()\n return self.name\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_typename","title":"get_typename(self)
","text":"Get restriction type label/name.
Source code in ontopy/patch.py
def get_typename(self):\n \"\"\"Get restriction type label/name.\"\"\"\n return owlready2.class_construct._restriction_type_2_label[self.type]\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.has","title":"has(self, name)
","text":"Returns true if name
ontopy/patch.py
def has(self, name):\n \"\"\"Returns true if `name`\"\"\"\n return name in set(self.keys())\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.items","title":"items(self)
","text":"Return a generator over annotation property (name, value_list) pairs associates with this ontology.
Source code inontopy/patch.py
def items(self):\n \"\"\"Return a generator over annotation property (name, value_list)\n pairs associates with this ontology.\"\"\"\n namespace = self.namespace\n for annotation in namespace.annotation_properties():\n if namespace._has_data_triple_spod(\n s=namespace.storid, p=annotation.storid\n ):\n yield annotation, getattr(self, annotation.name)\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.keys","title":"keys(self)
","text":"Return a generator over annotation property names associated with this ontology.
Source code in ontopy/patch.py
def keys(self):\n \"\"\"Return a generator over annotation property names associated\n with this ontology.\"\"\"\n namespace = self.namespace\n for annotation in namespace.annotation_properties():\n if namespace._has_data_triple_spod(\n s=namespace.storid, p=annotation.storid\n ):\n yield annotation\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.namespace_init","title":"namespace_init(self, world_or_ontology, base_iri, name=None)
","text":"init function for the Namespace
class.
ontopy/patch.py
def namespace_init(self, world_or_ontology, base_iri, name=None):\n \"\"\"__init__ function for the `Namespace` class.\"\"\"\n orig_namespace_init(self, world_or_ontology, base_iri, name)\n if self.name.endswith(\".ttl\"):\n self.name = self.name[:-4]\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.render_func","title":"render_func(entity)
","text":"Improve default rendering of entities.
Source code in ontopy/patch.py
def render_func(entity):\n \"\"\"Improve default rendering of entities.\"\"\"\n if hasattr(entity, \"prefLabel\") and entity.prefLabel:\n name = entity.prefLabel[0]\n elif hasattr(entity, \"label\") and entity.label:\n name = entity.label[0]\n elif hasattr(entity, \"altLabel\") and entity.altLabel:\n name = entity.altLabel[0]\n else:\n name = entity.name\n return f\"{entity.namespace.name}.{name}\"\n
"},{"location":"api_reference/ontopy/testutils/","title":"testutils","text":"Module primarly intended to be imported by tests.
It defines some directories and some utility functions that can be used with and without conftest.
"},{"location":"api_reference/ontopy/testutils/#ontopy.testutils.get_tool_module","title":"get_tool_module(name)
","text":"Imports and returns the module for the EMMOntoPy tool corresponding to name
.
ontopy/testutils.py
def get_tool_module(name):\n \"\"\"Imports and returns the module for the EMMOntoPy tool\n corresponding to `name`.\"\"\"\n if str(toolsdir) not in sys.path:\n sys.path.append(str(toolsdir))\n\n # For Python 3.4+\n spec = spec_from_loader(name, SourceFileLoader(name, str(toolsdir / name)))\n module = module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n
"},{"location":"api_reference/ontopy/utils/","title":"utils","text":"Some generic utility functions.
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.AmbiguousLabelError","title":" AmbiguousLabelError (LookupError, AttributeError, EMMOntoPyException)
","text":"Error raised when a label is ambiguous.
Source code in ontopy/utils.py
class AmbiguousLabelError(LookupError, AttributeError, EMMOntoPyException):\n \"\"\"Error raised when a label is ambiguous.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.EMMOntoPyException","title":" EMMOntoPyException (Exception)
","text":"A BaseException class for EMMOntoPy
Source code in ontopy/utils.py
class EMMOntoPyException(Exception):\n \"\"\"A BaseException class for EMMOntoPy\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.EMMOntoPyWarning","title":" EMMOntoPyWarning (Warning)
","text":"A BaseWarning class for EMMOntoPy
Source code in ontopy/utils.py
class EMMOntoPyWarning(Warning):\n \"\"\"A BaseWarning class for EMMOntoPy\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.EntityClassDefinitionError","title":" EntityClassDefinitionError (EMMOntoPyException)
","text":"Error in ThingClass definition.
Source code in ontopy/utils.py
class EntityClassDefinitionError(EMMOntoPyException):\n \"\"\"Error in ThingClass definition.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.IncompatibleVersion","title":" IncompatibleVersion (EMMOntoPyWarning)
","text":"An installed dependency version may be incompatible with a functionality of this package - or rather an outcome of a functionality. This is not critical, hence this is only a warning.
Source code in ontopy/utils.py
class IncompatibleVersion(EMMOntoPyWarning):\n \"\"\"An installed dependency version may be incompatible with a functionality\n of this package - or rather an outcome of a functionality.\n This is not critical, hence this is only a warning.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.IndividualWarning","title":" IndividualWarning (EMMOntoPyWarning)
","text":"A warning related to an individual, e.g. punning.
Source code in ontopy/utils.py
class IndividualWarning(EMMOntoPyWarning):\n \"\"\"A warning related to an individual, e.g. punning.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.LabelDefinitionError","title":" LabelDefinitionError (EMMOntoPyException)
","text":"Error in label definition.
Source code in ontopy/utils.py
class LabelDefinitionError(EMMOntoPyException):\n \"\"\"Error in label definition.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.NoSuchLabelError","title":" NoSuchLabelError (LookupError, AttributeError, EMMOntoPyException)
","text":"Error raised when a label cannot be found.
Source code in ontopy/utils.py
class NoSuchLabelError(LookupError, AttributeError, EMMOntoPyException):\n \"\"\"Error raised when a label cannot be found.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.ReadCatalogError","title":" ReadCatalogError (OSError)
","text":"Error reading catalog file.
Source code in ontopy/utils.py
class ReadCatalogError(IOError):\n \"\"\"Error reading catalog file.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.UnknownVersion","title":" UnknownVersion (EMMOntoPyException)
","text":"Cannot retrieve version from a package.
Source code in ontopy/utils.py
class UnknownVersion(EMMOntoPyException):\n \"\"\"Cannot retrieve version from a package.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.annotate_source","title":"annotate_source(onto, imported=True)
","text":"Annotate all entities with the base IRI of the ontology using rdfs:isDefinedBy
annotations.
If imported
is true, all entities in imported sub-ontologies will also be annotated.
This is contextual information that is otherwise lost when the ontology is squashed and/or inferred.
Source code inontopy/utils.py
def annotate_source(onto, imported=True):\n \"\"\"Annotate all entities with the base IRI of the ontology using\n `rdfs:isDefinedBy` annotations.\n\n If `imported` is true, all entities in imported sub-ontologies will\n also be annotated.\n\n This is contextual information that is otherwise lost when the ontology\n is squashed and/or inferred.\n \"\"\"\n source = onto._abbreviate(\n \"http://www.w3.org/2000/01/rdf-schema#isDefinedBy\"\n )\n for entity in onto.get_entities(imported=imported):\n triple = (\n entity.storid,\n source,\n onto._abbreviate(entity.namespace.ontology.base_iri),\n )\n if not onto._has_obj_triple_spo(*triple):\n onto._add_obj_triple_spo(*triple)\n
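A minimal sketch of the intended workflow, annotating sources before squashing (the output file name is hypothetical):

from ontopy import get_ontology
from ontopy.utils import annotate_source

onto = get_ontology("emmo-inferred").load()
annotate_source(onto, imported=True)   # adds rdfs:isDefinedBy to every entity
onto.save("emmo-annotated.ttl", format="turtle", squash=True)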
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.asstring","title":"asstring(expr, link='{label}', recursion_depth=0, exclude_object=False, ontology=None)
","text":"Returns a string representation of expr
.
Parameters:
Name Type Description Defaultexpr
The entity, restriction or a logical expression or these to represent.
requiredlink
A template for links. May contain the following variables: - {iri}: The full IRI of the concept. - {name}: Name-part of IRI. - {ref}: \"#{name}\" if the base iri of hte ontology has the same root as {iri}, otherwise \"{iri}\". - {label}: The label of the concept. - {lowerlabel}: The label of the concept in lower case and with spaces replaced with hyphens.
'{label}'
recursion_depth
Recursion depth. Only intended for internal use.
0
exclude_object
If true, the object will be excluded in restrictions.
False
ontology
Ontology object.
None
Returns:
Type Descriptionstr
String representation of expr
.
Source code in ontopy/utils.py
def asstring(\n expr,\n link=\"{label}\",\n recursion_depth=0,\n exclude_object=False,\n ontology=None,\n) -> str:\n \"\"\"Returns a string representation of `expr`.\n\n Arguments:\n expr: The entity, restriction or a logical expression or these\n to represent.\n link: A template for links. May contain the following variables:\n - {iri}: The full IRI of the concept.\n - {name}: Name-part of IRI.\n - {ref}: \"#{name}\" if the base iri of hte ontology has the same\n root as {iri}, otherwise \"{iri}\".\n - {label}: The label of the concept.\n - {lowerlabel}: The label of the concept in lower case and with\n spaces replaced with hyphens.\n recursion_depth: Recursion depth. Only intended for internal use.\n exclude_object: If true, the object will be excluded in restrictions.\n ontology: Ontology object.\n\n Returns:\n String representation of `expr`.\n \"\"\"\n # pylint: disable=too-many-return-statements,too-many-branches,too-many-statements\n if ontology is None:\n ontology = expr.ontology\n\n def fmt(entity):\n \"\"\"Returns the formatted label of an entity.\"\"\"\n if isinstance(entity, str):\n if ontology and ontology.world[entity]:\n iri = ontology.world[entity].iri\n elif (\n ontology\n and re.match(\"^[a-zA-Z0-9_+-]+$\", entity)\n and entity in ontology\n ):\n iri = ontology[entity].iri\n else:\n # This may not be a valid IRI, but the best we can do\n iri = entity\n label = entity\n else:\n iri = entity.iri\n label = get_label(entity)\n name = getiriname(iri)\n start = iri.split(\"#\", 1)[0] if \"#\" in iri else iri.rsplit(\"/\", 1)[0]\n ref = f\"#{name}\" if ontology.base_iri.startswith(start) else iri\n return link.format(\n entity=entity,\n name=name,\n ref=ref,\n iri=iri,\n label=label,\n lowerlabel=label.lower().replace(\" \", \"-\"),\n )\n\n if isinstance(expr, str):\n # return link.format(name=expr)\n return fmt(expr)\n if isinstance(expr, owlready2.Restriction):\n rlabel = owlready2.class_construct._restriction_type_2_label[expr.type]\n\n if isinstance(\n expr.property,\n (owlready2.ObjectPropertyClass, owlready2.DataPropertyClass),\n ):\n res = fmt(expr.property)\n elif isinstance(expr.property, owlready2.Inverse):\n string = asstring(\n expr.property.property,\n link,\n recursion_depth + 1,\n ontology=ontology,\n )\n res = f\"Inverse({string})\"\n else:\n print(\n f\"*** WARNING: unknown restriction property: {expr.property!r}\"\n )\n res = fmt(expr.property)\n\n if not rlabel:\n pass\n elif expr.type in (owlready2.MIN, owlready2.MAX, owlready2.EXACTLY):\n res += f\" {rlabel} {expr.cardinality}\"\n elif expr.type in (\n owlready2.SOME,\n owlready2.ONLY,\n owlready2.VALUE,\n owlready2.HAS_SELF,\n ):\n res += f\" {rlabel}\"\n else:\n print(\"*** WARNING: unknown relation\", expr, rlabel)\n res += f\" {rlabel}\"\n\n if not exclude_object:\n string = asstring(\n expr.value, link, recursion_depth + 1, ontology=ontology\n )\n res += (\n f\" {string!r}\" if isinstance(expr.value, str) else f\" {string}\"\n )\n return res\n if isinstance(expr, owlready2.Or):\n res = \" or \".join(\n [\n asstring(c, link, recursion_depth + 1, ontology=ontology)\n for c in expr.Classes\n ]\n )\n return res if recursion_depth == 0 else f\"({res})\"\n if isinstance(expr, owlready2.And):\n res = \" and \".join(\n [\n asstring(c, link, recursion_depth + 1, ontology=ontology)\n for c in expr.Classes\n ]\n )\n return res if recursion_depth == 0 else f\"({res})\"\n if isinstance(expr, owlready2.Not):\n string = asstring(\n expr.Class, link, recursion_depth + 1, ontology=ontology\n )\n return f\"not 
{string}\"\n if isinstance(expr, owlready2.ThingClass):\n return fmt(expr)\n if isinstance(expr, owlready2.PropertyClass):\n return fmt(expr)\n if isinstance(expr, owlready2.Thing): # instance (individual)\n return fmt(expr)\n if isinstance(expr, owlready2.class_construct.Inverse):\n return f\"inverse({fmt(expr.property)})\"\n if isinstance(expr, owlready2.disjoint.AllDisjoint):\n return fmt(expr)\n\n if isinstance(expr, (bool, int, float)):\n return repr(expr)\n # Check for subclasses\n if inspect.isclass(expr):\n if issubclass(expr, (bool, int, float, str)):\n return fmt(expr.__class__.__name__)\n if issubclass(expr, datetime.date):\n return \"date\"\n if issubclass(expr, datetime.time):\n return \"datetime\"\n if issubclass(expr, datetime.datetime):\n return \"datetime\"\n\n raise RuntimeError(f\"Unknown expression: {expr!r} (type: {type(expr)!r})\")\n
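An illustrative sketch (assuming onto is an already loaded EMMO-based ontology with an Atom class; the class name is an assumption, not part of this API):
from ontopy.utils import asstring\n\n# Render each parent (named class, restriction or logical construct) as a readable string\nfor parent in onto.Atom.is_a:\n    print(asstring(parent, link=\"{label}\", ontology=onto))\n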
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.camelsplit","title":"camelsplit(string)
","text":"Splits CamelCase string before upper case letters (except if there is a sequence of upper case letters).
Source code in ontopy/utils.py
def camelsplit(string):\n \"\"\"Splits CamelCase string before upper case letters (except\n if there is a sequence of upper case letters).\"\"\"\n if len(string) < 2:\n return string\n result = []\n prev_lower = False\n prev_isspace = True\n char = string[0]\n for next_char in string[1:]:\n if (not prev_isspace and char.isupper() and next_char.islower()) or (\n prev_lower and char.isupper()\n ):\n result.append(\" \")\n result.append(char)\n prev_lower = char.islower()\n prev_isspace = char.isspace()\n char = next_char\n result.append(char)\n return \"\".join(result)\n
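For example (a quick sketch of the expected behaviour):
from ontopy.utils import camelsplit\n\nprint(camelsplit(\"CrystalUnitCell\"))  # -> \"Crystal Unit Cell\"\nprint(camelsplit(\"EMMOBased\"))        # -> \"EMMO Based\" (runs of capitals are kept together)\n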
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.convert_imported","title":"convert_imported(input_ontology, output_ontology, input_format=None, output_format='xml', url_from_catalog=None, catalog_file='catalog-v001.xml')
","text":"Convert imported ontologies.
Store the output in a directory structure matching the source files. This requires catalog file(s) to be present.
Warning
To convert to Turtle (.ttl
) format, you must have installed rdflib>=6.0.0
. See Known issues for more information.
Parameters:
Name Type Description Defaultinput_ontology
Union[Path, str]
input ontology file name
requiredoutput_ontology
Union[Path, str]
output ontology file path. The directory part of output
will be the root of the generated directory structure
input_format
Optional[str]
input format. The default is to infer from input_ontology
None
output_format
str
output format. The default is to infer from output_ontology
'xml'
url_from_catalog
Optional[bool]
Whether to read URLs from the catalog file. If False, the catalog file will be used if it exists.
None
catalog_file
str
name of catalog file, that maps ontology IRIs to local file names
'catalog-v001.xml'
Source code in ontopy/utils.py
def convert_imported( # pylint: disable=too-many-arguments,too-many-locals\n input_ontology: \"Union[Path, str]\",\n output_ontology: \"Union[Path, str]\",\n input_format: \"Optional[str]\" = None,\n output_format: str = \"xml\",\n url_from_catalog: \"Optional[bool]\" = None,\n catalog_file: str = \"catalog-v001.xml\",\n):\n \"\"\"Convert imported ontologies.\n\n Store the output in a directory structure matching the source\n files. This require catalog file(s) to be present.\n\n Warning:\n To convert to Turtle (`.ttl`) format, you must have installed\n `rdflib>=6.0.0`. See [Known issues](../../../#known-issues) for\n more information.\n\n Args:\n input_ontology: input ontology file name\n output_ontology: output ontology file path. The directory part of\n `output` will be the root of the generated directory structure\n input_format: input format. The default is to infer from\n `input_ontology`\n output_format: output format. The default is to infer from\n `output_ontology`\n url_from_catalog: Whether to read urls form catalog file.\n If False, the catalog file will be used if it exists.\n catalog_file: name of catalog file, that maps ontology IRIs to\n local file names\n \"\"\"\n inroot = os.path.dirname(os.path.abspath(input_ontology))\n outroot = os.path.dirname(os.path.abspath(output_ontology))\n outext = os.path.splitext(output_ontology)[1]\n\n if url_from_catalog is None:\n url_from_catalog = os.path.exists(os.path.join(inroot, catalog_file))\n\n if url_from_catalog:\n iris, dirs = read_catalog(\n inroot, catalog_file=catalog_file, recursive=True, return_paths=True\n )\n\n # Create output dirs and copy catalog files\n for indir in dirs:\n outdir = os.path.normpath(\n os.path.join(outroot, os.path.relpath(indir, inroot))\n )\n if not os.path.exists(outdir):\n os.makedirs(outdir)\n with open(\n os.path.join(indir, catalog_file), mode=\"rt\", encoding=\"utf8\"\n ) as handle:\n content = handle.read()\n for path in iris.values():\n newpath = os.path.splitext(path)[0] + outext\n content = content.replace(\n os.path.basename(path), os.path.basename(newpath)\n )\n with open(\n os.path.join(outdir, catalog_file), mode=\"wt\", encoding=\"utf8\"\n ) as handle:\n handle.write(content)\n else:\n iris = {}\n\n outpaths = set()\n\n def recur(graph, outext):\n for imported in graph.objects(\n predicate=URIRef(\"http://www.w3.org/2002/07/owl#imports\")\n ):\n inpath = iris.get(str(imported), str(imported))\n if inpath.startswith((\"http://\", \"https://\", \"ftp://\")):\n outpath = os.path.join(outroot, inpath.split(\"/\")[-1])\n else:\n outpath = os.path.join(outroot, os.path.relpath(inpath, inroot))\n outpath = os.path.splitext(os.path.normpath(outpath))[0] + outext\n if outpath not in outpaths:\n outpaths.add(outpath)\n fmt = (\n input_format\n if input_format\n else guess_format(inpath, fmap=FMAP)\n )\n new_graph = Graph()\n new_graph.parse(iris.get(inpath, inpath), format=fmt)\n new_graph.serialize(destination=outpath, format=output_format)\n recur(new_graph, outext)\n\n # Write output files\n fmt = (\n input_format\n if input_format\n else guess_format(input_ontology, fmap=FMAP)\n )\n\n if not _validate_installed_version(\n package=\"rdflib\", min_version=\"6.0.0\"\n ) and (output_format == FMAP.get(\"ttl\", \"\") or outext == \"ttl\"):\n from rdflib import ( # pylint: disable=import-outside-toplevel\n __version__ as __rdflib_version__,\n )\n\n warnings.warn(\n IncompatibleVersion(\n \"To correctly convert to Turtle format, rdflib must be \"\n \"version 6.0.0 or greater, however, the detected 
rdflib \"\n \"version used by your Python interpreter is \"\n f\"{__rdflib_version__!r}. For more information see the \"\n \"'Known issues' section of the README.\"\n )\n )\n\n graph = Graph()\n try:\n graph.parse(input_ontology, format=fmt)\n except PluginException as exc: # Add input_ontology to exception msg\n raise PluginException(\n f'Cannot load \"{input_ontology}\": {exc.msg}'\n ).with_traceback(exc.__traceback__)\n graph.serialize(destination=output_ontology, format=output_format)\n recur(graph, outext)\n
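A usage sketch with hypothetical file names, converting an ontology and its imports to Turtle (the output directory is assumed to exist):
from ontopy.utils import convert_imported\n\nconvert_imported(\n    input_ontology=\"myonto.owl\",             # hypothetical input file\n    output_ontology=\"converted/myonto.ttl\",  # its directory part becomes the output root\n    output_format=\"turtle\",\n)\n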
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.copy_annotation","title":"copy_annotation(onto, src, dst)
","text":"In all classes and properties in onto
, copy annotation src
to dst
.
Parameters:
Name Type Description Defaultonto
Ontology to work on.
requiredsrc
Name of source annotation.
requireddst
Name or IRI of destination annotation. Use IRI if the destination annotation is not already in the ontology.
required Source code in ontopy/utils.py
def copy_annotation(onto, src, dst):\n \"\"\"In all classes and properties in `onto`, copy annotation `src` to `dst`.\n\n Arguments:\n onto: Ontology to work on.\n src: Name of source annotation.\n dst: Name or IRI of destination annotation. Use IRI if the\n destination annotation is not already in the ontology.\n \"\"\"\n if onto.world[src]:\n src = onto.world[src]\n else:\n src = onto[src]\n\n if onto.world[dst]:\n dst = onto.world[dst]\n elif dst in onto:\n dst = onto[dst]\n else:\n if \"://\" not in dst:\n raise ValueError(\n \"new destination annotation property must be provided as \"\n \"a full IRI\"\n )\n name = min(dst.rsplit(\"#\")[-1], dst.rsplit(\"/\")[-1], key=len)\n iri = dst\n dst = onto.new_annotation_property(name, owlready2.AnnotationProperty)\n dst.iri = iri\n\n for e in onto.get_entities():\n new = getattr(e, src.name).first()\n if new and new not in getattr(e, dst.name):\n getattr(e, dst.name).append(new)\n
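A usage sketch (the destination IRI is hypothetical; onto is assumed to be a loaded ontology that already has a prefLabel annotation property). Since the destination does not yet exist in the ontology, it must be given as a full IRI:
from ontopy.utils import copy_annotation\n\ncopy_annotation(onto, \"prefLabel\", \"https://example.com/onto#displayLabel\")\n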
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.directory_layout","title":"directory_layout(onto)
","text":"Analyse IRIs of imported ontologies and suggested a directory layout for saving recursively.
Parameters:
Name Type Description Defaultonto
Ontology to analyse.
requiredReturns:
Type Descriptionlayout
A dict mapping ontology objects to relative path names derived from the ontology IRIs. No file name extensions are added.
Examples:
Assume that our ontology onto
has IRI ex:onto
. If it directly or indirectly imports ontologies with IRIs ex:A/ontoA
, ex:B/ontoB
and ex:A/C/ontoC
, this function will return the following dict:
{\n onto: \"onto\",\n ontoA: \"A/ontoA\",\n ontoB: \"B/ontoB\",\n ontoC: \"A/C/ontoC\",\n}\n
where ontoA
, ontoB
and ontoC
are imported Ontology objects.
Source code in ontopy/utils.py
def directory_layout(onto):\n \"\"\"Analyse IRIs of imported ontologies and suggested a directory\n layout for saving recursively.\n\n Arguments:\n onto: Ontology to analyse.\n\n Returns:\n layout: A dict mapping ontology objects to relative path names\n derived from the ontology IRIs. No file name extension are\n added.\n\n Example:\n Assume that our ontology `onto` has IRI `ex:onto`. If it directly\n or indirectly imports ontologies with IRIs `ex:A/ontoA`, `ex:B/ontoB`\n and `ex:A/C/ontoC`, this function will return the following dict:\n\n {\n onto: \"onto\",\n ontoA: \"A/ontoA\",\n ontoB: \"B/ontoB\",\n ontoC: \"A/C/ontoC\",\n }\n\n where `ontoA`, `ontoB` and `ontoC` are imported Ontology objects.\n \"\"\"\n all_imported = [\n imported.base_iri for imported in onto.indirectly_imported_ontologies()\n ]\n # get protocol and domain of all imported ontologies\n namespace_roots = set()\n for iri in all_imported:\n protocol, domain, *_ = urllib.parse.urlsplit(iri)\n namespace_roots.add(\"://\".join([protocol, domain]))\n\n def recur(o):\n baseiri = o.base_iri.rstrip(\"/#\")\n\n # Some heuristics here to reproduce the EMMO layout.\n # It might not apply to all ontologies, so maybe it should be\n # made optional? Alternatively, change EMMO ontology IRIs to\n # match the directory layout.\n emmolayout = (\n any(\n oo.base_iri.startswith(baseiri + \"/\")\n for oo in o.imported_ontologies\n )\n or o.base_iri == \"http://emmo.info/emmo/mereocausality#\"\n )\n\n layout[o] = (\n baseiri + \"/\" + os.path.basename(baseiri) if emmolayout else baseiri\n )\n for imported in o.imported_ontologies:\n if imported not in layout:\n recur(imported)\n\n layout = {}\n recur(onto)\n # Strip off initial common prefix from all paths\n if len(namespace_roots) == 1:\n prefix = os.path.commonprefix(list(layout.values()))\n for o, path in layout.items():\n layout[o] = path[len(prefix) :].lstrip(\"/\")\n else:\n for o, path in layout.items():\n for namespace_root in namespace_roots:\n if path.startswith(namespace_root):\n layout[o] = (\n urllib.parse.urlsplit(namespace_root)[1]\n + path[len(namespace_root) :]\n )\n\n return layout\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.english","title":"english(string)
","text":"Returns string
as an English localised string.
Source code in ontopy/utils.py
def english(string):\n \"\"\"Returns `string` as an English location string.\"\"\"\n return owlready2.locstr(string, lang=\"en\")\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.get_format","title":"get_format(outfile, default, fmt=None)
","text":"Infer format from outfile and format.
Source code in ontopy/utils.py
def get_format(outfile: str, default: str, fmt: str = None):\n \"\"\"Infer format from outfile and format.\"\"\"\n if fmt is None:\n fmt = os.path.splitext(outfile)[1]\n if not fmt:\n fmt = default\n return fmt.lstrip(\".\")\n
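For example (a sketch of the expected behaviour):
from ontopy.utils import get_format\n\nprint(get_format(\"onto.ttl\", default=\"xml\"))  # -> \"ttl\" (inferred from the extension)\nprint(get_format(\"onto\", default=\"xml\"))      # -> \"xml\" (falls back to the default)\n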
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.get_label","title":"get_label(entity)
","text":"Returns the label of an entity.
Source code in ontopy/utils.py
def get_label(entity):\n \"\"\"Returns the label of an entity.\"\"\"\n # pylint: disable=too-many-return-statements\n if hasattr(entity, \"namespace\"):\n onto = entity.namespace.ontology\n if onto.label_annotations:\n for la in onto.label_annotations:\n try:\n label = entity[la]\n if label:\n return get_preferred_language(label)\n except (NoSuchLabelError, AttributeError, TypeError):\n continue\n if hasattr(entity, \"prefLabel\") and entity.prefLabel:\n return get_preferred_language(entity.prefLabel)\n if hasattr(entity, \"label\") and entity.label:\n return get_preferred_language(entity.label)\n if hasattr(entity, \"__name__\"):\n return entity.__name__\n if hasattr(entity, \"name\"):\n return str(entity.name)\n if isinstance(entity, str):\n return entity\n return repr(entity)\n
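A minimal sketch (assuming onto is a loaded EMMO-based ontology with an Atom class):
from ontopy.utils import get_label\n\nprint(get_label(onto.Atom))  # -> \"Atom\", taken from the configured label annotations\n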
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.get_preferred_language","title":"get_preferred_language(langstrings, lang=None)
","text":"Given a list of localised strings, return the one in language lang
. If lang
is not given, use ontopy.utils.PREFERRED_LANGUAGE
. If no match is found, return the first one with no language tag, or fall back to the first string.
The preferred language is stored as a module variable. You can change it with:
import ontopy.utils; ontopy.utils.PREFERRED_LANGUAGE = \"en\"
Source code in ontopy/utils.py
def get_preferred_language(langstrings: list, lang=None) -> str:\n \"\"\"Given a list of localised strings, return the one in language\n `lang`. If `lang` is not given, use\n `ontopy.utils.PREFERRED_LANGUAGE`. If no one match is found,\n return the first one with no language tag or fallback to the first\n string.\n\n The preferred language is stored as a module variable. You can\n change it with:\n\n >>> import ontopy.utils\n >>> ontopy.utils.PREFERRED_LANGUAGE = \"en\"\n\n \"\"\"\n if lang is None:\n lang = PREFERRED_LANGUAGE\n for langstr in langstrings:\n if hasattr(langstr, \"lang\") and langstr.lang == lang:\n return str(langstr)\n for langstr in langstrings:\n if not hasattr(langstr, \"lang\"):\n return langstr\n return str(langstrings[0])\n
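For example (a sketch using owlready2 localised strings):
from owlready2 import locstr\nfrom ontopy.utils import get_preferred_language\n\nlabels = [locstr(\"Atom\", lang=\"en\"), locstr(\"Atomo\", lang=\"it\")]\nprint(get_preferred_language(labels, lang=\"en\"))  # -> \"Atom\"\n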
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.getiriname","title":"getiriname(iri)
","text":"Return name part of an IRI.
The name part is what follows after the last slash or hash.
Source code in ontopy/utils.py
def getiriname(iri):\n \"\"\"Return name part of an IRI.\n\n The name part is what follows after the last slash or hash.\n \"\"\"\n res = urllib.parse.urlparse(iri)\n return res.fragment if res.fragment else res.path.rsplit(\"/\", 1)[-1]\n
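For example (hypothetical IRIs):
from ontopy.utils import getiriname\n\nprint(getiriname(\"https://example.com/onto#MyClass\"))  # -> \"MyClass\" (fragment)\nprint(getiriname(\"https://example.com/onto/MyClass\"))  # -> \"MyClass\" (last path segment)\n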
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.infer_version","title":"infer_version(iri, version_iri)
","text":"Infer version from IRI and versionIRI.
Source code in ontopy/utils.py
def infer_version(iri, version_iri):\n \"\"\"Infer version from IRI and versionIRI.\"\"\"\n if str(version_iri[: len(iri)]) == str(iri):\n version = version_iri[len(iri) :].lstrip(\"/\")\n else:\n j = 0\n version_parts = []\n for i, char in enumerate(iri):\n while i + j < len(version_iri) and char != version_iri[i + j]:\n version_parts.append(version_iri[i + j])\n j += 1\n version = \"\".join(version_parts).lstrip(\"/\").rstrip(\"/#\")\n\n if \"/\" in version:\n raise ValueError(\n f\"version IRI {version_iri!r} is not consistent with base IRI \"\n f\"{iri!r}\"\n )\n return version\n
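For example (hypothetical IRIs):
from ontopy.utils import infer_version\n\nprint(infer_version(\"https://example.com/onto\", \"https://example.com/onto/1.2.3\"))  # -> \"1.2.3\"\n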
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.isinteractive","title":"isinteractive()
","text":"Returns true if we are running from an interactive interpreater, false otherwise.
Source code in ontopy/utils.py
def isinteractive():\n \"\"\"Returns true if we are running from an interactive interpreater,\n false otherwise.\"\"\"\n return bool(\n hasattr(__builtins__, \"__IPYTHON__\")\n or sys.flags.interactive\n or hasattr(sys, \"ps1\")\n )\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.normalise_url","title":"normalise_url(url)
","text":"Returns url
in a normalised form.
Source code in ontopy/utils.py
def normalise_url(url):\n \"\"\"Returns `url` in a normalised form.\"\"\"\n splitted = urllib.parse.urlsplit(url)\n components = list(splitted)\n components[2] = os.path.normpath(splitted.path)\n return urllib.parse.urlunsplit(components)\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.read_catalog","title":"read_catalog(uri, catalog_file='catalog-v001.xml', baseuri=None, recursive=False, relative_to=None, return_paths=False, visited_iris=None, visited_paths=None)
","text":"Reads a Prot\u00e8g\u00e8 catalog file and returns as a dict.
The returned dict maps the ontology IRI (name) to its actual location (URI). The location can be either an absolute file path or an HTTP, HTTPS or FTP web location.
uri
is a string locating the catalog file. It may be a http or https web location or a file path.
The catalog_file
argument specifies the catalog file name and is used if path
is used when recursive
is true or when path
is a directory.
If baseuri
is not None, it will be used as the base URI for the mapped locations. Otherwise it defaults to uri
with its final component omitted.
If recursive
is true, catalog files in sub-folders are also read.
If relative_to
is given, the paths in the returned dict will be relative to this path.
If return_paths
is true, a set of directory paths to source files is returned in addition to the default dict.
The visited_iris
and visited_paths
arguments are only intended for internal use to avoid infinite recursions.
A ReadCatalogError is raised if the catalog file cannot be found.
Source code in ontopy/utils.py
def read_catalog( # pylint: disable=too-many-locals,too-many-statements,too-many-arguments\n uri,\n catalog_file=\"catalog-v001.xml\",\n baseuri=None,\n recursive=False,\n relative_to=None,\n return_paths=False,\n visited_iris=None,\n visited_paths=None,\n):\n \"\"\"Reads a Prot\u00e8g\u00e8 catalog file and returns as a dict.\n\n The returned dict maps the ontology IRI (name) to its actual\n location (URI). The location can be either an absolute file path\n or a HTTP, HTTPS or FTP web location.\n\n `uri` is a string locating the catalog file. It may be a http or\n https web location or a file path.\n\n The `catalog_file` argument spesifies the catalog file name and is\n used if `path` is used when `recursive` is true or when `path` is a\n directory.\n\n If `baseuri` is not None, it will be used as the base URI for the\n mapped locations. Otherwise it defaults to `uri` with its final\n component omitted.\n\n If `recursive` is true, catalog files in sub-folders are also read.\n\n if `relative_to` is given, the paths in the returned dict will be\n relative to this path.\n\n If `return_paths` is true, a set of directory paths to source\n files is returned in addition to the default dict.\n\n The `visited_uris` and `visited_paths` arguments are only intended for\n internal use to avoid infinite recursions.\n\n A ReadCatalogError is raised if the catalog file cannot be found.\n \"\"\"\n # pylint: disable=too-many-branches\n\n # Protocols supported by urllib.request\n web_protocols = \"http://\", \"https://\", \"ftp://\"\n uri = str(uri) # in case uri is a pathlib.Path object\n iris = visited_iris if visited_iris else {}\n dirs = visited_paths if visited_paths else set()\n if uri in iris:\n return (iris, dirs) if return_paths else iris\n\n if uri.startswith(web_protocols):\n # Call read_catalog() recursively to ensure that the temporary\n # file is properly cleaned up\n with tempfile.TemporaryDirectory() as tmpdir:\n destfile = os.path.join(tmpdir, catalog_file)\n uris = { # maps uri to base\n uri: (baseuri if baseuri else os.path.dirname(uri)),\n f'{uri.rstrip(\"/\")}/{catalog_file}': (\n baseuri if baseuri else uri.rstrip(\"/\")\n ),\n f\"{os.path.dirname(uri)}/{catalog_file}\": (\n os.path.dirname(uri)\n ),\n }\n for url, base in uris.items():\n try:\n # The URL can only contain the schemes from `web_protocols`.\n _, msg = urllib.request.urlretrieve(url, destfile) # nosec\n except urllib.request.URLError:\n continue\n else:\n if \"Content-Length\" not in msg:\n continue\n\n return read_catalog(\n destfile,\n catalog_file=catalog_file,\n baseuri=baseuri if baseuri else base,\n recursive=recursive,\n return_paths=return_paths,\n visited_iris=iris,\n visited_paths=dirs,\n )\n raise ReadCatalogError(\n \"Cannot download catalog from URLs: \" + \", \".join(uris)\n )\n elif uri.startswith(\"file://\"):\n path = uri[7:]\n else:\n path = uri\n\n if os.path.isdir(path):\n dirname = os.path.abspath(path)\n filepath = os.path.join(dirname, catalog_file)\n else:\n catalog_file = os.path.basename(path)\n filepath = os.path.abspath(path)\n dirname = os.path.dirname(filepath)\n\n def gettag(entity):\n return entity.tag.rsplit(\"}\", 1)[-1]\n\n def load_catalog(filepath):\n if not os.path.exists(filepath):\n raise ReadCatalogError(\"No such catalog file: \" + filepath)\n dirname = os.path.normpath(os.path.dirname(filepath))\n dirs.add(baseuri if baseuri else dirname)\n xml = ET.parse(filepath)\n root = xml.getroot()\n if gettag(root) != \"catalog\":\n raise ReadCatalogError(\n f\"expected root tag of catalog 
file {filepath!r} to be \"\n '\"catalog\"'\n )\n for child in root:\n if gettag(child) == \"uri\":\n load_uri(child, dirname)\n elif gettag(child) == \"group\":\n for uri in child:\n load_uri(uri, dirname)\n\n def load_uri(uri, dirname):\n if gettag(uri) != \"uri\":\n raise ValueError(f\"{gettag(uri)!r} should be 'uri'.\")\n uri_as_str = uri.attrib[\"uri\"]\n if uri_as_str.startswith(web_protocols):\n url = uri_as_str\n else:\n uri_as_str = os.path.normpath(uri_as_str)\n if baseuri and baseuri.startswith(web_protocols):\n url = f\"{baseuri}/{uri_as_str}\"\n else:\n url = os.path.join(baseuri if baseuri else dirname, uri_as_str)\n\n iris.setdefault(uri.attrib[\"name\"], url)\n if recursive:\n directory = os.path.dirname(url)\n if directory not in dirs:\n catalog = os.path.join(directory, catalog_file)\n if catalog.startswith(web_protocols):\n iris_, dirs_ = read_catalog(\n catalog,\n catalog_file=catalog_file,\n baseuri=None,\n recursive=recursive,\n return_paths=True,\n visited_iris=iris,\n visited_paths=dirs,\n )\n iris.update(iris_)\n dirs.update(dirs_)\n else:\n load_catalog(catalog)\n\n load_catalog(filepath)\n\n if relative_to:\n for iri, path in iris.items():\n iris[iri] = os.path.relpath(path, relative_to)\n\n if return_paths:\n return iris, dirs\n return iris\n
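A usage sketch (the path is hypothetical and may be either a directory containing catalog-v001.xml or the catalog file itself):
from ontopy.utils import read_catalog\n\niris = read_catalog(\"/path/to/ontology\", recursive=True)\nfor iri, location in iris.items():\n    print(iri, \"->\", location)\n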
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.rename_iris","title":"rename_iris(onto, annotation='prefLabel')
","text":"For IRIs with the given annotation, change the name of the entity to the value of the annotation. Also add an skos:exactMatch
annotation referring to the old IRI.
Source code in ontopy/utils.py
def rename_iris(onto, annotation=\"prefLabel\"):\n \"\"\"For IRIs with the given annotation, change the name of the entity\n to the value of the annotation. Also add an `skos:exactMatch`\n annotation referring to the old IRI.\n \"\"\"\n exactMatch = onto._abbreviate( # pylint:disable=invalid-name\n \"http://www.w3.org/2004/02/skos/core#exactMatch\"\n )\n for entity in onto.get_entities():\n if hasattr(entity, annotation) and getattr(entity, annotation):\n onto._add_data_triple_spod(\n entity.storid, exactMatch, entity.iri, \"\"\n )\n entity.name = getattr(entity, annotation).first()\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.write_catalog","title":"write_catalog(irimap, output='catalog-v001.xml', directory='.', relative_paths=True, append=False)
","text":"Write catalog file do disk.
Parameters:
Name Type Description Defaultirimap
dict
dict mapping ontology IRIs (name) to actual locations (URIs). It has the same format as the dict returned by read_catalog().
requiredoutput
Union[str, Path]
name of catalog file.
'catalog-v001.xml'
directory
Union[str, Path]
directory path to the catalog file. Only used if output
is a relative path.
'.'
relative_paths
bool
whether to write file paths inside the catalog as relative paths (instead of absolute paths).
True
append
bool
whether to append to a possible existing catalog file. If false, an existing file will be overwritten.
False
Source code in ontopy/utils.py
def write_catalog(\n irimap: dict,\n output: \"Union[str, Path]\" = \"catalog-v001.xml\",\n directory: \"Union[str, Path]\" = \".\",\n relative_paths: bool = True,\n append: bool = False,\n): # pylint: disable=redefined-builtin\n \"\"\"Write catalog file do disk.\n\n Args:\n irimap: dict mapping ontology IRIs (name) to actual locations\n (URIs). It has the same format as the dict returned by\n read_catalog().\n output: name of catalog file.\n directory: directory path to the catalog file. Only used if `output`\n is a relative path.\n relative_paths: whether to write file paths inside the catalog as\n relative paths (instead of absolute paths).\n append: whether to append to a possible existing catalog file.\n If false, an existing file will be overwritten.\n \"\"\"\n filename = Path(directory) / output\n\n if relative_paths:\n irimap = irimap.copy() # don't modify provided irimap\n for iri, path in irimap.items():\n if os.path.isabs(path):\n irimap[iri] = os.path.relpath(path, filename.parent)\n\n if filename.exists() and append:\n iris = read_catalog(filename)\n iris.update(irimap)\n irimap = iris\n\n res = [\n '<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>',\n '<catalog prefer=\"public\" '\n 'xmlns=\"urn:oasis:names:tc:entity:xmlns:xml:catalog\">',\n ' <group id=\"Folder Repository, directory=, recursive=true, '\n 'Auto-Update=false, version=2\" prefer=\"public\" xml:base=\"\">',\n ]\n for iri, path in irimap.items():\n res.append(f' <uri name=\"{iri}\" uri=\"{path}\"/>')\n res.append(\" </group>\")\n res.append(\"</catalog>\")\n with open(filename, \"wt\") as handle:\n handle.write(\"\\n\".join(res) + \"\\n\")\n
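A usage sketch with a hypothetical IRI-to-file mapping:
from ontopy.utils import write_catalog\n\nwrite_catalog(\n    {\"https://example.com/onto/ontoA\": \"A/ontoA.ttl\"},  # hypothetical IRI -> local file\n    output=\"catalog-v001.xml\",\n    directory=\".\",\n)\n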
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/","title":"factppgraph","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph--ontopyfactpluspluswrapperfactppgraph","title":"ontopy.factpluspluswrapper.factppgraph
","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph","title":" FaCTPPGraph
","text":"Class for running the FaCT++ reasoner (using OwlApiInterface) and postprocessing the resulting inferred ontology.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph--parameters","title":"Parameters","text":"graph : owlapi.Graph instance The graph to be inferred.
Source code in ontopy/factpluspluswrapper/factppgraph.py
class FaCTPPGraph:\n \"\"\"Class for running the FaCT++ reasoner (using OwlApiInterface) and\n postprocessing the resulting inferred ontology.\n\n Parameters\n ----------\n graph : owlapi.Graph instance\n The graph to be inferred.\n \"\"\"\n\n def __init__(self, graph):\n self.graph = graph\n self._inferred = None\n self._namespaces = None\n self._base_iri = None\n\n @property\n def inferred(self):\n \"\"\"The current inferred graph.\"\"\"\n if self._inferred is None:\n self._inferred = self.raw_inferred_graph()\n return self._inferred\n\n @property\n def base_iri(self):\n \"\"\"Base iri of inferred ontology.\"\"\"\n if self._base_iri is None:\n self._base_iri = URIRef(self.asserted_base_iri() + \"-inferred\")\n return self._base_iri\n\n @base_iri.setter\n def base_iri(self, value):\n \"\"\"Assign inferred base iri.\"\"\"\n self._base_iri = URIRef(value)\n\n @property\n def namespaces(self):\n \"\"\"Namespaces defined in the original graph.\"\"\"\n if self._namespaces is None:\n self._namespaces = dict(self.graph.namespaces()).copy()\n self._namespaces[\"\"] = self.base_iri\n return self._namespaces\n\n def asserted_base_iri(self):\n \"\"\"Returns the base iri or the original graph.\"\"\"\n return URIRef(dict(self.graph.namespaces()).get(\"\", \"\").rstrip(\"#/\"))\n\n def raw_inferred_graph(self):\n \"\"\"Returns the raw non-postprocessed inferred ontology as a rdflib\n graph.\"\"\"\n return OwlApiInterface().reason(self.graph)\n\n def inferred_graph(self):\n \"\"\"Returns the postprocessed inferred graph.\"\"\"\n self.add_base_annotations()\n self.set_namespace()\n self.clean_base()\n self.remove_nothing_is_nothing()\n self.clean_ancestors()\n return self.inferred\n\n def add_base_annotations(self):\n \"\"\"Copy base annotations from original graph to the inferred graph.\"\"\"\n base = self.base_iri\n inferred = self.inferred\n for _, predicate, obj in self.graph.triples(\n (self.asserted_base_iri(), None, None)\n ):\n if predicate == OWL.versionIRI:\n version = obj.rsplit(\"/\", 1)[-1]\n obj = URIRef(f\"{base}/{version}\")\n inferred.add((base, predicate, obj))\n\n def set_namespace(self):\n \"\"\"Override namespace of inferred graph with the namespace of the\n original graph.\n \"\"\"\n inferred = self.inferred\n for key, value in self.namespaces.items():\n inferred.namespace_manager.bind(\n key, value, override=True, replace=True\n )\n\n def clean_base(self):\n \"\"\"Remove all relations `s? 
a owl:Ontology` where `s?` is not\n `base_iri`.\n \"\"\"\n inferred = self.inferred\n for (\n subject,\n predicate,\n obj,\n ) in inferred.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n inferred.remove((subject, predicate, obj))\n inferred.add((self.base_iri, RDF.type, OWL.Ontology))\n\n def remove_nothing_is_nothing(self):\n \"\"\"Remove superfluid relation in inferred graph:\n\n owl:Nothing rdfs:subClassOf owl:Nothing\n \"\"\"\n triple = OWL.Nothing, RDFS.subClassOf, OWL.Nothing\n inferred = self.inferred\n if triple in inferred:\n inferred.remove(triple)\n\n def clean_ancestors(self):\n \"\"\"Remove redundant rdfs:subClassOf relations in inferred graph.\"\"\"\n inferred = self.inferred\n for ( # pylint: disable=too-many-nested-blocks\n subject\n ) in inferred.subjects(RDF.type, OWL.Class):\n if isinstance(subject, URIRef):\n parents = set(\n parent\n for parent in inferred.objects(subject, RDFS.subClassOf)\n if isinstance(parent, URIRef)\n )\n if len(parents) > 1:\n for parent in parents:\n ancestors = set(\n inferred.transitive_objects(parent, RDFS.subClassOf)\n )\n for entity in parents:\n if entity != parent and entity in ancestors:\n triple = subject, RDFS.subClassOf, entity\n if triple in inferred:\n inferred.remove(triple)\n
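A usage sketch (the file names are hypothetical; running the reasoner requires Java and the bundled OWLAPI jars):
from rdflib import Graph\nfrom ontopy.factpluspluswrapper.factppgraph import FaCTPPGraph\n\ngraph = Graph()\ngraph.parse(\"myonto.owl\")  # hypothetical OWL/XML file\ninferred = FaCTPPGraph(graph).inferred_graph()  # runs FaCT++ and postprocesses the result\ninferred.serialize(\"myonto-inferred.owl\", format=\"xml\")\n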
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.base_iri","title":"base_iri
property
writable
","text":"Base iri of inferred ontology.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.inferred","title":"inferred
property
readonly
","text":"The current inferred graph.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.namespaces","title":"namespaces
property
readonly
","text":"Namespaces defined in the original graph.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.add_base_annotations","title":"add_base_annotations(self)
","text":"Copy base annotations from original graph to the inferred graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def add_base_annotations(self):\n \"\"\"Copy base annotations from original graph to the inferred graph.\"\"\"\n base = self.base_iri\n inferred = self.inferred\n for _, predicate, obj in self.graph.triples(\n (self.asserted_base_iri(), None, None)\n ):\n if predicate == OWL.versionIRI:\n version = obj.rsplit(\"/\", 1)[-1]\n obj = URIRef(f\"{base}/{version}\")\n inferred.add((base, predicate, obj))\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.asserted_base_iri","title":"asserted_base_iri(self)
","text":"Returns the base iri or the original graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def asserted_base_iri(self):\n \"\"\"Returns the base iri or the original graph.\"\"\"\n return URIRef(dict(self.graph.namespaces()).get(\"\", \"\").rstrip(\"#/\"))\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.clean_ancestors","title":"clean_ancestors(self)
","text":"Remove redundant rdfs:subClassOf relations in inferred graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def clean_ancestors(self):\n \"\"\"Remove redundant rdfs:subClassOf relations in inferred graph.\"\"\"\n inferred = self.inferred\n for ( # pylint: disable=too-many-nested-blocks\n subject\n ) in inferred.subjects(RDF.type, OWL.Class):\n if isinstance(subject, URIRef):\n parents = set(\n parent\n for parent in inferred.objects(subject, RDFS.subClassOf)\n if isinstance(parent, URIRef)\n )\n if len(parents) > 1:\n for parent in parents:\n ancestors = set(\n inferred.transitive_objects(parent, RDFS.subClassOf)\n )\n for entity in parents:\n if entity != parent and entity in ancestors:\n triple = subject, RDFS.subClassOf, entity\n if triple in inferred:\n inferred.remove(triple)\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.clean_base","title":"clean_base(self)
","text":"Remove all relations s? a owl:Ontology
where s?
is not base_iri
.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def clean_base(self):\n \"\"\"Remove all relations `s? a owl:Ontology` where `s?` is not\n `base_iri`.\n \"\"\"\n inferred = self.inferred\n for (\n subject,\n predicate,\n obj,\n ) in inferred.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n inferred.remove((subject, predicate, obj))\n inferred.add((self.base_iri, RDF.type, OWL.Ontology))\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.inferred_graph","title":"inferred_graph(self)
","text":"Returns the postprocessed inferred graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def inferred_graph(self):\n \"\"\"Returns the postprocessed inferred graph.\"\"\"\n self.add_base_annotations()\n self.set_namespace()\n self.clean_base()\n self.remove_nothing_is_nothing()\n self.clean_ancestors()\n return self.inferred\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.raw_inferred_graph","title":"raw_inferred_graph(self)
","text":"Returns the raw non-postprocessed inferred ontology as a rdflib graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def raw_inferred_graph(self):\n \"\"\"Returns the raw non-postprocessed inferred ontology as a rdflib\n graph.\"\"\"\n return OwlApiInterface().reason(self.graph)\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.remove_nothing_is_nothing","title":"remove_nothing_is_nothing(self)
","text":"Remove superfluid relation in inferred graph:
owl:Nothing rdfs:subClassOf owl:Nothing
Source code in ontopy/factpluspluswrapper/factppgraph.py
def remove_nothing_is_nothing(self):\n \"\"\"Remove superfluid relation in inferred graph:\n\n owl:Nothing rdfs:subClassOf owl:Nothing\n \"\"\"\n triple = OWL.Nothing, RDFS.subClassOf, OWL.Nothing\n inferred = self.inferred\n if triple in inferred:\n inferred.remove(triple)\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.set_namespace","title":"set_namespace(self)
","text":"Override namespace of inferred graph with the namespace of the original graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def set_namespace(self):\n \"\"\"Override namespace of inferred graph with the namespace of the\n original graph.\n \"\"\"\n inferred = self.inferred\n for key, value in self.namespaces.items():\n inferred.namespace_manager.bind(\n key, value, override=True, replace=True\n )\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FactPPError","title":" FactPPError
","text":"Postprocessing error after reasoning with FaCT++.
Source code in ontopy/factpluspluswrapper/factppgraph.py
class FactPPError:\n \"\"\"Postprocessing error after reasoning with FaCT++.\"\"\"\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/","title":"owlapi_interface","text":"Python interface to the FaCT++ Reasoner.
This module is copied from the SimPhoNy project.
Original author: Matthias Urban
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface","title":" OwlApiInterface
","text":"Interface to the FaCT++ reasoner via OWLAPI.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
class OwlApiInterface:\n \"\"\"Interface to the FaCT++ reasoner via OWLAPI.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the interface.\"\"\"\n\n def reason(self, graph):\n \"\"\"Generate the inferred axioms for a given Graph.\n\n Args:\n graph (Graph): An rdflib graph to execute the reasoner on.\n\n \"\"\"\n with tempfile.NamedTemporaryFile(\"wt\") as tmpdir:\n graph.serialize(tmpdir.name, format=\"xml\")\n return self._run(tmpdir.name, command=\"--run-reasoner\")\n\n def reason_files(self, *owl_files):\n \"\"\"Merge the given owl and generate the inferred axioms.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--run-reasoner\")\n\n def merge_files(self, *owl_files):\n \"\"\"Merge the given owl files and its import closure.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--merge-only\")\n\n @staticmethod\n def _run(\n *owl_files, command, output_file=None, return_graph=True\n ) -> rdflib.Graph:\n \"\"\"Run the FaCT++ reasoner using a java command.\n\n Args:\n *owl_files (str): Path to the owl files to load.\n command (str): Either --run-reasoner or --merge-only\n output_file (str, optional): Where the output should be stored.\n Defaults to None.\n return_graph (bool, optional): Whether the result should be parsed\n and returned. Defaults to True.\n\n Returns:\n The reasoned result.\n\n \"\"\"\n java_base = os.path.abspath(\n os.path.join(os.path.dirname(__file__), \"java\")\n )\n cmd = (\n [\n \"java\",\n \"-cp\",\n java_base + \"/lib/jars/*\",\n \"-Djava.library.path=\" + java_base + \"/lib/so\",\n \"org.simphony.OntologyLoader\",\n ]\n + [command]\n + list(owl_files)\n )\n logger.info(\"Running Reasoner\")\n logger.debug(\"Command %s\", cmd)\n subprocess.run(cmd, check=True) # nosec\n\n graph = None\n if return_graph:\n graph = rdflib.Graph()\n graph.parse(RESULT_FILE)\n if output_file:\n os.rename(RESULT_FILE, output_file)\n else:\n os.remove(RESULT_FILE)\n return graph\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.__init__","title":"__init__(self)
special
","text":"Initialize the interface.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def __init__(self):\n \"\"\"Initialize the interface.\"\"\"\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.merge_files","title":"merge_files(self, *owl_files)
","text":"Merge the given owl files and its import closure.
Parameters:
Name Type Description Default*owl_files
os.path
The owl files to merge.
()
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def merge_files(self, *owl_files):\n \"\"\"Merge the given owl files and its import closure.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--merge-only\")\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.reason","title":"reason(self, graph)
","text":"Generate the inferred axioms for a given Graph.
Parameters:
Name Type Description Defaultgraph
Graph
An rdflib graph to execute the reasoner on.
required Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def reason(self, graph):\n \"\"\"Generate the inferred axioms for a given Graph.\n\n Args:\n graph (Graph): An rdflib graph to execute the reasoner on.\n\n \"\"\"\n with tempfile.NamedTemporaryFile(\"wt\") as tmpdir:\n graph.serialize(tmpdir.name, format=\"xml\")\n return self._run(tmpdir.name, command=\"--run-reasoner\")\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.reason_files","title":"reason_files(self, *owl_files)
","text":"Merge the given owl and generate the inferred axioms.
Parameters:
Name Type Description Default*owl_files
os.path
The owl files to merge.
()
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def reason_files(self, *owl_files):\n \"\"\"Merge the given owl and generate the inferred axioms.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--run-reasoner\")\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.reason_from_terminal","title":"reason_from_terminal()
","text":"Run the reasoner from terminal.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def reason_from_terminal():\n \"\"\"Run the reasoner from terminal.\"\"\"\n parser = argparse.ArgumentParser(\n description=\"Run the FaCT++ reasoner on the given OWL file. \"\n \"Catalog files are used to load the import closure. \"\n \"Then the reasoner is executed and the inferred triples are merged \"\n \"with the asserted ones. If multiple OWL files are given, they are \"\n \"merged beforehand\"\n )\n parser.add_argument(\n \"owl_file\", nargs=\"+\", help=\"OWL file(s) to run the reasoner on.\"\n )\n parser.add_argument(\"output_file\", help=\"Path to store inferred axioms to.\")\n\n args = parser.parse_args()\n OwlApiInterface()._run( # pylint: disable=protected-access\n *args.owl_file,\n command=\"--run-reasoner\",\n return_graph=False,\n output_file=args.output_file,\n )\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/","title":"sync_factpp","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/#ontopy.factpluspluswrapper.sync_factpp--ontopyfactpluspluswrappersyncfatpp","title":"ontopy.factpluspluswrapper.syncfatpp
","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/#ontopy.factpluspluswrapper.sync_factpp.sync_reasoner_factpp","title":"sync_reasoner_factpp(ontology_or_world=None, infer_property_values=False, debug=1)
","text":"Run FaCT++ reasoner and load the inferred relations back into the owlready2 triplestore.
"},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/#ontopy.factpluspluswrapper.sync_factpp.sync_reasoner_factpp--parameters","title":"Parameters","text":"ontology_or_world : None | Ontology instance | World instance | list Identifies the world to run the reasoner over. infer_property_values : bool Whether to also infer property values. debug : bool Whether to print debug info to standard output.
Source code in ontopy/factpluspluswrapper/sync_factpp.py
def sync_reasoner_factpp(\n ontology_or_world=None, infer_property_values=False, debug=1\n):\n \"\"\"Run FaCT++ reasoner and load the inferred relations back into\n the owlready2 triplestore.\n\n Parameters\n ----------\n ontology_or_world : None | Ontology instance | World instance | list\n Identifies the world to run the reasoner over.\n infer_property_values : bool\n Whether to also infer property values.\n debug : bool\n Whether to print debug info to standard output.\n \"\"\"\n # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n if isinstance(ontology_or_world, World):\n world = ontology_or_world\n elif isinstance(ontology_or_world, Ontology):\n world = ontology_or_world.world\n elif isinstance(ontology_or_world, Sequence):\n world = ontology_or_world[0].world\n else:\n world = owlready2.default_world\n\n if isinstance(ontology_or_world, Ontology):\n ontology = ontology_or_world\n elif CURRENT_NAMESPACES.get():\n ontology = CURRENT_NAMESPACES.get()[-1].ontology\n else:\n ontology = world.get_ontology(_INFERRENCES_ONTOLOGY)\n\n locked = world.graph.has_write_lock()\n if locked:\n world.graph.release_write_lock() # Not needed during reasoning\n\n try:\n if debug:\n print(\"*** Prepare graph\")\n # Exclude owl:imports because they are not needed and can\n # cause trouble when loading the inferred ontology\n graph1 = rdflib.Graph()\n for subject, predicate, obj in world.as_rdflib_graph().triples(\n (None, None, None)\n ):\n if predicate != OWL.imports:\n graph1.add((subject, predicate, obj))\n\n if debug:\n print(\"*** Run FaCT++ reasoner (and postprocess)\")\n graph2 = FaCTPPGraph(graph1).inferred_graph()\n\n if debug:\n print(\"*** Load inferred ontology\")\n # Check all rdfs:subClassOf relations in the inferred graph and add\n # them to the world if they are missing\n new_parents = defaultdict(list)\n new_equivs = defaultdict(list)\n entity_2_type = {}\n\n for (\n subject,\n predicate,\n obj,\n ) in graph2.triples( # pylint: disable=not-an-iterable\n (None, None, None)\n ):\n if (\n isinstance(subject, URIRef)\n and predicate in OWL_2_TYPE\n and isinstance(obj, URIRef)\n ):\n s_storid = ontology._abbreviate(str(subject), False)\n p_storid = ontology._abbreviate(str(predicate), False)\n o_storid = ontology._abbreviate(str(obj), False)\n if (\n s_storid is not None\n and p_storid is not None\n and o_storid is not None\n ):\n if predicate in (\n RDFS.subClassOf,\n RDFS.subPropertyOf,\n RDF.type,\n ):\n new_parents[s_storid].append(o_storid)\n entity_2_type[s_storid] = OWL_2_TYPE[predicate]\n else:\n new_equivs[s_storid].append(o_storid)\n entity_2_type[s_storid] = OWL_2_TYPE[predicate]\n\n if infer_property_values:\n inferred_obj_relations = []\n # Hmm, does FaCT++ infer any property values?\n # If not, remove the `infer_property_values` keyword argument.\n raise NotImplementedError\n\n finally:\n if locked:\n world.graph.acquire_write_lock() # re-lock when applying results\n\n if debug:\n print(\"*** Applying reasoning results\")\n\n _apply_reasoning_results(\n world, ontology, debug, new_parents, new_equivs, entity_2_type\n )\n if infer_property_values:\n _apply_inferred_obj_relations(\n world, ontology, debug, inferred_obj_relations\n )\n
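A usage sketch (the ontology IRI is hypothetical; FaCT++ reasoning requires Java):
from ontopy import get_ontology\nfrom ontopy.factpluspluswrapper.sync_factpp import sync_reasoner_factpp\n\nonto = get_ontology(\"https://example.com/myonto.ttl\").load()  # hypothetical IRI\nsync_reasoner_factpp(onto)  # inferred subclass/equivalence relations are loaded back into the world\n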
"},{"location":"demo/","title":"EMMO use cases","text":"This demo contains two use cases on how EMMO can be used to achieve vertical and horizontal interpoerability, respectivily.
Warning
This demonstration is still a work in progress. In particular, the documentation is lacking.
"},{"location":"demo/#content","title":"Content","text":"Horizontal interoperability is about interoperability between different types of models and codes for a single material (i.e., one use case, multiple models).
The key here is to show how to map between EMMO (or an EMMO-based ontology) and another ontology (possibly EMMO-based).
In this example we use a data-driven approach based on a C implementation of SOFT [1,2].
This is done in four steps:
Generate metadata from the EMMO-based user case ontology.
Implemented in the script step1_generate_metadata.py.
Define metadata for an application developed independently of EMMO.
In this case a metadata description of the ASE Atoms class [3] is created in atoms.json
.
Implemented in the script step2_define_metadata.py.
Instantiate the metadata defined in step 2 with an atomistic interface structure.
Implemented in the script step3_instantiate.py.
Map the atomistic interface structure from the application representation to the common EMMO-based representation.
Implemented in the script step4_map_instance.py.
Essentially, this demonstration shows how EMMO can be extended and how external data can be mapped into our extended ontology (serving as a common representational system).
"},{"location":"demo/horizontal/#requirements-for-running-the-user-case","title":"Requirements for running the user case","text":"In addition to emmo, this demo also requires:
Vertical interoperability is about interoperability across two or more granularity levels.
In this use case we study the welded interface between an aluminium and a steel plate at three granularity levels. In this case, the granularity levels correspond to three different length scales, which we here denote the component, microstructure and atomistic scales.
"},{"location":"demo/vertical/#creating-an-emmo-based-user-case-ontology","title":"Creating an EMMO-based user case ontology","text":"The script define_ontology.py uses the Python API for EMMO to generate an application ontology extending EMMO with additional concepts needed to describe the data that is exchanged between scales. The user case ontology can then be visualised with the script plot_ontology.py.
"},{"location":"demo/vertical/#defining-the-needed-material-entities","title":"Defining the needed material entities","text":""},{"location":"demo/vertical/#assigning-properties-to-material-entities","title":"Assigning properties to material entities","text":"Note that we here also assign properties to e-bonded_atom
, even though e-bonded_atom
is defined in EMMO.
We choose here to consistently use SI units for all scales (even though at the atomistic scale units like \u00c5ngstr\u00f6m and electron volt are more commonly used).
"},{"location":"demo/vertical/#assigning-types-to-properties","title":"Assigning types to properties","text":"In order to be able to generate metadata and to describe the actual data transferred between scales, we also need to define types.
"},{"location":"demo/vertical/#the-new-application-ontology","title":"The new application-ontology","text":"The final plot shows the user case ontology in context of EMMO.
"},{"location":"developers/release-instructions/","title":"Steps for creating a new release","text":"Create a release on GitHub with a short release description.
Ensure you add a # <version number>
title to the description.
Set the tag to the version number prefixed with \"v\"
and title to the version number as explained above.
Ensure the GitHub Action CD workflows run as expected.
The workflow failed
If something is wrong and the workflow fails before publishing the package to PyPI, make sure to remove all traces of the release and tag, fix the bug, and try again.
If something is wrong and the workflow fails after publishing the package to PyPI: DO NOT REMOVE THE RELEASE OR TAG !
Deployment of the documentation should (in theory) be the only thing that has failed. This can be deployed manually using similar steps as in the workflow.
"},{"location":"developers/setup/","title":"Development environment","text":"This section outlines some suggestions as well as conventions used by the EMMOntoPy developers, which should be considered or followed if one wants to contribute to the package.
"},{"location":"developers/setup/#setup","title":"Setup","text":"Requirements
This section expects you to be running on a Unix-like system (e.g., Linux) with at least Python 3.7.
"},{"location":"developers/setup/#virtual-environment","title":"Virtual environment","text":"Since development can be messy, it is good to separate the development environment from the rest of your system's environment.
To do this, you can use a virtual environment. There are several different ways to create a virtual environment, but we recommend using either virtualenv
or venv
.
Virtual environment considerations
There are several different virtual environment setups; here we only address a few.
A great resource for an overview can be found in this StackOverflow answer. However, note that in the end the choice of solution is subjective, and one is not necessarily \"better\" than another.
virtualenv
(recommended)venv
To install virtualenv
+virtualenvwrapper
run:
$ pip install virtualenvwrapper\n
There is some additional setup, most of which only needs to be run once. For more information about this, see the virtualenvwrapper
documentation.
After successfully setting up virtualenv
through virtualenvwrapper
, you can create a new virtual environment:
$ mkproject -p python3.7 emmo-python\n
Note
If you do not have Python 3.7 installed (or instead want to use your system's default Python version), you can leave out the extra -p python3.7
argument. Or you can choose to use another version of Python by changing this argument to another (valid) python interpreter.
Then, if the virtual environment has not been activated automatically (you should see the name emmo-python
in a parenthesis in your console), you can run:
$ workon emmo-python\n
Tip
You can quickly see a list of all your virtual environments by writing workon
and pressing Tab twice.
To deactivate the virtual environment, returning to the system/global environment again, run:
(emmo-python) $ deactivate\n
venv
is a built-in package in Python, which works similar to virtualenv
, but with fewer capabilities.
To create a new virtual environment with venv
, first go to the directory, where you desire to keep your virtual environment. Then run the venv
module using the Python interpreter you wish to use in the virtual environment. For Python 3.7 this would look like the following:
$ python3.7 -m venv emmo-python\n
A folder with the name emmo-python
containing the environment is created.
To activate the environment run:
$ source ./emmo-python/bin/activate\n
or
$ source /path/to/emmo-python/bin/activate\n
You should now see the name emmo-python
in a parenthesis in your console, letting you know you have activated and are currently using the emmo-python
virtual environment.
To deactivate the virtual environment, returning to the system/global environment again, run:
(emmo-python) $ deactivate\n
Expectation
From here on, all commands expect you to have activated your virtual environment, if you are using one, unless stated otherwise.
"},{"location":"developers/setup/#installation","title":"Installation","text":"To install the package, please do not install from PyPI. Instead you should clone the repository from GitHub:
$ git clone https://github.com/emmo-repo/EMMOntoPy.git\n
or, if you are using an SSH connection to GitHub, you can instead clone via:
$ git clone git@github.com:emmo-repo/EMMOntoPy.git\n
Then enter into the newly cloned EMMOntoPy
directory (cd EMMOntoPy
) and run:
$ pip install -U -e .[dev]\n$ pre-commit install\n
This will install the EMMOntoPy Python package, including all dependencies and requirements for building and serving (locally) the documentation and running unit tests.
The second line installs the pre-commit
hooks defined in the .pre-commit-config.yaml
file. pre-commit
is a tool that runs immediately prior to you creating new commits (git commit
), and checks all the changes, automatically updates the API reference in the documentation and much more. Mainly, it helps to ensure that the package stays nicely formatted, safe, and user-friendly for developers.
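The hooks normally run automatically when you commit, but you can also invoke them manually on the whole code base, which can be a useful sanity check right after installing them:
$ pre-commit run --all-files\n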
There are a few non-Python dependencies that EMMOntoPy relies on as well. These can be installed by running (on a Debian system):
$ sudo apt-get update && sudo apt-get install -y graphviz openjdk-11-jre-headless\n
If you are not on a Debian-based system (i.e. not Debian, Ubuntu, ...), please check which package manager you are using and find packages for graphviz
and openjdk
, minimum version 11.
It is good practice to test the integrity of the installation and that all necessary dependencies are correctly installed.
You can run unit tests, to check the integrity of the Python functionality, by running:
$ pytest\n
If all has installed and is running correctly, you should not have any failures, but perhaps some warnings (deprecation warnings) in the test summary.
"},{"location":"developers/testing/","title":"Testing and tooling","text":""},{"location":"developers/testing/#unit-testing","title":"Unit testing","text":"The PyTest framework is used for testing the EMMOntoPy package. It is a unit testing framework with a plugin system, sporting an extensive plugin library as well as a sound fixture injection system.
To run the tests locally install the package with the dev
extra (see the developer's setup guide) and run:
$ pytest\n=== test session starts ===\n...\n
To understand what options you have, run pytest --help
.
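For example, two generally useful options are -x (stop at the first failing test) and -k (select tests whose names match an expression); the expression below is only an illustration:
$ pytest -x -k ontology\n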
Several tools are used to maintain the package, keeping it secure, readable, and easing maintenance.
"},{"location":"developers/testing/#mypy","title":"Mypy","text":"Mypy is a static type checker for Python.
Documentation: mypy.readthedocs.io
Traces of this tool can be found in the code, especially through the typing.TYPE_CHECKING
boolean variable, which is used in the following way:
from typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import List\n
Since TYPE_CHECKING
is False
at runtime, the if
-block will not be run when the script or module is executed or imported. However, when Mypy performs its static type checking, it analyses these blocks, considering TYPE_CHECKING
to be True
(see the typing.TYPE_CHECKING
section in the Mypy documentation).
This means the imports in the if
-block are meant to be used only for static typing, helping developers to understand the intention of the code as well as letting Mypy check that the invoked methods make sense.
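As a small illustration (the function below is hypothetical and not taken from the EMMOntoPy code base), a type imported under TYPE_CHECKING can still be referenced in annotations by quoting it:
from typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n    # Only needed by Mypy; never imported at runtime.\n    from typing import List\n\ndef get_labels(entities) -> 'List[str]':\n    # The quoted annotation keeps the module importable at runtime,\n    # even though List is only imported during type checking.\n    return [str(entity) for entity in entities]\n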
This directory contains the needed templates, introductory text and figures for generating the full EMMO documentation using ontodoc
. Since the introduction is written in markdown, pandoc is required for both pdf and html generation.
For a standalone html documentation including all inferred relations, enter this directory and run:
ontodoc --template=emmo.md --format=html emmo-inferred emmo.html\n
Pandoc options may be adjusted with the files pandoc-options.yaml and pandoc-html-options.yaml.
Similarly, for generating pdf documentation, enter this directory and run:
ontodoc --template=emmo.md emmo-inferred emmo.pdf\n
By default, we have configured pandoc to use xelatex for better unicode support. It is possible to change these settings in pandoc-options.yaml and pandoc-pdf-options.yaml.
"},{"location":"examples/emmodoc/#content-of-this-directory","title":"Content of this directory","text":""},{"location":"examples/emmodoc/#ontodoc-templates-with-introductory-text-and-document-layout","title":"ontodoc
templates with introductory text and document layout","text":"pandoc
configuration files","text":"For simple html documentation, you can skip all input files and simply run ontodoc
as
ontodoc --format=simple-html YOUR_ONTO.owl YOUR_ONTO.html\n
It is also possible to include ontodoc templates using the --template
option for adding additional information and structuring the document. In this case the template may only contain ontodoc
pre-processor directives and inline html, but not markdown.
In order to produce output in pdf (or any other output format supported by pandoc), you can write your ontodoc
template in markdown (with ontodoc
pre-processor directives) and follow these steps to get started:
Copy the pandoc-
configuration files to a new directory. Change input-files
to the name of your new yaml metadata file. Change logo
to the path of your logo (or remove it). Change titlegraphic
to the path of your title figure (or remove it). Write your own ontodoc
template files with additional information about your ontology and document layout. That should be it. Good luck!
"},{"location":"examples/emmodoc/classes/","title":"Classes","text":"%% %% This file %% This is Markdown file, except of lines starting with %% will %% be stripped off. %%
%HEADER \"EMMO Classes\" level=1
emmo is a class representing the collection of all the individuals (signs) that are used in the ontology. Individuals are declared by the EMMO users when they want to apply the EMMO to represent the world.
%BRANCHHEAD EMMO The root of all classes used to represent the world. It has two children; collection and item.
collection is the class representing the collection of all the individuals (signs) that represent a collection of non-connected real world objects.
item is the class that collects all the individuals that are members of a set (it's the most comprehensive set individual). It is the branch of mereotopology.
%% - based on has_part mereological relation that can be axiomatically defined %% - a fusion is the sum of its parts (e.g. a car is made of several %% mechanical parts, an molecule is made of nuclei and electrons) %% - a fusion is of the same entity type as its parts (e.g. a physical %% entity is made of physical entities parts) %% - a fusion can be partitioned in more than one way %BRANCH EMMO
%BRANCHDOC Elementary %BRANCHDOC Perspective
%BRANCHDOC Holistic %BRANCHDOC Semiotics %BRANCHDOC Sign %BRANCHDOC Interpreter %BRANCHDOC Object %BRANCHDOC Conventional %BRANCHDOC Property %BRANCHDOC Icon %BRANCHDOC Process
%BRANCHDOC Perceptual %BRANCHDOC Graphical %BRANCHDOC Geometrical %BRANCHDOC Symbol %BRANCHDOC Mathematical %BRANCHDOC MathematicalSymbol %BRANCHDOC MathematicalModel %BRANCHDOC MathematicalOperator %BRANCHDOC Metrological %BRANCHDOC PhysicalDimension rankdir=RL %BRANCHDOC PhysicalQuantity %BRANCHDOC Number %BRANCHDOC MeasurementUnit %BRANCHDOC UTF8 %BRANCHDOC SIBaseUnit %BRANCHDOC SISpecialUnit rankdir=RL %BRANCHDOC PrefixedUnit %BRANCHDOC MetricPrefix rankdir=RL %BRANCHDOC Quantity %BRANCHDOC BaseQuantity %BRANCHDOC DerivedQuantity rankdir=RL %BRANCHDOC PhysicalConstant
%BRANCHDOC Reductionistic %BRANCHDOC Expression
%BRANCHDOC Physicalistic %BRANCHDOC ElementaryParticle
"},{"location":"examples/emmodoc/classes/#branchdoc-subatomic","title":"%BRANCHDOC Subatomic","text":"%BRANCHDOC Matter %BRANCHDOC Fluid %BRANCHDOC Mixture %BRANCHDOC StateOfMatter
"},{"location":"examples/emmodoc/emmo/","title":"Emmo","text":"%% %% This is the main Markdown input file for the EMMO documentation. %% %% Lines starting with a % are pre-processor directives. %%
%INCLUDE introduction.md
%INCLUDE relations.md
%INCLUDE classes.md
%HEADER Individuals level=1 %ALL individuals
%HEADER Appendix level=1
%HEADER \"The complete taxonomy of EMMO relations\" level=2 %BRANCHFIG EMMORelation caption='The complete taxonomy of EMMO relations.' terminated=0 relations=all edgelabels=0
%HEADER \"The taxonomy of EMMO classes\" level=2 %BRANCHFIG EMMO caption='The almost complete taxonomy of EMMO classes. Only physical quantities and constants are left out.' terminated=0 relations=isA edgelabels=0 leaves=PhysicalDimension,BaseQuantity,DerivedQuantity,ExactConstant,MeasuredConstant,SIBaseUnit,SISpecialUnit,MetricPrefix,UTF8
"},{"location":"examples/emmodoc/important_concepts/","title":"Important concepts","text":""},{"location":"examples/emmodoc/important_concepts/#important-concepts","title":"Important concepts","text":""},{"location":"examples/emmodoc/important_concepts/#mereotopological-composition","title":"Mereotopological composition","text":""},{"location":"examples/emmodoc/important_concepts/#substrate","title":"Substrate","text":"A substrate
represents the place (in a general sense) in which every real world item exists. It provides the dimensions of existence for real world entities. This follows from the fact that everything that exists is placed somewhere in space and time. Hence, its space and time coordinates can be used to identify it.
Substrates are always topologically connected spaces. A topological space, X, is said to be disconnected if it is the union of two disjoint non-empty open sets. Otherwise, X is said to be connected.
substrate
is the superclass of space
, time
and their combinations, like spacetime
.
Following Kant, space and time are a priori forms of intuition, i.e. they are the substrate upon which we place our intuitions, assigning space and time coordinates to them.
"},{"location":"examples/emmodoc/important_concepts/#hybrid","title":"Hybrid","text":"A hybrid
is the combination of space
and time
. It has the subclasses world_line
(0D space + 1D time), world_sheet
(1D space + 1D time), world_volume
(2D space + 1D time) and spacetime
(3D space + 1D time).
EMMO represents real world entities as subclasses of spacetime
. A spacetime
is valid for all reference systems (as required by the theory of relativity).
matter
is used to represent a group of elementary
in an enclosing spacetime
. As illustrated in the figure, a matter
is an elementary
or a composition of other matter
and vacuum
.
In EMMO matter
is always a 4D spacetime. This is a fundamental difference between EMMO and most other ontologies.
In order to describe the real world, we must also take into account the vacuum between the elementaries that compose a higher granularity level entity (e.g. an atom).
In EMMO vacuum
is defined as a spacetime
that has no elementary
parts.
An existent
is defined as a matter
that unfolds in time as a succession of states. It is used to represent the whole life of a complex but structured state-changing matter
entity, like e.g. an atom that becomes ionised and then recombines with an electron.
On the contrary, a matter and not existent
entity is something \"amorphous\", randomly collected and not classifiable by common terms or definitions. That is a heterogeneous heap of elementary
, appearing and disappearing in time.
A state
is matter in a particular configurational state. It is defined as having spatial direct parts that persist (do not change) throughout the lifetime of the state
. Hence, a state
is like a snapshot of a physical in a finite time interval.
The use of spatial direct parthood in the definition of state
means that a state
cannot overlap in space with another state
.
An important feature of states, that follows from the fact that they are spacetime
, is that they constitute a finite time interval.
The basic assumption of decomposition in EMMO, is that the most basic manifestation of matter
is represented by a subclass of spacetime
called elementary
.
The elementary
class defines the \"atomic\" (undividable) level in EMMO. A generic matter
can always be decomposed in proper parts down to the elementary
level using proper parthood. An elementary
can still be decomposed in temporal parts, that are themselves elementary
.
Example of elementaries are electrons, photons and quarks.
"},{"location":"examples/emmodoc/important_concepts/#granularity-direct-parthood","title":"Granularity - direct parthood","text":"Granularity is a central concept of EMMO, which allows the user to percieve the world at different levels of detail (granularity) that follow physics and materials science perspectives.
Every material in EMMO is placed on a granularity level and the ontology gives information about the direct upper and direct lower level classes. This is done with the non-transitive is_direct_part_of
relation.
Granularity is a defined class and is useful since a reasoner can automatically put the individuals defined by the user under a generic class that clearly expresses the types of its compositional parts.
"},{"location":"examples/emmodoc/important_concepts/#mathematical-entities","title":"Mathematical entities","text":"The class mathematical_entity
represents fundamental elements of mathematical expressions, like numbers, variables, unknowns and equations. Mathematical entities are purely mathematical and have no physical unit.
A natural_law
is an abstraction for a series of experiments that tries to define a common cause and effect of the time evolution of a set of interacting participants. It is (by definition) a pre-mathematical entity.
The natural_law
class is defined as
is_abstraction_for some experiment\n
It can be represented e.g. as a thought in the mind of the experimentalist, a sketch and textual description in a book of science.
physical_law
and material_law
are, according to the RoMM and CWA, the laws behind physical equations and material relations, respectively.
Properties are abstracts that are related to a specific material entity with the relation has_property, but that depend on a specific observation process, participated in by a specific observer, who catches the physical entity behaviour that is abstracted as a property.
Properties enable us to connect a measured property to the measurement process and the measurement instrument.
"},{"location":"examples/emmodoc/introduction/","title":"Introduction","text":"EMMO is a multidisciplinary effort to develop a standard representational framework (the ontology) based on current materials modelling knowledge, including physical sciences, analytical philosophy and information and communication technologies. This multidisciplinarity is illustrated by the figure on the title page. It provides the connection between the physical world, materials characterisation world and materials modelling world.
EMMO is based on and is consistent with the Review of Materials Modelling, CEN Workshop Agreement and MODA template. However, while these efforts are written for humans, EMMO is defined using the Web Ontology Language (OWL), which is machine readable and allows for machine reasoning. In terms of semantic representation, EMMO brings everything to a much higher level than these foundations.
As illustrated in the figure below, EMMO covers all aspects of materials modelling and characterisation, including:
EMMO is released under the Creative Commons license and is available at emmo.info/. The OWL2-DL sources are available in RDF/XML format.
"},{"location":"examples/emmodoc/introduction/#what-is-an-ontology","title":"What is an ontology","text":"In short, an ontology is a specification of a conceptualization. The word ontology has a long history in philosophy, in which it refers to the subject of existence. The so-called ontological argument for the existence of God was proposed by Anselm of Canterbury in 1078. He defined God as \"that than which nothing greater can be thought\", and argued that \"if the greatest possible being exists in the mind, it must also exist in reality. If it only exists in the mind, then an even greater being must be possible -- one which exists both in the mind and in reality\". Even though this example has little to do with todays use of ontologies in e.g. computer science, it illustrates the basic idea; the ontology defines some basic premises (concepts and relations between them) from which it is possible reason to gain new knowledge.
For a more elaborated and modern definition of the ontology we refer the reader to the one provided by Tom Gruber (2009). Another useful introduction to ontologies is the paper Ontology Development 101: A Guide to Creating Your First Ontology by Noy and McGuinness (2001), which is based on the Protégé software, with which EMMO has been developed.
A taxonomy is a hierarchical representation of classes and subclasses connected via is_a
relations. Hence, it is a subset of the ontology excluding all but the is_a
relations. The main use of taxonomies is for the organisation of classifications. The figure shows a simple example of a taxonomy illustrating a categorisation of four classes into a hierarchy of increasingly higher levels of generality.
In EMMO, the taxonomy is a rooted directed acyclic graph (DAG). This is important since many classification methods rely on this property, see e.g. Valentini (2014) and Robison et al (2015). Note that the fact that EMMO is a DAG does not prevent some classes from having more than one parent. A Variable
is for instance both a Mathematical
and a Symbol
. See appendix for the full EMMO taxonomy.
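A minimal sketch of how this can be inspected with EMMOntoPy (get_emmo downloads the pre-inferred EMMO, so a network connection is needed, and the exact list of parents depends on the EMMO version):
from emmopy import get_emmo\n\nemmo = get_emmo()\n# Access by label: the class is looked up via its skos:prefLabel.\n# is_a lists the direct superclasses, so a DAG node may show several parents.\nprint(emmo.Variable.is_a)\n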
Individuals are the basic, \"ground level\" components of EMMO. They may include concrete objects such as cars, flowers, stars, persons and molecules, as well as abstract individuals such as a measured height, a specific equation and software programs.
Individuals possess attributes in the form of axioms that are defined by the user (interpreter) upon declaration.
"},{"location":"examples/emmodoc/introduction/#classes","title":"Classes","text":"Classes represent concepts. They are the building blocks that we use to create an ontology as a representation of knowledge. We distinguish between defined and non-defined classes.
Defined classes are defined by the requirements for being a member of the class. In the graphical representations of EMMO, defined classes are orange. For instance, in the graph of the top-level entity branch below, the root EMMO
is a defined class (defined to be the disjoint union of Item
and Collection
).
Non-defined classes are defined as an abstract group of objects, whose members are defined as belonging to the class. They are yellow in the graphical representations.
%BRANCHFIG EMMO leaves=Perspective,Elementary caption='Example of the top-level branch of EMMO showing some classes and relationships between them.' width=460
"},{"location":"examples/emmodoc/introduction/#axioms","title":"Axioms","text":"Axioms are propositions in a logical framework that define the relations between the individuals and classes. They are used to categorise individuals in classes and to define the defined classes.
The simplest form of a class axiom is a class description that just states the existence of the class and gives it a unique identifier. In order to provide more knowledge about the class, class axioms typically contain additional components that state necessary and/or sufficient characteristics of the class. OWL contains three language constructs for combining class descriptions into class axioms:
Subclass (rdfs:subClassOf
) allows one to say that the class extension of a class description is a subset of the class extension of another class description.
Equivalence (owl:equivalentClass
) allows one to say that a class description has exactly the same class extension (i.e. the individuals associated with the class) as another class description.
Disjointness (owl:disjointWith
) allows one to say that the class extension of a class description has no members in common with the class extension of another class description.
See the section about Description logic for more information about these language constructs. Axioms are also used to define relations between relations. These are further detailed in the chapter on Relations.
"},{"location":"examples/emmodoc/introduction/#theoretical-foundations","title":"Theoretical foundations","text":"EMMO build upon several theoretical frameworks.
"},{"location":"examples/emmodoc/introduction/#semiotics","title":"Semiotics","text":"Semiotics is the study of meaning-making. It is the dicipline of formulating something that possibly can exist in a defined space and time in the real world.
%%It is introdused in EMMO via the %%semion
class and used as a way to reduce the complexity of a %%physical to a simple sign (symbol). A Sign
is a physical %%entity that can represent another object. %% %%### Set theory %%Set theory is the theory of membership. This is introduced via %%the set
class, representing the collection of all individuals %%(signs) that represent a collection of items. Sets are defined %%via the hasMember
relations.
Mereotopology is the combination of mereology (science of parthood) and topology (mathematical study of the geometrical properties and conservation through deformations). It is introduced via the Item
class and based on the mereotopological
relations. Items in EMMO are always topologically connected in space and time. EMMO makes a strong distinction between membership and parthood relations. In contrast to collections, items can only have parts that are themselves items. For further information, see Casati and Varzi \"Parts and Places\" (1999).
EMMO is strongly based on physics, with the aim of being able to describe all aspects and all domains of physics, from quantum mechanics to continuum, engineering, chemistry, etc. EMMO is compatible with both the De Broglie - Bohm and the Copenhagen interpretation of quantum mechanics (see Physical
for more comments).
EMMO defines a physics-based parthood hierarchy under Physical
by introducing the following concepts (illustrated in the figure below):
Elementary
is the fundamental, non-divisible constituent of entities. In EMMO, elementaries are based on the standard model of physics.
State
is a Physical
whose parts do not change during its lifetime (at the chosen level of granularity). This is consistent with a state within e.g. thermodynamics.
Existent
is a succession of states.
Metrology is the science of measurements. It introduces units and links them to properties. The description of metrology in EMMO is based on the standards of International System of Quantities (ISQ) and International System of Units (SI).
"},{"location":"examples/emmodoc/introduction/#description-logic","title":"Description logic","text":"Description logic (DL) is a formal knowledge representation language in which the axioms are expressed. It is less expressive than first-order logic (FOL), but commonly used for providing the logical formalism for ontologies and semantic web. EMMO is expressed in the Web Ontology Language (OWL), which in turn is based on DL. This brings along features like reasoning.
Since it is essential to have a basic notion of OWL and DL, we include here a very brief overview. For a proper introduction to OWL and DL, we refer the reader to sources like Grau et.al. (2008), OWL2 Primer and OWL Reference.
OWL distinguishes between six types of class descriptions:
owl:oneOf
);owl:someValuesFrom
, owl:allValuesFrom
, owl:hasValue
, owl:cardinality
, owl:minCardinality
, owl:maxCardinality
);owl:intersectionOf
);owl:unionOf
); andowl:complementOf
).Except for the first, all of these refer to defined classes. The table below shows the notation in OWL, DL and the Manchester OWL syntax, all commonly used for the definitions. The Manchester syntax is used by Protege and is designed to not use DL symbols and to be easy and quick to read and write. Several other syntaxes exist for DL. An interesting example is the pure Python syntax proposed by Lamy (2017), which is used in the open source Owlready2 Python package. The Python API for EMMO is also based on Owlready2.
| DL | Manchester | Python + Owlready2 | Read | Meaning |
| --- | --- | --- | --- | --- |
| **Constants** | | | | |
| $\top$ | Thing | top | | A special class with every individual as an instance |
| $\bot$ | Nothing | bottom | | The empty class |
| **Axioms** | | | | |
| $A\doteq B$ | | | A is defined to be equal to B | Class definition |
| $A\sqsubseteq B$ | A subclass_of B | class A(B): ... / issubclass(A, B) | all A are B / test for inclusion | Class inclusion |
| $A\equiv B$ | A equivalent_to B | A.equivalent_to.append(B) / B in A.equivalent_to | A is equivalent to B / test for equivalence | Class equivalence |
| $a:A$ | a is_a A | a = A() / isinstance(a, A) | a is a A / test for instance of | Class assertion (instantiation) |
| $(a,b):R$ | a object property assertion b | a.R.append(b) | a is R-related to b | Property assertion |
| $(a,n):R$ | a data property assertion n | a.R.append(n) | a is R-related to n | Data assertion |
| **Constructions** | | | | |
| $A\sqcap B$ | A and B | A & B | A and B | Class intersection (conjunction) |
| $A\sqcup B$ | A or B | A \| B | A or B | Class union (disjunction) |
| $\lnot A$ | not A | Not(A) | not A | Class complement (negation) |
| $\{a, b, ...\}$ | {a, b, ...} | OneOf([a, b, ...]) | one of a, b, ... | Class enumeration |
| $S\equiv R^-$ | S inverse_of R | Inverse(R) / S.inverse == R | S is inverse of R / test for inverse | Property inverse |
| $\forall R.A$ | R only A | R.only(A) | all A with R | Universal restriction |
| $\exists R.A$ | R some A | R.some(A) | some A with R | Existential restriction |
| $=n\, R.A$ | R exactly n A | R.exactly(n, A) | | Cardinality restriction |
| $\geq n\, R.A$ | R min n A | R.min(n, A) | | Minimum cardinality restriction |
| $\leq n\, R.A$ | R max n A | R.max(n, A) | | Maximum cardinality restriction |
| $\exists R.\{a\}$ | R value a | R.value(a) | | Value restriction |
| **Decompositions** | | | | |
| $A\sqcap B\sqsubseteq\bot$ | A disjoint with B | AllDisjoint([A, B]) / B in A.disjoints() | A disjoint with B / test for disjointness | Disjointness |
| $\exists R.\top\sqsubseteq A$ | R domain A | R.domain = [A] | | Property domain (classes that the relation applies to) |
| $\top\sqsubseteq\forall R.B$ | R range B | R.range = [B] | | Property range (classes that can be the value of the relation) |
Table: Notation for DL and Protégé. A and B are classes, R is an active relation, S is a passive relation, a and b are individuals and n is a literal. Inspired by the Great table of Description Logics.
"},{"location":"examples/emmodoc/introduction/#examples","title":"Examples","text":"Here are some examples of different class descriptions using both the DL and Manchester notation.
"},{"location":"examples/emmodoc/introduction/#equivalence-owlequivalentto","title":"Equivalence (owl:equivalentTo
)","text":"Equivalence ($\\equiv$) defines necessary and sufficient conditions.
Parent is equivalent to mother or father
DL: parent
$\\equiv$ mother
$\\lor$ father
Manchester: parent equivalent_to mother or father
rdf:subclassOf
)","text":"Inclusion ($\\sqsubseteq$) defines necessary conditions.
An employee is a person.
DL: employee
$\\sqsubseteq$ person
Manchester: employee is_a person
owl:oneOf
)","text":"The color of a wine is either white, rose or red:
DL: wine_color
$\\equiv$ {white
, rose
, red
}
Manchester: wine_color equivalent_to {white, rose, red}
owl:someValuesFrom
)","text":"A mother is a woman that has a child (some person):
DL: mother
$\\equiv$ woman
$\\sqcap$ $\\exists$has_child
.person
Manchester: mother equivalent_to woman and has_child some person
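As a sketch, the same definition can also be written in the Owlready2 Python notation used in the table above; the ontology IRI and the class and property names below are made up to mirror the example, not taken from EMMO:
from owlready2 import Thing, get_ontology\n\nonto = get_ontology('http://example.com/family.owl')  # hypothetical IRI\n\nwith onto:\n    class person(Thing):\n        pass\n\n    class woman(person):\n        pass\n\n    class has_child(person >> person):\n        pass\n\n    class mother(woman):\n        # 'woman and has_child some person' in Manchester syntax\n        equivalent_to = [woman & has_child.some(person)]\n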
owl:allValuesFrom
)","text":"All parents that only have daughters:
DL: parents_with_only_daughters
$\\equiv$ person
$\\sqcap$ $\\forall$has_child
.woman
Manchester: parents_with_only_daughters equivalent_to person and has_child only woman
owl:hasValue
)","text":"The owl:hasValue restriction allows to define classes based on the existence of particular property values. There must be at least one matching property value.
All children of Mary:
DL: Marys_children
$\\equiv$ person
$\\sqcap$ $\\exists$has_parent
.{Mary
}
Manchester: Marys_children equivalent_to person and has_parent value Mary
owl:cardinality
)","text":"The owl:cardinality restrictions ($\\geq$, $\\leq$ or $\\equiv$) allow to define classes based on the maximum (owl:maxCardinality), minimum (owl:minCardinality) or exact (owl:cardinality) number of occurences.
A person with one parent:
DL: half_orphant
$\\equiv$ person
and =1has_parent
.person
Manchester: half_orphant equivalent_to person and has_parent exactly 1 person
owl:intersectionOf
)","text":"Individuals of the intersection ($\\sqcap$) of two classes, are simultaneously instances of both classes.
A man is a person that is male:
DL: man
$\\equiv$ person
$\\sqcap$ male
Manchester: man equivalent_to person and male
owl:unionOf
)","text":"Individuals of the union ($\\sqcup$) of two classes, are either instances of one or both classes.
A person is a man or woman:
DL: person
$\\equiv$ man
$\\sqcup$ woman
Manchester: person equivalent_to man or woman
owl:complementOf
)","text":"Individuals of the complement ($\\lnot$) of a class, are all individuals that are not member of the class.
Not a man:
DL: female
$\\equiv$ $\\lnot$ male
Manchester: female equivalent_to not male
The EMMO ontology is structured in shells, expressed by specific ontology fragments, that extend from fundamental concepts to the application domains, following the dependency flow.
"},{"location":"examples/emmodoc/introduction/#top-level","title":"Top Level","text":"The EMMO top level is the group of fundamental axioms that constitute the philosophical foundation of the EMMO. Adopting a physicalistic/nominalistic perspective, the EMMO defines real world objects as 4D objects that are always extended in space and time (i.e. real world objects cannot be spaceless nor timeless). For this reason abstract objects, i.e. objects that does not extend in space and time, are forbidden in the EMMO.
EMMO is strongly based on the analytical philosophy discipline of semiotics. The role of abstract objects is in EMMO fulfilled by semiotic objects, i.e. real world objects (e.g. a symbol or sign) that stand for other real world objects that are to be interpreted by an agent. These symbols appear in actions (semiotic processes) meant to communicate meaning by establishing relationships between symbols (signs).
Another important building block from analytical philosophy is atomistic mereology applied to 4D objects. The EMMO calls it 'quantum mereology', since there is an epistemological limit to how finely we can resolve space and time due to the uncertainty principle.
The mereotopology module introduces the fundamental mereotopological concepts and their relations with the real world objects that they represent. The EMMO uses mereotopology as the ground for all the subsequent ontology modules. The concept of topological connection is used to define the first distinction between ontology entities namely the Item and Collection classes. Items are causally self-connected objects, while collections are causally disconnected. Quantum mereology is represented by the Quantum class. This module introduces also the fundamental mereotopological relations used to distinguish between space and time dimensions.
The physical module defines the Physical objects and the concept of Void that plays a fundamental role in the description of multiscale objects and quantum systems. It also defines the Elementary class, that restricts mereological atomism in space.
In EMMO, the only univocally defined real world object is the Item individual called Universe that stands for the universe. Every other real world object is a composition of elementaries up to the most comprehensive object; the Universe. Intermediate objects are not univocally defined, but their definition is provided according to some specific philosophical perspectives. This is an expression of reductionism (i.e. objects are made of sub-objects) and epistemological pluralism (i.e. objects are always defined according to the perspective of an interpreter, or a class of interpreters).
The Perspective class collects the different ways to represent the objects that populate the conceptual region between the elementary and universe levels.
"},{"location":"examples/emmodoc/introduction/#middle-level","title":"Middle Level","text":"The middle level ontologies act as roots for extending the EMMO towards specific application domains.
The Reductionistic perspective class uses the fundamental non-transitive parthood relation, called direct parthood, to provide a powerful granularity description of multiscale real world objects. The EMMO can in principle represent the Universe with direct parthood relations as a direct rooted tree up to its elementary constituents.
The Phenomenic perspective class introduces the concept of real world objects that express a recognisable pattern in space or time that impresses the user. Under this class the EMMO categorises e.g. formal languages, pictures, geometry, mathematics and sounds. Phenomenic objects can be used in a semiotic process as signs.
The Physicalistic perspective class introduces the concept of real world objects that have a meaning under the applied physics perspective.
The Holistic perspective class introduces the concept of real world objects that unfold in time in a way that has a meaning for the EMMO user, through the definition of the classes Process and Participant. The semiotics module introduces the concepts of semiotics and the Semiosis process that has a Sign, an Object and an Interpreter as participants. This forms the basis in EMMO to represent e.g. models, formal languages, theories, information and properties.
"},{"location":"examples/emmodoc/introduction/#emmo-relations","title":"EMMO relations","text":"All EMMO relations are subrelations of the relations found in the two roots: mereotopological and semiotical. The relation hierarchy extends more vertically (i.e. more subrelations) than horizontally (i.e. less sibling relations), facilitating the categorisation and inferencing of individuals. See also the chapter EMMO Relations.
Imposing all relations to fall under mereotopology or semiotics is how the EMMO forces the developers to respect its perspectives. Two entities are related only by contact or parthood (mereotopology) or by standing one for another (semiosis): no other types of relation are possible within the EMMO.
A unique feature in EMMO is the introduction of direct parthood. As illustrated in the figure below, it is a mereological relation that lacks transitivity. This makes it possible to describe entities made of parts at different levels of granularity and to go between granularity levels in a well-defined manner. This is paramount for cross scale interoperability. Every material in EMMO is placed on a granularity level and the ontology gives information about the direct upper and direct lower level classes using the non-transitive direct parthood relations.
"},{"location":"examples/emmodoc/introduction/#annotations","title":"Annotations","text":"All entities and relations in EMMO have some attributes, called annotations. In some cases, only the required International Resource Identifier (IRI) and relations are provided. However, descriptive annotations, like elucidation and comment, are planned to be added for all classes and relations. Possible annotations are:
%%### Graphs %%The generated graphs borrow some syntax from the Unified Modelling %%Language (UML), which is a general purpose language for software %%design and modelling. The table below shows the style used for the %%different types of relations and the concept they correspond to in %%UML. %% %%Relation UML arrow UML concept %%------------- ----------- ----------- %%is-a ![img][isa] inheritance %%disjoint_with ![img][djw] association %%equivalent_to ![img][eqt] association %%encloses ![img][rel] aggregation %%has_abstract_part ![img][rel] aggregation %%has_abstraction ![img][rel] aggregation %%has_representation ![img][rel] aggregation %%has_member ![img][rel] aggregation %%has_property ![img][rel] aggregation %% %%Table: Notation for arrow styles used in the graphs. Only active %%relations are listed. Corresponding passive relations use the same %%style. %% %%[isa]: figs/arrow-is_a.png \"inheritance\" %%[djw]: figs/arrow-disjoint_with.png \"association\" %%[eqt]: figs/arrow-equivalent_to.png \"association\" %%[rel]: figs/arrow-relation.png \"aggregation\"
%%All relationships have a direction. In the graphical visualisations, %%the relationships are represented with an arrow pointing from the %%subject to the object. In order to reduce clutter and limit the size %%of the graphs, the relations are abbreviated according to the %%following table: %% %%Relation Abbreviation %%-------- ------------ %%has_part only hp-o %%is_part_of only ipo-o %%has_member some hm-s %%is_member_of some imo-s %%has_abstraction some ha-s %%is_abstraction_of some iao-s %%has_abstract_part only pap-o %%is_abstract_part_of only iapo-o %%has_space_slice some hss-s %%is_space_slice_of some isso-s %%has_time_slice some hts-s %%is_time_slice_of some itso-s %%has_projection some hp-s %%is_projection_of some ipo-s %%has_proper_part some hpp-s %%is_proper_part_of some ippo-s %%has_proper_part_of some hppo-s %%has_spatial_direct_part min hsdp-m %%has_spatial_direct_part some hsdp-s %%has_spatial_direct_part exactly hsdp-e %% %%Table: Abbriviations of relations used in the graphical representation %%of the different subbranches. %% %% %%UML represents classes as a box with three compartments; names, attributes %%and operators. However, since the classes in EMMO have no operators and %%since it gives little meaning to include the OWL annotations as attributes, %%we simply represent the classes as boxes by a name. %% %%As already mentioned, defined classes are colored orange, while %%undefined classes are yellow. %% %% %%
"},{"location":"examples/emmodoc/relations/","title":"Relations","text":"%% %% This file %% This is Markdown file, except of lines starting with %% will %% be stripped off. %%
%HEADER \"EMMO Relations\" level=1
In the language of OWL, relations are called properties. However, since relations describe relations between classes and individuals and since properties has an other meaning in EMMO, we only call them relations.
Resource Description Framework (RDF) is a W3C standard that is widely used for describing information on the web and is one of the standards that OWL builds on. RDF expresses information in the form of subject-predicate-object triplets. The subject and object are resources (aka items to describe) and the predicate expresses a relationship between the subject and the object.
In OWL, the subject and object are classes or individuals (or data), while the predicate is a relation. An example of a relationship is the statement dog is_a animal. Here dog
is the subject, is_a
the predicate and animal
the object.
%%We distinguish between %%active relations
where the subject is acting on the object and %%passive relations
where the subject is acted on by the object.
OWL distinguishes between object properties, that link classes or individuals to classes or individuals, and data properties that link individuals to data values. Since EMMO only deals with classes, we will only be discussing object properties. However, in actual simulation or characterisation applications built on EMMO, datatype properties will be important.
The characteristics of the different properties are described by the following property axioms (a small Owlready2 sketch of some of them is given after the list):
rdf:subPropertyOf
is used to define that a property is a subproperty of some other property. For instance, in the figure below showing the relation branch, we see that active_relation
is a subproperty of relation
. The rdf:subPropertyOf
axioms form a taxonomy-like tree for relations.
owl:equivalentProperty
states that two properties have the same property extension.
owl:inverseOf
axioms relate active relations to their corresponding passive relations, and vice versa. The root relation relation
is its own inverse.
owl:FunctionalProperty
is a property that can have only one (unique) value y for each instance x, i.e. there cannot be two distinct values y1 and y2 such that the pairs (x,y1) and (x,y2) are both instances of this property. Both object properties and datatype properties can be declared as \"functional\".
owl:InverseFunctionalProperty
.
owl:TransitiveProperty
states that if a pair (x,y) is an instance of P, and the pair (y,z) is instance of P, then we can infer that the pair (x,z) is also an instance of P.
owl:SymmetricProperty
states that if the pair (x,y) is an instance of P, then the pair (y,x) is also an instance of P. A popular example of a symmetric property is the siblingOf
relation.
rdfs:domain
specifies which classes the property applies to. Or said differently, the valid values of the subject in a subject-predicate-object triplet.
rdfs:range
specifies the property extension, i.e. the valid values of the object in a subject-predicate-object triplet.
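As a rough sketch of how a few of these axioms look in the Owlready2 Python notation (the ontology IRI and the class and property names are made up for illustration):
from owlready2 import Thing, FunctionalProperty, TransitiveProperty, get_ontology\n\nonto = get_ontology('http://example.com/demo.owl')  # hypothetical IRI\n\nwith onto:\n    class Item(Thing):\n        pass\n\n    class hasPart(Item >> Item, TransitiveProperty):\n        pass  # owl:TransitiveProperty\n\n    class isPartOf(Item >> Item):\n        inverse_property = hasPart  # owl:inverseOf\n\n    class hasMainPart(Item >> Item, FunctionalProperty):\n        pass  # owl:FunctionalProperty: at most one value per subject\n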
%HEADER \"Root of EMMO relations\" level=2 %BRANCHFIG EMMORelation caption=\"Top-level of the EMMO relation hierarchy.\" %ENTITY EMMORelation
"},{"location":"examples/emmodoc/relations/#branchdoc-mereotopological","title":"%%BRANCHDOC mereotopological","text":""},{"location":"examples/emmodoc/relations/#branchhead-mereotopological","title":"%BRANCHHEAD mereotopological","text":""},{"location":"examples/emmodoc/relations/#branch-mereotopological","title":"%BRANCH mereotopological","text":""},{"location":"examples/emmodoc/relations/#branchdoc-connected","title":"%BRANCHDOC connected","text":"%BRANCHDOC hasPart
%BRANCHDOC semiotical
"},{"location":"examples/jupyter-visualization/","title":"Visualise an ontology using pyctoscape in Jupyter Notebook","text":""},{"location":"examples/jupyter-visualization/#installation-instructions","title":"Installation instructions","text":"In a terminal, run:
cd /path/to/env/dirs\npython -m venv cytopy # cytopy is my name, you can choose what you want\nsource cytopy/bin/activate\ncd /dir/to/EMMOntoPy/\npip install -e .\npip install jupyterlab\npython -m ipykernel install --user --name=cytopy\npip install ipywidgets\npip install nodejs # Note: requires that node.js and npm have already been installed!\npip install ipycytoscape pydotplus networkx\npip install --upgrade setuptools\njupyter labextension install @jupyter-widgets/jupyterlab-manager\n
"},{"location":"examples/jupyter-visualization/#test-the-notebook","title":"Test the notebook","text":"In a terminal, run:
jupyter-lab\n
That should start the Jupyter kernel and open a new tab in your browser. In the side pane, select team40.ipynb
and run the notebook.
This directory contains an example xlsx-file for how to document ontology entities (classes, object properties, annotation properties and data properties) in an Excel workbook. This workbook can then be used to generate a new ontology or update an already existing ontology with new entities (existing entities are not updated).
Please refer to the [documentation](https://emmo-repo.github.io/EMMOntoPy/latest/api_reference/ontopy/excelparser/) for a full explanation of capabilities.
The file tool/onto.xlsx
contains examples of how to do things correctly as well as incorrectly. The tool will by default exit without generating the ontology if it detects concepts defined incorrectly. However, if the argument force is set to True, it will skip concepts that are erroneously defined and generate the ontology with what is available.
To run the tool directly
cd tool # Since the excel file provides a relative path to an imported ontology\nexcel2onto onto.xlsx # This will fail\nexcel2onto --force onto.xlsx\n
We suggest developing your Excel sheet without errors, as once it starts getting big it is difficult to see what is wrong or correct. It is also possible to generate the ontology in Python. Look at the script make_onto.py for an example.
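A rough sketch of what such a script might look like; the import path follows the excelparser API reference linked above, but the exact signature and return values are assumptions, so check the reference before relying on it:
from ontopy.excelparser import create_ontology_from_excel  # see the API reference\n\n# Assumed call: parse the workbook and get back the ontology together with a\n# catalog of IRIs and a list of concepts that could not be parsed.\nonto, catalog, errors = create_ontology_from_excel('onto.xlsx')\nonto.save('onto.ttl', format='turtle')\n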
That should be it. Good luck!
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#emmontopy","title":"EMMOntoPy","text":"Library for representing and working with ontologies in Python.
EMMOntoPy is a Python package based on the excellent Owlready2, which provides a natural and intuitive representation of ontologies in Python. EMMOntoPy extends Owlready2 and adds additional functionality, like accessing entities by label, reasoning with FaCT++ and parsing logical expressions in Manchester syntax. It also includes a set of tools, like creating an ontology from an Excel sheet, generation of reference documentation of ontologies and visualisation of ontologies graphically. EMMOntoPy is freely available for on GitHub and on PyPI under the permissive open source BSD 3-Clause license.
EMMOntoPy was originally developed to work effectively with the Elemental Multiperspective Material Ontology (EMMO) and EMMO-based domain ontologies. It has now two sub-packages, ontopy
and emmopy
, where ontopy
is a general package to work with any OWL ontology, while emmopy
provides extra features that are specific to EMMO.
Owlready2, and thereby also EMMOntoPy, represents OWL classes and individuals in Python as classes and instances. OWL properties are represented as Python attributes. Hence, it provides a new dot notation for representing ontologies as valid Python code. The notation is simple and easy to understand and write for people with some knowledge of OWL and Python. Since Python is a versatile programming language, Owlready2 does not only allow for representation of OWL ontologies, but also to work with them programmatically, including interpretation, modification and generation. Some of the additional features provided by EMMOntoPy are are listed below:
"},{"location":"#access-by-label","title":"Access by label","text":"In Owlready2 ontological entities, like classes, properties and individuals are accessed by the name-part of their IRI (i.e. everything that follows after the final slash or hash in the IRI). This is very inconvenient for ontologies like EMMO or Wikidata, that identify ontological entities by long numerical names. For instance, the name-part of the IRI of the Atom class in EMMO is \u2018EMMO_eb77076b_a104_42ac_a065_798b2d2809ad\u2019, which is neither human readable nor easy to write. EMMOntoPy allows to access the entity via its label (or rather skos:prefLabel) \u2018Atom\u2019, which is much more user friendly.
"},{"location":"#turtle-serialisationdeserialisation","title":"Turtle serialisation/deserialisation","text":"The Terse RDF Triple Language (Turtle) is a common syntax and file format for representing ontologies. EMMOntoPy adds support for reading and writing ontologies in turtle format.
"},{"location":"#fact-reasoning","title":"FaCT++ reasoning","text":"Owlready2 has only support for reasoning with HermiT and Pellet. EMMOntoPy adds additional support for the fast tableaux-based [FaCT++ reasoner] for description logics.
"},{"location":"#manchester-syntax","title":"Manchester syntax","text":"Even though the Owlready2 dot notation is clear and easy to read and understand for people who know Python, it is a new syntax that may look foreign for people that are used to working with Prot\u00e9g\u00e9. EMMOntoPy provides support to parse and serialise logical expressions in Manchester syntax, making it possible to create tools that will be much more familiar to work with for people used to working with Prot\u00e9g\u00e9.
"},{"location":"#visualisation","title":"Visualisation","text":"EMMOntoPy provides a Python module for graphical visualisation of ontologies. This module allows to graphically represent not only the taxonomy, but also restrictions and logical constructs. The classes to include in the graph, can either be specified manually or inferred from the taxonomy (like all subclasses of a give class that are not a subclass of any class in a set of other classes).
"},{"location":"#tools","title":"Tools","text":"EMMOntoPy includes a small set of command-line tools implemented as Python scripts: - ontoconvert
: Converts ontologies between different file formats. It also supports some additional transformation during conversion, like running a reasoner, merging several ontological modules together (squashing), rename IRIs, generate catalogue file and automatic annotation of entities with their source IRI. - ontograph
: Versatile tool for visualising (parts of) an ontology, utilising the visualisation features mentioned above. - ontodoc
: Documents an ontology. - excel2onto
: Generate an EMMO-based ontology from an excel file. It is useful for domain experts with limited knowledge of ontologies and that are not used to tools like Prot\u00e9g\u00e9. - ontoversion
: Prints ontology version number. - emmocheck
: A small test framework for checking the consistency of EMMO and EMMO-based domain ontologies and whether they conform to the EMMO conventions.
ttl
), and more).ontodoc
: A dedicated command line tool for this. You find it in the tools/ sub directory.Matter
in EMMO utilizing the emmopy.get_emmo
function:In [1]: from emmopy import get_emmo\n\nIn [2]: emmo = get_emmo()\n\nIn [3]: emmo.Matter\nOut[3]: physicalistic.Matter\n\nIn [4]: emmo.Matter.is_a\nOut[4]:\n[physicalistic.Physicalistic,\n physical.Physical,\n mereotopology.hasPart.some(physicalistic.Massive),\n physical.hasTemporalPart.only(physicalistic.Matter)]\n
"},{"location":"#documentation-and-examples","title":"Documentation and examples","text":"The Owlready2 documentation is a good starting point. The EMMOntoPy package also has its own dedicated documentation.
This includes a few examples and demos:
demo/vertical shows an example of how EMMO may be used to achieve vertical interoperability. The file define-ontology.py provides a good example for how an EMMO-based application ontology can be defined in Python.
demo/horizontal shows an example of how EMMO may be used to achieve horizontal interoperability. This demo also shows how you can use EMMOntoPy to represent your ontology with the low-level metadata framework DLite. In addition to achieve interoperability, as shown in the demo, DLite also allow you to automatically generate C or Fortran code base on your ontology.
examples/emmodoc shows how the documentation of EMMO is generated using the ontodoc
tool.
Install with:
pip install EMMOntoPy\n
"},{"location":"#required-dependencies","title":"Required Dependencies","text":"ontodoc
).pdfLaTeX or XeLaTeX and the upgreek
LaTeX package (included in texlive-was
on RetHat-based distributions and texlive-latex-extra
on Ubuntu) for generation of pdf documentation. If your ontology contains exotic unicode characters, we recommend XeLaTeX.
Java. Needed for reasoning.
Optional Python packages:
emmocheck
.emmocheck
.ontoversion
-tool.ontoversion
-tool.See docker-instructions.md for how to build a docker image.
"},{"location":"#known-issues","title":"Known issues","text":"ontoconvert
may produce invalid turtle output (if your ontology contains real literals using scientific notation without a dot in the mantissa). This issue was fixed after the release of rdflib 5.0.0. Hence, install the latest rdflib from PyPI (pip install --upgrade rdflib
) or directly from the source code repository: GitHub if you need to serialise to turtle.EMMOntoPy is maintained by EMMC-ASBL. It has mainly been developed by SINTEF, specifically:
The EMMC-ASBL organization takes on the efforts of continuing and expanding on the efforts of the CSA. - MarketPlace; Grant Agreement No: 760173 - OntoTrans; Grant Agreement No: 862136 - BIG-MAP; Grant Agreement No: 957189 - OpenModel; Grant Agreement No: 953167
"},{"location":"CHANGELOG/","title":"Changelog","text":""},{"location":"CHANGELOG/#unreleased-changes-2024-05-29","title":"Unreleased changes (2024-05-29)","text":"Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
"},{"location":"CHANGELOG/#v0701-2024-02-29","title":"v0.7.0.1 (2024-02-29)","text":"Full Changelog
Closed issues:
Merged pull requests:
yield from
#720 (jesper-friis)Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
"},{"location":"CHANGELOG/#v0532-2023-06-15","title":"v0.5.3.2 (2023-06-15)","text":"Full Changelog
Merged pull requests:
Full Changelog
"},{"location":"CHANGELOG/#v0531-2023-06-12","title":"v0.5.3.1 (2023-06-12)","text":"Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Fixed bugs:
Closed issues:
Merged pull requests:
is_defined
into a ThingClass property and improved its documentation. #597 (jesper-friis)Full Changelog
Fixed bugs:
Merged pull requests:
Full Changelog
Fixed bugs:
LegacyVersion
does not exist in packaging.version
#540is_instance_of
property to be iterable #506images/material.png
#495Closed issues:
Merged pull requests:
Full Changelog
Fixed bugs:
bandit
failing #478Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Fixed bugs:
Closed issues:
Merged pull requests:
Full Changelog
Implemented enhancements:
pre-commit
#243Ontology
#228Fixed bugs:
rdflib
import #306get_triples()
method #280Closed issues:
Merged pull requests:
ID!
type instead of String!
#375 (CasperWA)pre-commit
& various tools #245 (CasperWA)Full Changelog
"},{"location":"CHANGELOG/#v012-2021-10-27","title":"v0.1.2 (2021-10-27)","text":"Full Changelog
"},{"location":"CHANGELOG/#v011-2021-10-27","title":"v0.1.1 (2021-10-27)","text":"Full Changelog
"},{"location":"CHANGELOG/#v010-2021-10-27","title":"v0.1.0 (2021-10-27)","text":"Full Changelog
Implemented enhancements:
collections
#236Fixed bugs:
Closed issues:
factpluspluswrapper
folders #213mike
for versioned documentation #197Merged pull requests:
packaging
to list of requirements #256 (CasperWA)collections.abc
when possible #240 (CasperWA)__init__.py
files for FaCT++ wrapper (again) #221 (CasperWA)Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Fixed bugs:
Closed issues:
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Implemented enhancements:
Closed issues:
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Merged pull requests:
Full Changelog
Implemented enhancements:
Merged pull requests:
Full Changelog
"},{"location":"CHANGELOG/#v100-alpha-2-2020-01-11","title":"v1.0.0-alpha-2 (2020-01-11)","text":"Full Changelog
"},{"location":"CHANGELOG/#v100-alpha-1-2020-01-11","title":"v1.0.0-alpha-1 (2020-01-11)","text":"Full Changelog
Closed issues:
Full Changelog
Closed issues:
Merged pull requests:
Full Changelog
Closed issues:
Merged pull requests:
* This Changelog was automatically generated by github_changelog_generator
"},{"location":"LICENSE/","title":"LICENSE","text":"Copyright 2019-2022 SINTEF
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"},{"location":"docker-instructions/","title":"EMMOntoPy Docker","text":""},{"location":"docker-instructions/#clone-project","title":"Clone project","text":"git clone git@github.com:emmo-repo/EMMOntoPy.git\n
"},{"location":"docker-instructions/#build-docker-image","title":"Build Docker image","text":"cd EMMOntoPy\ndocker build -t emmo .\n
"},{"location":"docker-instructions/#run-docker-container","title":"Run Docker container","text":"docker run -it emmo\n
"},{"location":"docker-instructions/#notes","title":"Notes","text":"sync_reasoner
). Append --memory=2GB
to docker run
in order to align the memory limit with the Java runtime environment.It is recommended to instead use the FaCT++ reaonser (now default).
To build a Docker image from mount.Dockerfile (used below to mount the local EMMOntoPy directory into the container):
docker build -t emmomount -f mount.Dockerfile .\n
"},{"location":"docker-instructions/#run-docker-container-mountdockerfile","title":"Run Docker container (mount.Dockerfile)","text":"In a unix terminal (Linux)
docker run --rm -it -v $(pwd):/home/user/EMMOntoPy emmomount\n
In PowerShell (Windows 10):
docker run --rm -it -v ${PWD}:/home/user/EMMOntoPy emmomount\n
To install the EMMOntoPy package inside the container:
cd EMMOntoPy\npip install .\n
"},{"location":"docker-instructions/#notes-on-mounting-on-windows","title":"Notes on mounting on Windows","text":"Allow for mounting of C: in Docker (as administrator). Docker (rightclick in system tray) -> Settings -> Shared Drives -> tick of C -> Apply.
Run the following command in PowerShell:
Set-NetConnectionProfile -interfacealias \"vEthernet (DockerNAT)\" -NetworkCategory Private\n
Content: emmocheck, ontoversion, ontograph, ontodoc, ontoconvert, excel2onto
"},{"location":"tools-instructions/#emmocheck","title":"emmocheck
","text":"Tool for checking that ontologies conform to EMMO conventions.
"},{"location":"tools-instructions/#usage","title":"Usage","text":"emmocheck [options] iri\n
"},{"location":"tools-instructions/#options","title":"Options","text":"positional arguments:\n iri File name or URI to the ontology to test.\n\noptional arguments:\n -h, --help show this help message and exit\n --database FILENAME, -d FILENAME\n Load ontology from Owlready2 sqlite3 database. The\n `iri` argument should in this case be the IRI of the\n ontology you want to check.\n --local, -l Load imported ontologies locally. Their paths are\n specified in Prot\u00e9g\u00e9 catalog files or via the --path\n option. The IRI should be a file name.\n --catalog-file CATALOG_FILE\n Name of Prot\u00e9g\u00e9 catalog file in the same folder as the\n ontology. This option is used together with --local\n and defaults to \"catalog-v001.xml\".\n --path PATH Paths where imported ontologies can be found. May be\n provided as a comma-separated string and/or with\n multiple --path options.\n --check-imported, -i Whether to check imported ontologies.\n --verbose, -v Verbosity level.\n --configfile CONFIGFILE, -c CONFIGFILE\n A yaml file with additional test configurations.\n --skip, -s ShellPattern\n Shell pattern matching tests to skip. This option may be\n provided multiple times.\n --url-from-catalog, -u\n Get url from catalog file.\n --ignore-namespace, -n\n Namespace to be ignored. Can be given multiple times\n
"},{"location":"tools-instructions/#examples","title":"Examples","text":" emmocheck http://emmo.info/emmo/1.0.0-alpha2\n emmocheck --database demo.sqlite3 http://www.emmc.info/emmc-csa/demo#\n emmocheck -l emmo.owl (in folder to which emmo was downloaded locally)\n emmocheck --check-imported --ignore-namespace=physicalistic --verbose --url-from-catalog emmo.owl (in folder with downloaded EMMO)\n emmocheck --check-imported --local --url-from-catalog --skip test_namespace emmo.owl\n
"},{"location":"tools-instructions/#example-configuration-file","title":"Example configuration file","text":"Example of YAML configuration file provided with the --configfile
option that will omit myunits.MyUnitCategory1
and myunits.MyUnitCategory2
from the unit dimensions test.
test_unit_dimensions:\n exceptions:\n - myunits.MyUnitCategory1\n - myunits.MyUnitCategory2\n
"},{"location":"tools-instructions/#ontoversion","title":"ontoversion
","text":"Prints version of an ontology to standard output.
This script uses RDFLib and the versionIRI tag of the ontology to infer the version.
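For illustration, the same lookup can be done directly with rdflib. This is only a sketch of the idea, not the ontoversion implementation itself, and the file name emmo.ttl is just an example:

# Read the owl:versionIRI of an ontology with rdflib (illustrative sketch).
from rdflib import Graph
from rdflib.namespace import OWL, RDF

graph = Graph()
graph.parse("emmo.ttl", format="turtle")  # any file or URL rdflib can parse

# The ontology header is the subject typed as owl:Ontology
# (this sketch assumes the file has exactly one such header).
ontology = next(graph.subjects(RDF.type, OWL.Ontology))
version_iri = graph.value(ontology, OWL.versionIRI)
print(version_iri)  # prints None if no versionIRI is present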
"},{"location":"tools-instructions/#usage_1","title":"Usage","text":"ontoversion [options] iri\n
"},{"location":"tools-instructions/#special-dependencies","title":"Special dependencies","text":"rdflib
(Python package)positional arguments:\n IRI IRI/file to OWL source to extract the version from.\n\noptional arguments:\n -h, --help show this help message and exit\n --format FORMAT, -f FORMAT\n OWL format. Default is \"xml\".\n
"},{"location":"tools-instructions/#examples_1","title":"Examples","text":"ontoversion http://emmo.info/emmo/1.0.0-alpha\n
Warning
Fails if ontology has no versionIRI tag.
"},{"location":"tools-instructions/#ontograph","title":"ontograph
","text":"Tool for visualizing ontologies.
"},{"location":"tools-instructions/#usage_2","title":"Usage","text":"ontograph [options] iri [output]\n
"},{"location":"tools-instructions/#dependencies","title":"Dependencies","text":"positional arguments:\n IRI File name or URI of the ontology to visualise.\n output name of output file.\n\noptional arguments:\n -h, --help show this help message and exit\n --format FORMAT, -f FORMAT\n Format of output file. By default it is inferred from\n the output file extension.\n --database FILENAME, -d FILENAME\n Load ontology from Owlready2 sqlite3 database. The\n `iri` argument should in this case be the IRI of the\n ontology you want to visualise.\n --local, -l Load imported ontologies locally. Their paths are\n specified in Prot\u00e9g\u00e9 catalog files or via the --path\n option. The IRI should be a file name.\n --catalog-file CATALOG_FILE\n Name of Prot\u00e9g\u00e9 catalog file in the same folder as the\n ontology. This option is used together with --local\n and defaults to \"catalog-v001.xml\".\n --path PATH Paths where imported ontologies can be found. May be\n provided as a comma-separated string and/or with\n multiple --path options.\n --reasoner [{FaCT++,HermiT,Pellet}]\n Run given reasoner on the ontology. Valid reasoners\n are \"FaCT++\" (default), \"HermiT\" and \"Pellet\".\n Note: FaCT++ is preferred with EMMO.\n --root ROOT, -r ROOT Name of root node in the graph. Defaults to all\n classes.\n --leaves LEAVES Leaf nodes for plotting sub-graphs. May be provided\n as a comma-separated string and/or with multiple\n --leaves options.\n --exclude EXCLUDE, -E EXCLUDE\n Nodes, including their subclasses, to exclude from\n sub-graphs. May be provided as a comma-separated\n string and/or with multiple --exclude options.\n --parents N, -p N Adds N levels of parents to graph.\n --relations RELATIONS, -R RELATIONS\n Comma-separated string of relations to visualise.\n Default is \"isA\". \"all\" means include all relations.\n --edgelabels, -e Whether to add labels to edges.\n --addnodes, -n Whether to add missing target nodes in relations.\n --addconstructs, -c Whether to add nodes representing class constructs.\n --rankdir {BT,TB,RL,LR}\n Graph direction (from leaves to root). Possible values\n are: \"BT\" (bottom-top, default), \"TB\" (top-bottom),\n \"RL\" (right-left) and \"LR\" (left-right).\n --style-file JSON_FILE, -s JSON_FILE\n A json file with style definitions.\n --legend, -L Whether to add a legend to the graph.\n --generate-style-file JSON_FILE, -S JSON_FILE\n Write default style file to a json file.\n --plot-modules, -m Whether to plot module inter-dependencies instead of\n their content.\n --display, -D Whether to display graph.\n
"},{"location":"tools-instructions/#examples_2","title":"Examples","text":"ontograph --relations=all --legend --format=pdf emmo-inferred emmo.pdf # complete ontology\nontograph --root=Holistic --relations=hasInput,hasOutput,hasTemporaryParticipant,hasAgent --parents=2 --legend --leaves=Measurement,Manufacturing,CompleteManufacturing,ManufacturedProduct,CommercialProduct,Manufacturer --format=png --exclude=Task,Workflow,Computation,MaterialTreatment emmo-inferred measurement.png\nontograph --root=Material --relations=all --legend --format=png emmo-inferred material.png\n
The figure below is generated with the last command in the list above. "},{"location":"tools-instructions/#ontodoc","title":"ontodoc
","text":"Tool for documenting ontologies.
"},{"location":"tools-instructions/#usage_3","title":"Usage","text":"ontodoc [options] iri outfile\n
"},{"location":"tools-instructions/#dependencies_1","title":"Dependencies","text":"positional arguments:\n IRI File name or URI of the ontology to document.\n OUTFILE Output file.\n\n optional arguments:\n -h, --help show this help message and exit\n --database FILENAME, -d FILENAME\n Load ontology from Owlready2 sqlite3 database. The\n `iri` argument should in this case be the IRI of the\n ontology you want to document.\n --local, -l Load imported ontologies locally. Their paths are\n specified in Prot\u00e9g\u00e9 catalog files or via the --path\n option. The IRI should be a file name.\n --imported, -i Include imported ontologies\n --no-catalog, -n Do not read url from catalog even if it exists.\n --catalog-file CATALOG_FILE\n Name of Prot\u00e9g\u00e9 catalog file in the same folder as the\n ontology. This option is used together with --local\n and defaults to \"catalog-v001.xml\".\n --path PATH Paths where imported ontologies can be found. May be\n provided as a comma-separated string and/or with\n multiple --path options.\n --reasoner [{FaCT++,HermiT,Pellet}]\n Run given reasoner on the ontology. Valid reasoners\n are \"FaCT++\" (default), \"HermiT\" and \"Pellet\".\n Note: FaCT++ is preferred with EMMO.\n --template FILE, -t FILE\n ontodoc input template. If not provided, a simple\n default template will be used. Don't confuse it with\n the pandoc templates.\n --format FORMAT, -f FORMAT\n Output format. May be \"md\", \"simple-html\" or any other\n format supported by pandoc. By default the format is\n inferred from --output.\n --figdir DIR, -D DIR Default directory to store generated figures. If a\n relative path is given, it is relative to the template\n (see --template), or the current directory, if\n --template is not given. Default: \"genfigs\"\n --figformat FIGFORMAT, -F FIGFORMAT\n Format for generated figures. The default is inferred\n from --format.\"\n --max-figwidth MAX_FIGWIDTH, -w MAX_FIGWIDTH\n Maximum figure width. The default is inferred from\n --format.\n --pandoc-option STRING, -p STRING\n Additional pandoc long options overriding those read\n from --pandoc-option-file. It is possible to remove\n pandoc option --XXX with \"--pandoc-option=no-XXX\".\n This option may be provided multiple times.\n --pandoc-option-file FILE, -P FILE\n YAML file with additional pandoc options. Note, that\n default pandoc options are read from the files\n \"pandoc-options.yaml\" and \"pandoc-FORMAT-options.yaml\"\n (where FORMAT is format specified with --format). This\n option allows to override the defaults and add\n additional pandoc options. This option may be provided\n multiple times.\n --keep-generated FILE, -k FILE\n Keep a copy of generated markdown input file for\n pandoc (for debugging).\n
"},{"location":"tools-instructions/#examples_3","title":"Examples","text":"Basic documentation of an ontology demo.owl
can be generated with:
ontodoc --format=simple-html --local demo.owl demo.html\n
See examples/emmodoc/README.md for how this tool is used to generate the html and pdf documentation of EMMO itself.
"},{"location":"tools-instructions/#ontoconvert","title":"ontoconvert
","text":"Tool for converting between different ontology formats.
"},{"location":"tools-instructions/#usage_4","title":"Usage","text":"ontoconvert [options] inputfile outputfile\n
"},{"location":"tools-instructions/#dependencies_2","title":"Dependencies","text":"rdflib
(Python package)positional arguments:\n INPUTFILE Name of inputfile.\n OUTPUTFILE Name og output file.\n\n optional arguments:\n -h, --help show this help message and exit\n --input-format, -f INPUT_FORMAT\n Inputformat. Default is to infer from input.\n --output-format, -F OUTPUT_FORMAT\n Default is to infer from output.\n --no-catalog, -n Do not read catalog even if it exists.\n --inferred, -i Add additional relations inferred by the FaCT++ reasoner to the converted ontology. Implies --squash.\n --base-iri BASE_IRI, -b BASE_IRI\n Base iri of inferred ontology. The default is the base\n iri of the input ontology with \"-inferred\" appended to\n it. Used together with --inferred.\n\n --recursive, -r The output is written to the directories matching the input. This requires Protege catalog files to be present.\n --squash, -s Squash imported ontologies into a single output file.\n
"},{"location":"tools-instructions/#examples_4","title":"Examples","text":"ontoconvert --recursive emmo.ttl owl/emmo.owl\nontoconvert --inferred emmo.ttl emmo-inferred.owl\n
Note, it is then required to add the argument only_local=True
when loading the locally converted ontology in EMMOntoPy, e.g.:
from ontopy import get_ontology\n\nemmo_ontology = get_ontology(\"emmo.owl\").load(only_local=True)\n
Since the catalog file will be overwritten in the above example, it is useful to write the output to a separate directory.
ontoconvert --recursive emmo.ttl owl/emmo.owl\n
"},{"location":"tools-instructions/#bugs","title":"Bugs","text":"Since parsing the results from the reasoner is currently broken in Owlready2 (v0.37), a workaround has been added to ontoconvert. This workaround only only supports FaCT++. Hence, HermiT and Pellet are currently not available.
"},{"location":"tools-instructions/#excel2onto","title":"excel2onto
","text":"Tool for converting EMMO-based ontologies from Excel to OWL, making it easy for non-ontologists to make EMMO-based domain ontologies.
The Excel file must be in the format provided by ontology_template.xlsx.
"},{"location":"tools-instructions/#usage_5","title":"Usage","text":"excel2onto [options] excelpath\n
"},{"location":"tools-instructions/#dependencies_3","title":"Dependencies","text":"pandas
(Python package)positional arguments:\n excelpath path to excel book\n\noptions:\n -h, --help show this help message and exit\n --output OUTPUT, -o OUTPUT\n Name of output ontology, \u00b4ontology.ttl\u00b4 is default\n --force, -f Whether to force generation of ontology on non-fatal\n error.\n
See the documentation of the python api for a thorough description of the requirements on the Excel workbook.
"},{"location":"tools-instructions/#examples_5","title":"Examples","text":"Create a new_ontology.ttl
turtle file from the Excel file new_ontology.xlsx:
excel2onto -o new_ontology.ttl new_ontology.xlsx\n
"},{"location":"tools-instructions/#bugs_1","title":"Bugs","text":"equivalentTo
is currently not supported.
A module for testing an ontology against conventions defined for EMMO.
A YAML file can be provided with additional test configurations.
Example configuration file:
test_unit_dimensions:\n exceptions:\n - myunits.MyUnitCategory1\n - myunits.MyUnitCategory2\n\nskip:\n - name_of_test_to_skip\n\nenable:\n - name_of_test_to_enable\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestEMMOConventions","title":" TestEMMOConventions
","text":"Base class for testing an ontology against EMMO conventions.
Source code inemmopy/emmocheck.py
class TestEMMOConventions(unittest.TestCase):\n \"\"\"Base class for testing an ontology against EMMO conventions.\"\"\"\n\n config = {} # configurations\n\n def get_config(self, string, default=None):\n \"\"\"Returns the configuration specified by `string`.\n\n If configuration is not found in the configuration file, `default` is\n returned.\n\n Sub-configurations can be accessed by separating the components with\n dots, like \"test_namespace.exceptions\".\n \"\"\"\n result = self.config\n try:\n for token in string.split(\".\"):\n result = result[token]\n except KeyError:\n return default\n return result\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestEMMOConventions.get_config","title":"get_config(self, string, default=None)
","text":"Returns the configuration specified by string
.
If configuration is not found in the configuration file, default
is returned.
Sub-configurations can be accessed by separating the components with dots, like \"test_namespace.exceptions\".
Source code inemmopy/emmocheck.py
def get_config(self, string, default=None):\n \"\"\"Returns the configuration specified by `string`.\n\n If configuration is not found in the configuration file, `default` is\n returned.\n\n Sub-configurations can be accessed by separating the components with\n dots, like \"test_namespace.exceptions\".\n \"\"\"\n result = self.config\n try:\n for token in string.split(\".\"):\n result = result[token]\n except KeyError:\n return default\n return result\n
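A hypothetical usage sketch of get_config; the configuration dictionary and keys below are invented for illustration and are not part of the EMMOntoPy test suite:

from emmopy.emmocheck import TestEMMOConventions

checker = TestEMMOConventions()
checker.config = {"test_namespace": {"exceptions": ["mydomain.MyClass"]}}

# Dotted access walks nested dictionaries.
print(checker.get_config("test_namespace.exceptions"))    # ['mydomain.MyClass']
# Missing keys fall back to the provided default.
print(checker.get_config("no_such_test.exceptions", ()))  # ()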
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions","title":" TestFunctionalEMMOConventions
","text":"Test functional EMMO conventions.
Source code inemmopy/emmocheck.py
class TestFunctionalEMMOConventions(TestEMMOConventions):\n \"\"\"Test functional EMMO conventions.\"\"\"\n\n def test_unit_dimension(self):\n \"\"\"Check that all measurement units have a physical dimension.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"metrology.MultipleUnit\",\n \"metrology.SubMultipleUnit\",\n \"metrology.OffSystemUnit\",\n \"metrology.PrefixedUnit\",\n \"metrology.NonPrefixedUnit\",\n \"metrology.SpecialUnit\",\n \"metrology.DerivedUnit\",\n \"metrology.BaseUnit\",\n \"metrology.UnitSymbol\",\n \"siunits.SICoherentDerivedUnit\",\n \"siunits.SINonCoherentDerivedUnit\",\n \"siunits.SISpecialUnit\",\n \"siunits.SICoherentUnit\",\n \"siunits.SIPrefixedUnit\",\n \"siunits.SIBaseUnit\",\n \"siunits.SIUnitSymbol\",\n \"siunits.SIUnit\",\n \"emmo.MultipleUnit\",\n \"emmo.SubMultipleUnit\",\n \"emmo.OffSystemUnit\",\n \"emmo.PrefixedUnit\",\n \"emmo.NonPrefixedUnit\",\n \"emmo.SpecialUnit\",\n \"emmo.DerivedUnit\",\n \"emmo.BaseUnit\",\n \"emmo.UnitSymbol\",\n \"emmo.SIAccepted\",\n \"emmo.SICoherentDerivedUnit\",\n \"emmo.SINonCoherentDerivedUnit\",\n \"emmo.SISpecialUnit\",\n \"emmo.SICoherentUnit\",\n \"emmo.SIPrefixedUnit\",\n \"emmo.SIBaseUnit\",\n \"emmo.SIUnitSymbol\",\n \"emmo.SIUnit\",\n )\n )\n if not hasattr(self.onto, \"MeasurementUnit\"):\n return\n exceptions.update(self.get_config(\"test_unit_dimension.exceptions\", ()))\n regex = re.compile(r\"^(emmo|metrology).hasDimensionString.value\\(.*\\)$\")\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.MeasurementUnit.descendants():\n if not self.check_imported and cls not in classes:\n continue\n # Assume that actual units are not subclassed\n if not list(cls.subclasses()) and repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertTrue(\n any(\n regex.match(repr(r))\n for r in cls.get_indirect_is_a()\n ),\n msg=cls,\n )\n\n def test_quantity_dimension_beta3(self):\n \"\"\"Check that all quantities have a physicalDimension annotation.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n 
\"emmo.PhysicoChemical\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Universal\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n regex = re.compile(\n \"^T([+-][1-9]|0) L([+-][1-9]|0) M([+-][1-9]|0) I([+-][1-9]|0) \"\n \"(H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) J([+-][1-9]|0)$\"\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n anno = cls.get_annotations()\n self.assertIn(\"physicalDimension\", anno, msg=cls)\n physdim = anno[\"physicalDimension\"].first()\n self.assertRegex(physdim, regex, msg=cls)\n\n def test_quantity_dimension(self):\n \"\"\"Check that all quantities have a physicalDimension.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n # pylint: disable=invalid-name\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.ISO80000Categorised\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.EquilibriumConstant\", # physical dimension may change\n \"emmo.Solubility\",\n \"emmo.Universal\",\n \"emmo.Intensive\",\n \"emmo.Extensive\",\n \"emmo.Concentration\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if issubclass(cls, self.onto.ISO80000Categorised):\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n for r in cls.get_indirect_is_a():\n if isinstance(r, owlready2.Restriction) and repr(\n r\n ).startswith(\"emmo.hasMeasurementUnit.some\"):\n self.assertTrue(\n issubclass(\n r.value,\n (\n self.onto.DimensionalUnit,\n self.onto.DimensionlessUnit,\n ),\n )\n )\n break\n else:\n self.assertTrue(\n issubclass(cls, self.onto.ISQDimensionlessQuantity)\n )\n\n def test_dimensional_unit(self):\n 
\"\"\"Check correct syntax of dimension string of dimensional units.\"\"\"\n\n # This test requires that the ontology has imported SIDimensionalUnit\n if \"SIDimensionalUnit\" not in self.onto:\n self.skipTest(\"SIDimensionalUnit is not imported\")\n\n # pylint: disable=invalid-name\n regex = re.compile(\n \"^T([+-][1-9][0-9]*|0) L([+-][1-9]|0) M([+-][1-9]|0) \"\n \"I([+-][1-9]|0) (H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) \"\n \"J([+-][1-9]|0)$\"\n )\n for cls in self.onto.SIDimensionalUnit.__subclasses__():\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertEqual(len(cls.equivalent_to), 1)\n r = cls.equivalent_to[0]\n self.assertIsInstance(r, owlready2.Restriction)\n self.assertRegex(r.value, regex)\n\n def test_physical_quantity_dimension(self):\n \"\"\"Check that all physical quantities have `hasPhysicalDimension`.\n\n Note: this test will fail before isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n\n \"\"\"\n exceptions = set(\n (\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclearPhysicsQuantity\",\n \"emmo.ThermodynamicalQuantity\",\n \"emmo.LightAndRadiationQuantity\",\n \"emmo.SpaceAndTimeQuantity\",\n \"emmo.AcousticQuantity\",\n \"emmo.PhysioChememicalQuantity\",\n \"emmo.ElectromagneticQuantity\",\n \"emmo.MechanicalQuantity\",\n \"emmo.CondensedMatterPhysicsQuantity\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Extensive\",\n \"emmo.Intensive\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_physical_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n try:\n class_props = cls.INDIRECT_get_class_properties()\n except AttributeError:\n # The INDIRECT_get_class_properties() method\n # does not support inverse properties. 
Build\n # class_props manually...\n class_props = set()\n for _ in cls.mro():\n if hasattr(_, \"is_a\"):\n class_props.update(\n [\n restriction.property\n for restriction in _.is_a\n if isinstance(\n restriction, owlready2.Restriction\n )\n ]\n )\n\n self.assertIn(\n self.onto.hasPhysicalDimension, class_props, msg=cls\n )\n\n def test_namespace(self):\n \"\"\"Check that all IRIs are namespaced after their (sub)ontology.\n\n Configurations:\n exceptions - full name of entities to ignore.\n \"\"\"\n exceptions = set(\n (\n \"owl.qualifiedCardinality\",\n \"owl.minQualifiedCardinality\",\n \"terms.creator\",\n \"terms.contributor\",\n \"terms.publisher\",\n \"terms.title\",\n \"terms.license\",\n \"terms.abstract\",\n \"core.prefLabel\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"mereotopology.Item\",\n \"manufacturing.EngineeredMaterial\",\n )\n )\n exceptions.update(self.get_config(\"test_namespace.exceptions\", ()))\n\n def checker(onto, ignore_namespace):\n if list(\n filter(onto.base_iri.strip(\"#\").endswith, self.ignore_namespace)\n ):\n print(f\"Skipping namespace: {onto.base_iri}\")\n return\n entities = itertools.chain(\n onto.classes(),\n onto.object_properties(),\n onto.data_properties(),\n onto.individuals(),\n onto.annotation_properties(),\n )\n for entity in entities:\n if entity not in visited and repr(entity) not in exceptions:\n visited.add(entity)\n with self.subTest(\n iri=entity.iri,\n base_iri=onto.base_iri,\n entity=repr(entity),\n ):\n self.assertTrue(\n entity.iri.endswith(entity.name),\n msg=(\n \"the final part of entity IRIs must be their \"\n \"name\"\n ),\n )\n self.assertEqual(\n entity.iri,\n onto.base_iri + entity.name,\n msg=(\n f\"IRI {entity.iri!r} does not correspond to \"\n f\"module namespace: {onto.base_iri!r}\"\n ),\n )\n\n if self.check_imported:\n for imp_onto in onto.imported_ontologies:\n if imp_onto not in visited_onto:\n visited_onto.add(imp_onto)\n checker(imp_onto, ignore_namespace)\n\n visited = set()\n visited_onto = set()\n checker(self.onto, self.ignore_namespace)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_dimensional_unit","title":"test_dimensional_unit(self)
","text":"Check correct syntax of dimension string of dimensional units.
Source code inemmopy/emmocheck.py
def test_dimensional_unit(self):\n \"\"\"Check correct syntax of dimension string of dimensional units.\"\"\"\n\n # This test requires that the ontology has imported SIDimensionalUnit\n if \"SIDimensionalUnit\" not in self.onto:\n self.skipTest(\"SIDimensionalUnit is not imported\")\n\n # pylint: disable=invalid-name\n regex = re.compile(\n \"^T([+-][1-9][0-9]*|0) L([+-][1-9]|0) M([+-][1-9]|0) \"\n \"I([+-][1-9]|0) (H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) \"\n \"J([+-][1-9]|0)$\"\n )\n for cls in self.onto.SIDimensionalUnit.__subclasses__():\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertEqual(len(cls.equivalent_to), 1)\n r = cls.equivalent_to[0]\n self.assertIsInstance(r, owlready2.Restriction)\n self.assertRegex(r.value, regex)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_namespace","title":"test_namespace(self)
","text":"Check that all IRIs are namespaced after their (sub)ontology.
Configurations
exceptions - full name of entities to ignore.
Source code inemmopy/emmocheck.py
def test_namespace(self):\n \"\"\"Check that all IRIs are namespaced after their (sub)ontology.\n\n Configurations:\n exceptions - full name of entities to ignore.\n \"\"\"\n exceptions = set(\n (\n \"owl.qualifiedCardinality\",\n \"owl.minQualifiedCardinality\",\n \"terms.creator\",\n \"terms.contributor\",\n \"terms.publisher\",\n \"terms.title\",\n \"terms.license\",\n \"terms.abstract\",\n \"core.prefLabel\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"mereotopology.Item\",\n \"manufacturing.EngineeredMaterial\",\n )\n )\n exceptions.update(self.get_config(\"test_namespace.exceptions\", ()))\n\n def checker(onto, ignore_namespace):\n if list(\n filter(onto.base_iri.strip(\"#\").endswith, self.ignore_namespace)\n ):\n print(f\"Skipping namespace: {onto.base_iri}\")\n return\n entities = itertools.chain(\n onto.classes(),\n onto.object_properties(),\n onto.data_properties(),\n onto.individuals(),\n onto.annotation_properties(),\n )\n for entity in entities:\n if entity not in visited and repr(entity) not in exceptions:\n visited.add(entity)\n with self.subTest(\n iri=entity.iri,\n base_iri=onto.base_iri,\n entity=repr(entity),\n ):\n self.assertTrue(\n entity.iri.endswith(entity.name),\n msg=(\n \"the final part of entity IRIs must be their \"\n \"name\"\n ),\n )\n self.assertEqual(\n entity.iri,\n onto.base_iri + entity.name,\n msg=(\n f\"IRI {entity.iri!r} does not correspond to \"\n f\"module namespace: {onto.base_iri!r}\"\n ),\n )\n\n if self.check_imported:\n for imp_onto in onto.imported_ontologies:\n if imp_onto not in visited_onto:\n visited_onto.add(imp_onto)\n checker(imp_onto, ignore_namespace)\n\n visited = set()\n visited_onto = set()\n checker(self.onto, self.ignore_namespace)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_physical_quantity_dimension","title":"test_physical_quantity_dimension(self)
","text":"Check that all physical quantities have hasPhysicalDimension
.
Note: this test will fail before isq is moved to emmo/domain.
Configurations
exceptions - full class names of classes to ignore.
Source code inemmopy/emmocheck.py
def test_physical_quantity_dimension(self):\n \"\"\"Check that all physical quantities have `hasPhysicalDimension`.\n\n Note: this test will fail before isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n\n \"\"\"\n exceptions = set(\n (\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclearPhysicsQuantity\",\n \"emmo.ThermodynamicalQuantity\",\n \"emmo.LightAndRadiationQuantity\",\n \"emmo.SpaceAndTimeQuantity\",\n \"emmo.AcousticQuantity\",\n \"emmo.PhysioChememicalQuantity\",\n \"emmo.ElectromagneticQuantity\",\n \"emmo.MechanicalQuantity\",\n \"emmo.CondensedMatterPhysicsQuantity\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Extensive\",\n \"emmo.Intensive\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_physical_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n try:\n class_props = cls.INDIRECT_get_class_properties()\n except AttributeError:\n # The INDIRECT_get_class_properties() method\n # does not support inverse properties. Build\n # class_props manually...\n class_props = set()\n for _ in cls.mro():\n if hasattr(_, \"is_a\"):\n class_props.update(\n [\n restriction.property\n for restriction in _.is_a\n if isinstance(\n restriction, owlready2.Restriction\n )\n ]\n )\n\n self.assertIn(\n self.onto.hasPhysicalDimension, class_props, msg=cls\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_quantity_dimension","title":"test_quantity_dimension(self)
","text":"Check that all quantities have a physicalDimension.
Note: this test will be deprecated when isq is moved to emmo/domain.
Configurations
exceptions - full class names of classes to ignore.
Source code inemmopy/emmocheck.py
def test_quantity_dimension(self):\n \"\"\"Check that all quantities have a physicalDimension.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n # pylint: disable=invalid-name\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.ISO80000Categorised\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.EquilibriumConstant\", # physical dimension may change\n \"emmo.Solubility\",\n \"emmo.Universal\",\n \"emmo.Intensive\",\n \"emmo.Extensive\",\n \"emmo.Concentration\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if issubclass(cls, self.onto.ISO80000Categorised):\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n for r in cls.get_indirect_is_a():\n if isinstance(r, owlready2.Restriction) and repr(\n r\n ).startswith(\"emmo.hasMeasurementUnit.some\"):\n self.assertTrue(\n issubclass(\n r.value,\n (\n self.onto.DimensionalUnit,\n self.onto.DimensionlessUnit,\n ),\n )\n )\n break\n else:\n self.assertTrue(\n issubclass(cls, self.onto.ISQDimensionlessQuantity)\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_quantity_dimension_beta3","title":"test_quantity_dimension_beta3(self)
","text":"Check that all quantities have a physicalDimension annotation.
Note: this test will be deprecated when isq is moved to emmo/domain.
Configurations
exceptions - full class names of classes to ignore.
Source code inemmopy/emmocheck.py
def test_quantity_dimension_beta3(self):\n \"\"\"Check that all quantities have a physicalDimension annotation.\n\n Note: this test will be deprecated when isq is moved to emmo/domain.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"properties.ModelledQuantitativeProperty\",\n \"properties.MeasuredQuantitativeProperty\",\n \"properties.ConventionalQuantitativeProperty\",\n \"metrology.QuantitativeProperty\",\n \"metrology.Quantity\",\n \"metrology.OrdinalQuantity\",\n \"metrology.BaseQuantity\",\n \"metrology.PhysicalConstant\",\n \"metrology.PhysicalQuantity\",\n \"metrology.ExactConstant\",\n \"metrology.MeasuredConstant\",\n \"metrology.DerivedQuantity\",\n \"isq.ISQBaseQuantity\",\n \"isq.InternationalSystemOfQuantity\",\n \"isq.ISQDerivedQuantity\",\n \"isq.SIExactConstant\",\n \"emmo.ModelledQuantitativeProperty\",\n \"emmo.MeasuredQuantitativeProperty\",\n \"emmo.ConventionalQuantitativeProperty\",\n \"emmo.QuantitativeProperty\",\n \"emmo.Quantity\",\n \"emmo.OrdinalQuantity\",\n \"emmo.BaseQuantity\",\n \"emmo.PhysicalConstant\",\n \"emmo.PhysicalQuantity\",\n \"emmo.ExactConstant\",\n \"emmo.MeasuredConstant\",\n \"emmo.DerivedQuantity\",\n \"emmo.ISQBaseQuantity\",\n \"emmo.InternationalSystemOfQuantity\",\n \"emmo.ISQDerivedQuantity\",\n \"emmo.SIExactConstant\",\n \"emmo.NonSIUnits\",\n \"emmo.StandardizedPhysicalQuantity\",\n \"emmo.CategorizedPhysicalQuantity\",\n \"emmo.AtomicAndNuclear\",\n \"emmo.Defined\",\n \"emmo.Electromagnetic\",\n \"emmo.FrequentlyUsed\",\n \"emmo.PhysicoChemical\",\n \"emmo.ChemicalCompositionQuantity\",\n \"emmo.Universal\",\n )\n )\n if not hasattr(self.onto, \"PhysicalQuantity\"):\n return\n exceptions.update(\n self.get_config(\"test_quantity_dimension.exceptions\", ())\n )\n regex = re.compile(\n \"^T([+-][1-9]|0) L([+-][1-9]|0) M([+-][1-9]|0) I([+-][1-9]|0) \"\n \"(H|\u0398)([+-][1-9]|0) N([+-][1-9]|0) J([+-][1-9]|0)$\"\n )\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.PhysicalQuantity.descendants():\n if not self.check_imported and cls not in classes:\n continue\n if repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n anno = cls.get_annotations()\n self.assertIn(\"physicalDimension\", anno, msg=cls)\n physdim = anno[\"physicalDimension\"].first()\n self.assertRegex(physdim, regex, msg=cls)\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestFunctionalEMMOConventions.test_unit_dimension","title":"test_unit_dimension(self)
","text":"Check that all measurement units have a physical dimension.
Configurations
exceptions - full class names of classes to ignore.
Source code inemmopy/emmocheck.py
def test_unit_dimension(self):\n \"\"\"Check that all measurement units have a physical dimension.\n\n Configurations:\n exceptions - full class names of classes to ignore.\n \"\"\"\n exceptions = set(\n (\n \"metrology.MultipleUnit\",\n \"metrology.SubMultipleUnit\",\n \"metrology.OffSystemUnit\",\n \"metrology.PrefixedUnit\",\n \"metrology.NonPrefixedUnit\",\n \"metrology.SpecialUnit\",\n \"metrology.DerivedUnit\",\n \"metrology.BaseUnit\",\n \"metrology.UnitSymbol\",\n \"siunits.SICoherentDerivedUnit\",\n \"siunits.SINonCoherentDerivedUnit\",\n \"siunits.SISpecialUnit\",\n \"siunits.SICoherentUnit\",\n \"siunits.SIPrefixedUnit\",\n \"siunits.SIBaseUnit\",\n \"siunits.SIUnitSymbol\",\n \"siunits.SIUnit\",\n \"emmo.MultipleUnit\",\n \"emmo.SubMultipleUnit\",\n \"emmo.OffSystemUnit\",\n \"emmo.PrefixedUnit\",\n \"emmo.NonPrefixedUnit\",\n \"emmo.SpecialUnit\",\n \"emmo.DerivedUnit\",\n \"emmo.BaseUnit\",\n \"emmo.UnitSymbol\",\n \"emmo.SIAccepted\",\n \"emmo.SICoherentDerivedUnit\",\n \"emmo.SINonCoherentDerivedUnit\",\n \"emmo.SISpecialUnit\",\n \"emmo.SICoherentUnit\",\n \"emmo.SIPrefixedUnit\",\n \"emmo.SIBaseUnit\",\n \"emmo.SIUnitSymbol\",\n \"emmo.SIUnit\",\n )\n )\n if not hasattr(self.onto, \"MeasurementUnit\"):\n return\n exceptions.update(self.get_config(\"test_unit_dimension.exceptions\", ()))\n regex = re.compile(r\"^(emmo|metrology).hasDimensionString.value\\(.*\\)$\")\n classes = set(self.onto.classes(self.check_imported))\n for cls in self.onto.MeasurementUnit.descendants():\n if not self.check_imported and cls not in classes:\n continue\n # Assume that actual units are not subclassed\n if not list(cls.subclasses()) and repr(cls) not in exceptions:\n with self.subTest(cls=cls, label=get_label(cls)):\n self.assertTrue(\n any(\n regex.match(repr(r))\n for r in cls.get_indirect_is_a()\n ),\n msg=cls,\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions","title":" TestSyntacticEMMOConventions
","text":"Test syntactic EMMO conventions.
Source code inemmopy/emmocheck.py
class TestSyntacticEMMOConventions(TestEMMOConventions):\n \"\"\"Test syntactic EMMO conventions.\"\"\"\n\n def test_number_of_labels(self):\n \"\"\"Check that all entities have one and only one prefLabel.\n\n Use \"altLabel\" for synonyms.\n\n The only allowed exception is entities who's representation\n starts with \"owl.\".\n \"\"\"\n exceptions = set(\n (\n \"0.1.homepage\", # foaf:homepage\n \"0.1.logo\",\n \"0.1.page\",\n \"0.1.name\",\n \"bibo:doi\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"core.prefLabel\",\n \"terms.abstract\",\n \"terms.alternative\",\n \"terms:bibliographicCitation\",\n \"terms.contributor\",\n \"terms.created\",\n \"terms.creator\",\n \"terms.hasFormat\",\n \"terms.identifier\",\n \"terms.issued\",\n \"terms.license\",\n \"terms.modified\",\n \"terms.publisher\",\n \"terms.source\",\n \"terms.title\",\n \"vann:preferredNamespacePrefix\",\n \"vann:preferredNamespaceUri\",\n )\n )\n exceptions.update(\n self.get_config(\"test_number_of_labels.exceptions\", ())\n )\n if (\n \"prefLabel\"\n in self.onto.world._props # pylint: disable=protected-access\n ):\n for entity in self.onto.classes(self.check_imported):\n if repr(entity) not in exceptions:\n with self.subTest(\n entity=entity,\n label=get_label(entity),\n prefLabels=entity.prefLabel,\n ):\n if not repr(entity).startswith(\"owl.\"):\n self.assertTrue(hasattr(entity, \"prefLabel\"))\n self.assertEqual(1, len(entity.prefLabel))\n else:\n self.fail(\"ontology has no prefLabel\")\n\n def test_class_label(self):\n \"\"\"Check that class labels are CamelCase and valid identifiers.\n\n For CamelCase, we are currently only checking that the labels\n start with upper case.\n \"\"\"\n exceptions = set(\n (\n \"0-manifold\", # not needed in 1.0.0-beta\n \"1-manifold\",\n \"2-manifold\",\n \"3-manifold\",\n \"C++\",\n \"3DPrinting\",\n )\n )\n exceptions.update(self.get_config(\"test_class_label.exceptions\", ()))\n\n for cls in self.onto.classes(self.check_imported):\n for label in cls.label + getattr(cls, \"prefLabel\", []):\n if str(label) not in exceptions:\n with self.subTest(entity=cls, label=label):\n self.assertTrue(label.isidentifier())\n self.assertTrue(label[0].isupper())\n\n def test_object_property_label(self):\n \"\"\"Check that object property labels are lowerCamelCase.\n\n Allowed exceptions: \"EMMORelation\"\n\n If they start with \"has\" or \"is\" they should be followed by a\n upper case letter.\n\n If they start with \"is\" they should also end with \"Of\".\n \"\"\"\n exceptions = set((\"EMMORelation\",))\n exceptions.update(\n self.get_config(\"test_object_property_label.exceptions\", ())\n )\n\n for obj_prop in self.onto.object_properties():\n if repr(obj_prop) not in exceptions:\n for label in obj_prop.label:\n with self.subTest(entity=obj_prop, label=label):\n self.assertTrue(\n label[0].islower(), \"label start with lowercase\"\n )\n if label.startswith(\"has\"):\n self.assertTrue(\n label[3].isupper(),\n 'what follows \"has\" must be \"uppercase\"',\n )\n if label.startswith(\"is\"):\n self.assertTrue(\n label[2].isupper(),\n 'what follows \"is\" must be \"uppercase\"',\n )\n self.assertTrue(\n label.endswith((\"Of\", \"With\")),\n 'should end with \"Of\" or \"With\"',\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions.test_class_label","title":"test_class_label(self)
","text":"Check that class labels are CamelCase and valid identifiers.
For CamelCase, we are currently only checking that the labels start with upper case.
Source code inemmopy/emmocheck.py
def test_class_label(self):\n \"\"\"Check that class labels are CamelCase and valid identifiers.\n\n For CamelCase, we are currently only checking that the labels\n start with upper case.\n \"\"\"\n exceptions = set(\n (\n \"0-manifold\", # not needed in 1.0.0-beta\n \"1-manifold\",\n \"2-manifold\",\n \"3-manifold\",\n \"C++\",\n \"3DPrinting\",\n )\n )\n exceptions.update(self.get_config(\"test_class_label.exceptions\", ()))\n\n for cls in self.onto.classes(self.check_imported):\n for label in cls.label + getattr(cls, \"prefLabel\", []):\n if str(label) not in exceptions:\n with self.subTest(entity=cls, label=label):\n self.assertTrue(label.isidentifier())\n self.assertTrue(label[0].isupper())\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions.test_number_of_labels","title":"test_number_of_labels(self)
","text":"Check that all entities have one and only one prefLabel.
Use \"altLabel\" for synonyms.
The only allowed exception is entities whose representation starts with \"owl.\".
Source code inemmopy/emmocheck.py
def test_number_of_labels(self):\n \"\"\"Check that all entities have one and only one prefLabel.\n\n Use \"altLabel\" for synonyms.\n\n The only allowed exception is entities who's representation\n starts with \"owl.\".\n \"\"\"\n exceptions = set(\n (\n \"0.1.homepage\", # foaf:homepage\n \"0.1.logo\",\n \"0.1.page\",\n \"0.1.name\",\n \"bibo:doi\",\n \"core.altLabel\",\n \"core.hiddenLabel\",\n \"core.prefLabel\",\n \"terms.abstract\",\n \"terms.alternative\",\n \"terms:bibliographicCitation\",\n \"terms.contributor\",\n \"terms.created\",\n \"terms.creator\",\n \"terms.hasFormat\",\n \"terms.identifier\",\n \"terms.issued\",\n \"terms.license\",\n \"terms.modified\",\n \"terms.publisher\",\n \"terms.source\",\n \"terms.title\",\n \"vann:preferredNamespacePrefix\",\n \"vann:preferredNamespaceUri\",\n )\n )\n exceptions.update(\n self.get_config(\"test_number_of_labels.exceptions\", ())\n )\n if (\n \"prefLabel\"\n in self.onto.world._props # pylint: disable=protected-access\n ):\n for entity in self.onto.classes(self.check_imported):\n if repr(entity) not in exceptions:\n with self.subTest(\n entity=entity,\n label=get_label(entity),\n prefLabels=entity.prefLabel,\n ):\n if not repr(entity).startswith(\"owl.\"):\n self.assertTrue(hasattr(entity, \"prefLabel\"))\n self.assertEqual(1, len(entity.prefLabel))\n else:\n self.fail(\"ontology has no prefLabel\")\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.TestSyntacticEMMOConventions.test_object_property_label","title":"test_object_property_label(self)
","text":"Check that object property labels are lowerCamelCase.
Allowed exceptions: \"EMMORelation\"
If they start with \"has\" or \"is\" they should be followed by a upper case letter.
If they start with \"is\" they should also end with \"Of\".
Source code inemmopy/emmocheck.py
def test_object_property_label(self):\n \"\"\"Check that object property labels are lowerCamelCase.\n\n Allowed exceptions: \"EMMORelation\"\n\n If they start with \"has\" or \"is\" they should be followed by a\n upper case letter.\n\n If they start with \"is\" they should also end with \"Of\".\n \"\"\"\n exceptions = set((\"EMMORelation\",))\n exceptions.update(\n self.get_config(\"test_object_property_label.exceptions\", ())\n )\n\n for obj_prop in self.onto.object_properties():\n if repr(obj_prop) not in exceptions:\n for label in obj_prop.label:\n with self.subTest(entity=obj_prop, label=label):\n self.assertTrue(\n label[0].islower(), \"label start with lowercase\"\n )\n if label.startswith(\"has\"):\n self.assertTrue(\n label[3].isupper(),\n 'what follows \"has\" must be \"uppercase\"',\n )\n if label.startswith(\"is\"):\n self.assertTrue(\n label[2].isupper(),\n 'what follows \"is\" must be \"uppercase\"',\n )\n self.assertTrue(\n label.endswith((\"Of\", \"With\")),\n 'should end with \"Of\" or \"With\"',\n )\n
"},{"location":"api_reference/emmopy/emmocheck/#emmopy.emmocheck.main","title":"main(argv=None)
","text":"Run all checks on ontology iri
.
Default is 'http://emmo.info/emmo'.
Parameters:
Name Type Description Default
argv list List of arguments, similar to sys.argv[1:]. Mainly for testing purposes, since it allows one to invoke the tool manually / through Python. None
Source code in emmopy/emmocheck.py
def main(\n argv: list = None,\n): # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n \"\"\"Run all checks on ontology `iri`.\n\n Default is 'http://emmo.info/emmo'.\n\n Parameters:\n argv: List of arguments, similar to `sys.argv[1:]`.\n Mainly for testing purposes, since it allows one to invoke the tool\n manually / through Python.\n\n \"\"\"\n parser = argparse.ArgumentParser(description=__doc__)\n parser.add_argument(\"iri\", help=\"File name or URI to the ontology to test.\")\n parser.add_argument(\n \"--database\",\n \"-d\",\n metavar=\"FILENAME\",\n default=\":memory:\",\n help=(\n \"Load ontology from Owlready2 sqlite3 database. The `iri` argument\"\n \" should in this case be the IRI of the ontology you want to \"\n \"check.\"\n ),\n )\n parser.add_argument(\n \"--local\",\n \"-l\",\n action=\"store_true\",\n help=(\n \"Load imported ontologies locally. Their paths are specified in \"\n \"Prot\u00e8g\u00e8 catalog files or via the --path option. The IRI should \"\n \"be a file name.\"\n ),\n )\n parser.add_argument(\n \"--catalog-file\",\n default=\"catalog-v001.xml\",\n help=(\n \"Name of Prot\u00e8g\u00e8 catalog file in the same folder as the ontology. \"\n \"This option is used together with --local and defaults to \"\n '\"catalog-v001.xml\".'\n ),\n )\n parser.add_argument(\n \"--path\",\n action=\"append\",\n default=[],\n help=(\n \"Paths where imported ontologies can be found. May be provided as \"\n \"a comma-separated string and/or with multiple --path options.\"\n ),\n )\n parser.add_argument(\n \"--check-imported\",\n \"-i\",\n action=\"store_true\",\n help=\"Whether to check imported ontologies.\",\n )\n parser.add_argument(\n \"--verbose\", \"-v\", action=\"store_true\", help=\"Verbosity level.\"\n )\n parser.add_argument(\n \"--configfile\",\n \"-c\",\n help=\"A yaml file with additional test configurations.\",\n )\n parser.add_argument(\n \"--skip\",\n \"-s\",\n action=\"append\",\n default=[],\n help=(\n \"Shell pattern matching tests to skip. This option may be \"\n \"provided multiple times.\"\n ),\n )\n parser.add_argument(\n \"--enable\",\n \"-e\",\n action=\"append\",\n default=[],\n help=(\n \"Shell pattern matching tests to enable that have been skipped by \"\n \"default or in the config file. This option may be provided \"\n \"multiple times.\"\n ),\n )\n parser.add_argument( # deprecated, replaced by --no-catalog\n \"--url-from-catalog\",\n \"-u\",\n default=None,\n action=\"store_true\",\n help=\"Get url from catalog file\",\n )\n parser.add_argument(\n \"--no-catalog\",\n action=\"store_false\",\n dest=\"url_from_catalog\",\n default=None,\n help=\"Whether to not read catalog file even if it exists.\",\n )\n parser.add_argument(\n \"--ignore-namespace\",\n \"-n\",\n action=\"append\",\n default=[],\n help=\"Namespace to be ignored. Can be given multiple times\",\n )\n\n # Options to pass forward to unittest\n parser.add_argument(\n \"--buffer\",\n \"-b\",\n dest=\"unittest\",\n action=\"append_const\",\n const=\"-b\",\n help=(\n \"The standard output and standard error streams are buffered \"\n \"during the test run. Output during a passing test is discarded. \"\n \"Output is echoed normally on test fail or error and is added to \"\n \"the failure messages.\"\n ),\n )\n parser.add_argument(\n \"--catch\",\n dest=\"unittest\",\n action=\"append_const\",\n const=\"-c\",\n help=(\n \"Control-C during the test run waits for the current test to end \"\n \"and then reports all the results so far. 
A second control-C \"\n \"raises the normal KeyboardInterrupt exception\"\n ),\n )\n parser.add_argument(\n \"--failfast\",\n \"-f\",\n dest=\"unittest\",\n action=\"append_const\",\n const=\"-f\",\n help=\"Stop the test run on the first error or failure.\",\n )\n try:\n args = parser.parse_args(args=argv)\n sys.argv[1:] = args.unittest if args.unittest else []\n if args.verbose:\n sys.argv.append(\"-v\")\n except SystemExit as exc:\n sys.exit(exc.code) # Exit without traceback on invalid arguments\n\n # Append to onto_path\n for paths in args.path:\n for path in paths.split(\",\"):\n if path not in onto_path:\n onto_path.append(path)\n\n # Load ontology\n world = World(filename=args.database)\n if args.database != \":memory:\" and args.iri not in world.ontologies:\n parser.error(\n \"The IRI argument should be one of the ontologies in \"\n \"the database:\\n \" + \"\\n \".join(world.ontologies.keys())\n )\n\n onto = world.get_ontology(args.iri)\n onto.load(\n only_local=args.local,\n url_from_catalog=args.url_from_catalog,\n catalog_file=args.catalog_file,\n )\n\n # Store settings TestEMMOConventions\n TestEMMOConventions.onto = onto\n TestEMMOConventions.check_imported = args.check_imported\n TestEMMOConventions.ignore_namespace = args.ignore_namespace\n\n # Configure tests\n verbosity = 2 if args.verbose else 1\n if args.configfile:\n import yaml # pylint: disable=import-outside-toplevel\n\n with open(args.configfile, \"rt\") as handle:\n TestEMMOConventions.config.update(\n yaml.load(handle, Loader=yaml.SafeLoader)\n )\n\n # Run all subclasses of TestEMMOConventions as test suites\n status = 0\n for cls in TestEMMOConventions.__subclasses__():\n # pylint: disable=cell-var-from-loop,undefined-loop-variable\n\n suite = unittest.TestLoader().loadTestsFromTestCase(cls)\n\n # Mark tests to be skipped\n for test in suite:\n name = test.id().split(\".\")[-1]\n skipped = set( # skipped by default\n [\n \"test_namespace\",\n \"test_physical_quantity_dimension_annotation\",\n \"test_quantity_dimension_beta3\",\n \"test_physical_quantity_dimension\",\n ]\n )\n msg = {name: \"skipped by default\" for name in skipped}\n\n # enable/skip tests from config file\n for pattern in test.get_config(\"enable\", ()):\n if fnmatch.fnmatchcase(name, pattern):\n skipped.remove(name)\n for pattern in test.get_config(\"skip\", ()):\n if fnmatch.fnmatchcase(name, pattern):\n skipped.add(name)\n msg[name] = \"skipped from config file\"\n\n # enable/skip from command line\n for pattern in args.enable:\n if fnmatch.fnmatchcase(name, pattern):\n skipped.remove(name)\n for pattern in args.skip:\n if fnmatch.fnmatchcase(name, pattern):\n skipped.add(name)\n msg[name] = \"skipped from command line\"\n\n if name in skipped:\n setattr(test, \"setUp\", lambda: test.skipTest(msg.get(name, \"\")))\n\n runner = TextTestRunner(verbosity=verbosity)\n runner.resultclass.checkmode = True\n result = runner.run(suite)\n if result.failures:\n status = 1\n\n return status\n
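Because main() accepts an argv list and returns the overall status, the checks can also be started from Python rather than from the command line. A small sketch (the ontology file name and options are only examples):

# Programmatic equivalent of: emmocheck --local --check-imported emmo.ttl
from emmopy.emmocheck import main

status = main(["--local", "--check-imported", "emmo.ttl"])
print("all checks passed" if status == 0 else "some checks failed")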
"},{"location":"api_reference/emmopy/emmopy/","title":"emmopy","text":""},{"location":"api_reference/emmopy/emmopy/#emmopy.emmopy--emmopyemmopy","title":"emmopy.emmopy
","text":"Automagically retrieve the EMMO utilizing ontopy.get_ontology
.
get_emmo(inferred=True)
","text":"Returns the current version of emmo.
Parameters:
Name Type Description Default
inferred
Optional[bool]
Whether to import the inferred version of emmo or not. Default is True.
True
Returns:
Type Description
Ontology
The loaded emmo ontology.
Source code in emmopy/emmopy.py
def get_emmo(inferred: Optional[bool] = True) -> \"Ontology\":\n \"\"\"Returns the current version of emmo.\n\n Args:\n inferred: Whether to import the inferred version of emmo or not.\n Default is True.\n\n Returns:\n The loaded emmo ontology.\n\n \"\"\"\n name = \"emmo-inferred\" if inferred in [True, None] else \"emmo\"\n return get_ontology(name).load(prefix_emmo=True)\n
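As a usage illustration (assuming `get_emmo` is re-exported by the `emmopy` package and that the EMMO ontologies can be downloaded):

```python
from emmopy import get_emmo

emmo = get_emmo()                          # pre-inferred EMMO (the default)
emmo_asserted = get_emmo(inferred=False)   # asserted, non-inferred EMMO

# Entities can then be accessed by their skos:prefLabel, e.g.
print(emmo.Atom.iri)
```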
"},{"location":"api_reference/ontopy/colortest/","title":"colortest","text":""},{"location":"api_reference/ontopy/colortest/#ontopy.colortest--ontopycolortest","title":"ontopy.colortest
","text":"Print tests in colors.
Adapted from https://github.com/meshy/colour-runner by Charlie Denton. License: MIT.
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult","title":" ColourTextTestResult (TestResult)
","text":"A test result class that prints colour formatted text results to a stream.
Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py
Source code in ontopy/colortest.py
class ColourTextTestResult(TestResult):\n \"\"\"\n A test result class that prints colour formatted text results to a stream.\n\n Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py\n \"\"\"\n\n formatter = formatters.Terminal256Formatter() # pylint: disable=no-member\n lexer = Lexer()\n separator1 = \"=\" * 70\n separator2 = \"-\" * 70\n indent = \" \" * 4\n # if `checkmode` is true, simplified output will be generated with\n # no traceback\n checkmode = False\n _terminal = Terminal()\n colours = {\n None: str,\n \"error\": _terminal.bold_red,\n \"expected\": _terminal.blue,\n # \"fail\": _terminal.bold_yellow,\n \"fail\": _terminal.bold_magenta,\n \"skip\": str,\n \"success\": _terminal.green,\n \"title\": _terminal.blue,\n \"unexpected\": _terminal.bold_red,\n }\n\n _test_class = None\n\n def __init__(self, stream, descriptions, verbosity):\n super().__init__(stream, descriptions, verbosity)\n self.stream = stream\n self.show_all = verbosity > 1\n self.dots = verbosity == 1\n self.descriptions = descriptions\n\n def getShortDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return self.indent + doc_first_line\n return self.indent + test._testMethodName\n\n def getLongDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return \"\\n\".join((str(test), doc_first_line))\n return str(test)\n\n def getClassDescription(self, test):\n test_class = test.__class__\n doc = test_class.__doc__\n if self.descriptions and doc:\n return doc.split(\"\\n\")[0].strip()\n return strclass(test_class)\n\n def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... 
')\n self.stream.flush()\n\n def printResult(self, short, extended, colour_key=None):\n colour = self.colours[colour_key]\n if self.show_all:\n self.stream.writeln(colour(extended))\n elif self.dots:\n self.stream.write(colour(short))\n self.stream.flush()\n\n def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n\n def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n\n def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n\n def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n\n def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n\n def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n\n def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n\n def printErrorList(self, flavour, errors):\n colour = self.colours[flavour.lower()]\n\n for test, err in errors:\n if self.checkmode and flavour == \"FAIL\":\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {test.shortDescription()}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(str(test))\n if self.show_all:\n self.stream.writeln(self.separator2)\n lines = str(err).split(\"\\n\")\n i = 1\n for line in lines[1:]:\n if line.startswith(\" \"):\n i += 1\n else:\n break\n self.stream.writeln(\n highlight(\n \"\\n\".join(lines[i:]), self.lexer, self.formatter\n )\n )\n else:\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {self.getLongDescription(test)}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(self.separator2)\n self.stream.writeln(highlight(err, self.lexer, self.formatter))\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addError","title":"addError(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addExpectedFailure","title":"addExpectedFailure(self, test, err)
","text":"Called when an expected failure/error occurred.
Source code in ontopy/colortest.py
def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addFailure","title":"addFailure(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addSkip","title":"addSkip(self, test, reason)
","text":"Called when a test is skipped.
Source code in ontopy/colortest.py
def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addSuccess","title":"addSuccess(self, test)
","text":"Called when a test has completed successfully
Source code in ontopy/colortest.py
def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.addUnexpectedSuccess","title":"addUnexpectedSuccess(self, test)
","text":"Called when a test was expected to fail, but succeed.
Source code in ontopy/colortest.py
def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.printErrors","title":"printErrors(self)
","text":"Called by TestRunner after test run
Source code in ontopy/colortest.py
def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestResult.startTest","title":"startTest(self, test)
","text":"Called when the given test is about to be run
Source code in ontopy/colortest.py
def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... ')\n self.stream.flush()\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner","title":" ColourTextTestRunner (TextTestRunner)
","text":"A test runner that uses colour in its output.
Source code in ontopy/colortest.py
class ColourTextTestRunner(\n TextTestRunner\n): # pylint: disable=too-few-public-methods\n \"\"\"A test runner that uses colour in its output.\"\"\"\n\n resultclass = ColourTextTestResult\n
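A short sketch of running an ordinary unittest suite with the coloured runner; the `checkmode` line mirrors how the emmocheck tool switches to the simplified, traceback-free output (the test case itself is hypothetical):

```python
import unittest

from ontopy.colortest import ColourTextTestRunner


class ExampleTests(unittest.TestCase):
    """Hypothetical tests used only to illustrate the coloured output."""

    def test_passes(self):
        self.assertTrue(True)


suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTests)
runner = ColourTextTestRunner(verbosity=2)
runner.resultclass.checkmode = True  # simplified output without tracebacks
runner.run(suite)
```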
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass","title":" resultclass (TestResult)
","text":"A test result class that prints colour formatted text results to a stream.
Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py
Source code in ontopy/colortest.py
class ColourTextTestResult(TestResult):\n \"\"\"\n A test result class that prints colour formatted text results to a stream.\n\n Based on https://github.com/python/cpython/blob/3.3/Lib/unittest/runner.py\n \"\"\"\n\n formatter = formatters.Terminal256Formatter() # pylint: disable=no-member\n lexer = Lexer()\n separator1 = \"=\" * 70\n separator2 = \"-\" * 70\n indent = \" \" * 4\n # if `checkmode` is true, simplified output will be generated with\n # no traceback\n checkmode = False\n _terminal = Terminal()\n colours = {\n None: str,\n \"error\": _terminal.bold_red,\n \"expected\": _terminal.blue,\n # \"fail\": _terminal.bold_yellow,\n \"fail\": _terminal.bold_magenta,\n \"skip\": str,\n \"success\": _terminal.green,\n \"title\": _terminal.blue,\n \"unexpected\": _terminal.bold_red,\n }\n\n _test_class = None\n\n def __init__(self, stream, descriptions, verbosity):\n super().__init__(stream, descriptions, verbosity)\n self.stream = stream\n self.show_all = verbosity > 1\n self.dots = verbosity == 1\n self.descriptions = descriptions\n\n def getShortDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return self.indent + doc_first_line\n return self.indent + test._testMethodName\n\n def getLongDescription(self, test):\n doc_first_line = test.shortDescription()\n if self.descriptions and doc_first_line:\n return \"\\n\".join((str(test), doc_first_line))\n return str(test)\n\n def getClassDescription(self, test):\n test_class = test.__class__\n doc = test_class.__doc__\n if self.descriptions and doc:\n return doc.split(\"\\n\")[0].strip()\n return strclass(test_class)\n\n def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... 
')\n self.stream.flush()\n\n def printResult(self, short, extended, colour_key=None):\n colour = self.colours[colour_key]\n if self.show_all:\n self.stream.writeln(colour(extended))\n elif self.dots:\n self.stream.write(colour(short))\n self.stream.flush()\n\n def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n\n def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n\n def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n\n def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n\n def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n\n def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n\n def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n\n def printErrorList(self, flavour, errors):\n colour = self.colours[flavour.lower()]\n\n for test, err in errors:\n if self.checkmode and flavour == \"FAIL\":\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {test.shortDescription()}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(str(test))\n if self.show_all:\n self.stream.writeln(self.separator2)\n lines = str(err).split(\"\\n\")\n i = 1\n for line in lines[1:]:\n if line.startswith(\" \"):\n i += 1\n else:\n break\n self.stream.writeln(\n highlight(\n \"\\n\".join(lines[i:]), self.lexer, self.formatter\n )\n )\n else:\n self.stream.writeln(self.separator1)\n title = f\"{flavour}: {self.getLongDescription(test)}\"\n self.stream.writeln(colour(title))\n self.stream.writeln(self.separator2)\n self.stream.writeln(highlight(err, self.lexer, self.formatter))\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addError","title":"addError(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addError(self, test, err):\n super().addError(test, err)\n self.printResult(\"E\", \"ERROR\", \"error\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addExpectedFailure","title":"addExpectedFailure(self, test, err)
","text":"Called when an expected failure/error occurred.
Source code in ontopy/colortest.py
def addExpectedFailure(self, test, err):\n super().addExpectedFailure(test, err)\n self.printResult(\"x\", \"expected failure\", \"expected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addFailure","title":"addFailure(self, test, err)
","text":"Called when an error has occurred. 'err' is a tuple of values as returned by sys.exc_info().
Source code in ontopy/colortest.py
def addFailure(self, test, err):\n super().addFailure(test, err)\n self.printResult(\"F\", \"FAIL\", \"fail\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addSkip","title":"addSkip(self, test, reason)
","text":"Called when a test is skipped.
Source code in ontopy/colortest.py
def addSkip(self, test, reason):\n super().addSkip(test, reason)\n if self.checkmode:\n self.printResult(\"s\", \"skipped\", \"skip\")\n else:\n self.printResult(\"s\", f\"skipped {reason!r}\", \"skip\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addSuccess","title":"addSuccess(self, test)
","text":"Called when a test has completed successfully
Source code in ontopy/colortest.py
def addSuccess(self, test):\n super().addSuccess(test)\n self.printResult(\".\", \"ok\", \"success\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.addUnexpectedSuccess","title":"addUnexpectedSuccess(self, test)
","text":"Called when a test was expected to fail, but succeed.
Source code in ontopy/colortest.py
def addUnexpectedSuccess(self, test):\n super().addUnexpectedSuccess(test)\n self.printResult(\"u\", \"unexpected success\", \"unexpected\")\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.printErrors","title":"printErrors(self)
","text":"Called by TestRunner after test run
Source code in ontopy/colortest.py
def printErrors(self):\n if self.dots or self.show_all:\n self.stream.writeln()\n self.printErrorList(\"ERROR\", self.errors)\n self.printErrorList(\"FAIL\", self.failures)\n
"},{"location":"api_reference/ontopy/colortest/#ontopy.colortest.ColourTextTestRunner.resultclass.startTest","title":"startTest(self, test)
","text":"Called when the given test is about to be run
Source code in ontopy/colortest.py
def startTest(self, test):\n super().startTest(test)\n pos = 0\n if self.show_all:\n if self._test_class != test.__class__:\n self._test_class = test.__class__\n title = self.getClassDescription(test)\n self.stream.writeln(self.colours[\"title\"](title))\n descr = self.getShortDescription(test)\n self.stream.write(descr)\n pos += len(descr)\n self.stream.write(\" \" * (70 - pos))\n # self.stream.write(' ' * (self._terminal.width - 10 - pos))\n # self.stream.write(' ... ')\n self.stream.flush()\n
"},{"location":"api_reference/ontopy/excelparser/","title":"excelparser","text":"Module from parsing an excelfile and creating an ontology from it.
The Excel file is read by pandas, and the resulting DataFrame should have the column names: prefLabel, altLabel, Elucidation, Comments, Examples, subClassOf, Relations.
Note that correct case is mandatory.
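For illustration, the sketch below builds a pandas DataFrame with the expected column names; the row content is purely hypothetical and only meant to show the layout:

```python
import pandas as pd

concepts = pd.DataFrame(
    {
        "prefLabel": ["Battery", "BatteryCell"],
        "altLabel": ["", ""],
        "Elucidation": [
            "Electrochemical energy storage device.",
            "Single electrochemical cell of a battery.",
        ],
        "Comments": ["", ""],
        "Examples": ["", ""],
        "subClassOf": ["", "Battery"],
        "Relations": ["", ""],
    }
)
```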
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.ExcelError","title":" ExcelError (EMMOntoPyException)
","text":"Raised on errors in Excel file.
Source code in ontopy/excelparser.py
class ExcelError(EMMOntoPyException):\n \"\"\"Raised on errors in Excel file.\"\"\"\n
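For illustration, a sketch of catching this exception around `create_ontology_from_excel()` when the workbook (here a hypothetical `onto.xlsx`) contains invalid definitions and `force` is left at its default `False`:

```python
from ontopy.excelparser import ExcelError, create_ontology_from_excel

try:
    onto, catalog, errors = create_ontology_from_excel("onto.xlsx")
except ExcelError as exc:
    print(f"Invalid definitions in the Excel workbook: {exc}")
```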
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.create_ontology_from_excel","title":"create_ontology_from_excel(excelpath, concept_sheet_name='Concepts', metadata_sheet_name='Metadata', imports_sheet_name='ImportedOntologies', dataproperties_sheet_name='DataProperties', objectproperties_sheet_name='ObjectProperties', annotationproperties_sheet_name='AnnotationProperties', base_iri='http://emmo.info/emmo/domain/onto#', base_iri_from_metadata=True, imports=None, catalog=None, force=False, input_ontology=None)
","text":"Creates an ontology from an Excel-file.
Parameters:
Name Type Description Default
excelpath
str
Path to Excel workbook.
required
concept_sheet_name
str
Name of sheet where concepts are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel','altLabel', 'Elucidation', 'Comments', 'Examples', 'subClassOf', 'Relations'. Multiple entries are separated with ';'.
'Concepts'
metadata_sheet_name
str
Name of sheet where metadata are defined. The first row contains column names 'Metadata name' and 'Value'. Supported 'Metadata names' are: 'Ontology IRI', 'Ontology vesion IRI', 'Ontology version Info', 'Title', 'Abstract', 'License', 'Comment', 'Author', 'Contributor'. Multiple entries are separated with a semi-colon (;).
'Metadata'
imports_sheet_name
str
Name of sheet where imported ontologies are defined. Column name is 'Imported ontologies'. Fully resolvable URL or path to imported ontologies provided one per row.
'ImportedOntologies'
dataproperties_sheet_name
str
Name of sheet where data properties are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel','altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf', 'Domain', 'Range', 'dijointWith', 'equivalentTo'.
'DataProperties'
annotationproperties_sheet_name
str
Name of sheet where annotation properties are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel', 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf', 'Domain', 'Range'.
'AnnotationProperties'
objectproperties_sheet_name
str
Name of sheet where object properties are defined. The second row of this sheet should contain column names that are supported. Currently these are 'prefLabel', 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf', 'Domain', 'Range', 'inverseOf', 'dijointWith', 'equivalentTo'.
'ObjectProperties'
base_iri
str
Base IRI of the new ontology.
'http://emmo.info/emmo/domain/onto#'
base_iri_from_metadata
bool
Whether to use base IRI defined from metadata.
True
imports
list
List of imported ontologies.
None
catalog
dict
Imported ontologies with (name, full path) key/value-pairs.
None
force
bool
Forcibly make an ontology by skipping concepts that are erroneously defined or other errors in the excel sheet.
False
input_ontology
Optional[ontopy.ontology.Ontology]
Ontology that should be updated. Default is None, which means that a completely new ontology is generated. If an input_ontology to be updated is provided, the metadata sheet in the excel sheet will not be considered.
None
Returns:
Type Description
A tuple with the created ontology, the associated catalog of ontology names and resolvable paths as a dict, and a dictionary with lists of concepts that raise errors, with the keys described in the docstring below.
Source code in ontopy/excelparser.py
def create_ontology_from_excel( # pylint: disable=too-many-arguments, too-many-locals\n excelpath: str,\n concept_sheet_name: str = \"Concepts\",\n metadata_sheet_name: str = \"Metadata\",\n imports_sheet_name: str = \"ImportedOntologies\",\n dataproperties_sheet_name: str = \"DataProperties\",\n objectproperties_sheet_name: str = \"ObjectProperties\",\n annotationproperties_sheet_name: str = \"AnnotationProperties\",\n base_iri: str = \"http://emmo.info/emmo/domain/onto#\",\n base_iri_from_metadata: bool = True,\n imports: list = None,\n catalog: dict = None,\n force: bool = False,\n input_ontology: Union[ontopy.ontology.Ontology, None] = None,\n) -> Tuple[ontopy.ontology.Ontology, dict, dict]:\n \"\"\"\n Creates an ontology from an Excel-file.\n\n Arguments:\n excelpath: Path to Excel workbook.\n concept_sheet_name: Name of sheet where concepts are defined.\n The second row of this sheet should contain column names that are\n supported. Currently these are 'prefLabel','altLabel',\n 'Elucidation', 'Comments', 'Examples', 'subClassOf', 'Relations'.\n Multiple entries are separated with ';'.\n metadata_sheet_name: Name of sheet where metadata are defined.\n The first row contains column names 'Metadata name' and 'Value'\n Supported 'Metadata names' are: 'Ontology IRI',\n 'Ontology vesion IRI', 'Ontology version Info', 'Title',\n 'Abstract', 'License', 'Comment', 'Author', 'Contributor'.\n Multiple entries are separated with a semi-colon (`;`).\n imports_sheet_name: Name of sheet where imported ontologies are\n defined.\n Column name is 'Imported ontologies'.\n Fully resolvable URL or path to imported ontologies provided one\n per row.\n dataproperties_sheet_name: Name of sheet where data properties are\n defined. The second row of this sheet should contain column names\n that are supported. Currently these are 'prefLabel','altLabel',\n 'Elucidation', 'Comments', 'Examples', 'subPropertyOf',\n 'Domain', 'Range', 'dijointWith', 'equivalentTo'.\n annotationproperties_sheet_name: Name of sheet where annotation\n properties are defined. The second row of this sheet should contain\n column names that are supported. Currently these are 'prefLabel',\n 'altLabel', 'Elucidation', 'Comments', 'Examples', 'subPropertyOf',\n 'Domain', 'Range'.\n objectproperties_sheet_name: Name of sheet where object properties are\n defined.The second row of this sheet should contain column names\n that are supported. 
Currently these are 'prefLabel','altLabel',\n 'Elucidation', 'Comments', 'Examples', 'subPropertyOf',\n 'Domain', 'Range', 'inverseOf', 'dijointWith', 'equivalentTo'.\n base_iri: Base IRI of the new ontology.\n base_iri_from_metadata: Whether to use base IRI defined from metadata.\n imports: List of imported ontologies.\n catalog: Imported ontologies with (name, full path) key/value-pairs.\n force: Forcibly make an ontology by skipping concepts\n that are erroneously defined or other errors in the excel sheet.\n input_ontology: Ontology that should be updated.\n Default is None,\n which means that a completely new ontology is generated.\n If an input_ontology to be updated is provided,\n the metadata sheet in the excel sheet will not be considered.\n\n\n Returns:\n A tuple with the:\n\n * created ontology\n * associated catalog of ontology names and resolvable path as dict\n * a dictionary with lists of concepts that raise errors, with the\n following keys:\n\n - \"already_defined\": These are concepts (classes)\n that are already in the\n ontology, because they were already added in a\n previous line of the excelfile/pandas dataframe, or because\n it is already defined in an imported ontology with the same\n base_iri as the newly created ontology.\n - \"in_imported_ontologies\": Concepts (classes)\n that are defined in the\n excel, but already exist in the imported ontologies.\n - \"wrongly_defined\": Concepts (classes) that are given an\n invalid prefLabel (e.g. with a space in the name).\n - \"missing_subClassOf\": Concepts (classes) that are missing\n parents. These concepts are added directly under owl:Thing.\n - \"invalid_subClassOf\": Concepts (classes) with invalidly\n defined parents.\n These concepts are added directly under owl:Thing.\n - \"nonadded_concepts\": List of all concepts (classes) that are\n not added,\n either because the prefLabel is invalid, or because the\n concept has already been added once or already exists in an\n imported ontology.\n - \"obj_prop_already_defined\": Object properties that are already\n defined in the ontology.\n - \"obj_prop_in_imported_ontologies\": Object properties that are\n defined in the excel, but already exist in the imported\n ontologies.\n - \"obj_prop_wrongly_defined\": Object properties that are given\n an invalid prefLabel (e.g. with a space in the name).\n - \"obj_prop_missing_subPropertyOf\": Object properties that are\n missing parents.\n - \"obj_prop_invalid_subPropertyOf\": Object properties with\n invalidly defined parents.\n - \"obj_prop_nonadded_entities\": List of all object properties\n that are not added, either because the prefLabel is invalid,\n or because the concept has already been added once or\n already exists in an imported ontology.\n - \"obj_prop_errors_in_properties\": Object properties with\n invalidly defined properties.\n - \"obj_prop_errors_in_range\": Object properties with invalidly\n defined range.\n - \"obj_prop_errors_in_domain\": Object properties with invalidly\n defined domain.\n - \"annot_prop_already_defined\": Annotation properties that are\n already defined in the ontology.\n - \"annot_prop_in_imported_ontologies\": Annotation properties\n that\n are defined in the excel, but already exist in the imported\n ontologies.\n - \"annot_prop_wrongly_defined\": Annotation properties that are\n given an invalid prefLabel (e.g. 
with a space in the name).\n - \"annot_prop_missing_subPropertyOf\": Annotation properties that\n are missing parents.\n - \"annot_prop_invalid_subPropertyOf\": Annotation properties with\n invalidly defined parents.\n - \"annot_prop_nonadded_entities\": List of all annotation\n properties that are not added, either because the prefLabel\n is invalid, or because the concept has already been added\n once or already exists in an imported ontology.\n - \"annot_prop_errors_in_properties\": Annotation properties with\n invalidly defined properties.\n - \"data_prop_already_defined\": Data properties that are already\n defined in the ontology.\n - \"data_prop_in_imported_ontologies\": Data properties that are\n defined in the excel, but already exist in the imported\n ontologies.\n - \"data_prop_wrongly_defined\": Data properties that are given\n an invalid prefLabel (e.g. with a space in the name).\n - \"data_prop_missing_subPropertyOf\": Data properties that are\n missing parents.\n - \"data_prop_invalid_subPropertyOf\": Data properties with\n invalidly defined parents.\n - \"data_prop_nonadded_entities\": List of all data properties\n that are not added, either because the prefLabel is invalid,\n or because the concept has already been added once or\n already exists in an imported ontology.\n - \"data_prop_errors_in_properties\": Data properties with\n invalidly defined properties.\n - \"data_prop_errors_in_range\": Data properties with invalidly\n defined range.\n - \"data_prop_errors_in_domain\": Data properties with invalidly\n defined domain.\n\n \"\"\"\n web_protocol = \"http://\", \"https://\", \"ftp://\"\n\n def _relative_to_absolute_paths(path):\n if isinstance(path, str):\n if not path.startswith(web_protocol):\n path = os.path.dirname(excelpath) + \"/\" + str(path)\n return path\n\n try:\n imports = pd.read_excel(\n excelpath, sheet_name=imports_sheet_name, skiprows=[1]\n )\n except ValueError:\n imports = pd.DataFrame()\n else:\n # Strip leading and trailing white spaces in paths\n imports.replace(r\"^\\s+\", \"\", regex=True).replace(\n r\"\\s+$\", \"\", regex=True\n )\n # Set empty strings to nan\n imports = imports.replace(r\"^\\s*$\", np.nan, regex=True)\n if \"Imported ontologies\" in imports.columns:\n imports[\"Imported ontologies\"] = imports[\n \"Imported ontologies\"\n ].apply(_relative_to_absolute_paths)\n\n # Read datafile TODO: Some magic to identify the header row\n conceptdata = pd.read_excel(\n excelpath, sheet_name=concept_sheet_name, skiprows=[0, 2]\n )\n try:\n objectproperties = pd.read_excel(\n excelpath, sheet_name=objectproperties_sheet_name, skiprows=[0, 2]\n )\n if \"prefLabel\" not in objectproperties.columns:\n warnings.warn(\n \"The 'prefLabel' column is missing in \"\n f\"{objectproperties_sheet_name}. \"\n \"New object properties will not be added to the ontology.\"\n )\n objectproperties = None\n except ValueError:\n warnings.warn(\n f\"No sheet named {objectproperties_sheet_name} found \"\n f\"in {excelpath}. \"\n \"New object properties will not be added to the ontology.\"\n )\n objectproperties = None\n try:\n annotationproperties = pd.read_excel(\n excelpath,\n sheet_name=annotationproperties_sheet_name,\n skiprows=[0, 2],\n )\n if \"prefLabel\" not in annotationproperties.columns:\n warnings.warn(\n \"The 'prefLabel' column is missing in \"\n f\"{annotationproperties_sheet_name}. 
\"\n \"New annotation properties will not be added to the ontology.\"\n )\n annotationproperties = None\n except ValueError:\n warnings.warn(\n f\"No sheet named {annotationproperties_sheet_name} \"\n f\"found in {excelpath}. \"\n \"New annotation properties will not be added to the ontology.\"\n )\n annotationproperties = None\n\n try:\n dataproperties = pd.read_excel(\n excelpath, sheet_name=dataproperties_sheet_name, skiprows=[0, 2]\n )\n if \"prefLabel\" not in dataproperties.columns:\n warnings.warn(\n \"The 'prefLabel' column is missing in \"\n f\"{dataproperties_sheet_name}. \"\n \"New data properties will not be added to the ontology.\"\n )\n dataproperties = None\n except ValueError:\n warnings.warn(\n f\"No sheet named {dataproperties_sheet_name} found in {excelpath}. \"\n \"New data properties will not be added to the ontology.\"\n )\n dataproperties = None\n\n metadata = pd.read_excel(excelpath, sheet_name=metadata_sheet_name)\n return create_ontology_from_pandas(\n data=conceptdata,\n objectproperties=objectproperties,\n dataproperties=dataproperties,\n annotationproperties=annotationproperties,\n metadata=metadata,\n imports=imports,\n base_iri=base_iri,\n base_iri_from_metadata=base_iri_from_metadata,\n catalog=catalog,\n force=force,\n input_ontology=input_ontology,\n )\n
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.create_ontology_from_pandas","title":"create_ontology_from_pandas(data, objectproperties, annotationproperties, dataproperties, metadata, imports, base_iri='http://emmo.info/emmo/domain/onto#', base_iri_from_metadata=True, catalog=None, force=False, input_ontology=None)
","text":"Create an ontology from a pandas DataFrame.
Check 'create_ontology_from_excel' for complete documentation.
Source code in ontopy/excelparser.py
def create_ontology_from_pandas( # pylint:disable=too-many-locals,too-many-branches,too-many-statements,too-many-arguments\n data: pd.DataFrame,\n objectproperties: pd.DataFrame,\n annotationproperties: pd.DataFrame,\n dataproperties: pd.DataFrame,\n metadata: pd.DataFrame,\n imports: pd.DataFrame,\n base_iri: str = \"http://emmo.info/emmo/domain/onto#\",\n base_iri_from_metadata: bool = True,\n catalog: dict = None,\n force: bool = False,\n input_ontology: Union[ontopy.ontology.Ontology, None] = None,\n) -> Tuple[ontopy.ontology.Ontology, dict]:\n \"\"\"\n Create an ontology from a pandas DataFrame.\n\n Check 'create_ontology_from_excel' for complete documentation.\n \"\"\"\n # Get ontology to which new concepts should be added\n if input_ontology:\n onto = input_ontology\n catalog = {}\n else: # Create new ontology\n onto, catalog = get_metadata_from_dataframe(\n metadata, base_iri, imports=imports\n )\n\n # Set given or default base_iri if base_iri_from_metadata is False.\n if not base_iri_from_metadata:\n onto.base_iri = base_iri\n # onto.sync_python_names()\n # prefLabel, label, and altLabel\n # are default label annotations\n onto.set_default_label_annotations()\n # Add object properties\n if objectproperties is not None:\n objectproperties = _clean_dataframe(objectproperties)\n (\n onto,\n objectproperties_with_errors,\n added_objprop_indices,\n ) = _add_entities(\n onto=onto,\n data=objectproperties,\n entitytype=owlready2.ObjectPropertyClass,\n force=force,\n )\n\n if annotationproperties is not None:\n annotationproperties = _clean_dataframe(annotationproperties)\n (\n onto,\n annotationproperties_with_errors,\n added_annotprop_indices,\n ) = _add_entities(\n onto=onto,\n data=annotationproperties,\n entitytype=owlready2.AnnotationPropertyClass,\n force=force,\n )\n\n if dataproperties is not None:\n dataproperties = _clean_dataframe(dataproperties)\n (\n onto,\n dataproperties_with_errors,\n added_dataprop_indices,\n ) = _add_entities(\n onto=onto,\n data=dataproperties,\n entitytype=owlready2.DataPropertyClass,\n force=force,\n )\n onto.sync_attributes(\n name_policy=\"uuid\", name_prefix=\"EMMO_\", class_docstring=\"elucidation\"\n )\n # Clean up data frame with new concepts\n data = _clean_dataframe(data)\n # Add entities\n onto, entities_with_errors, added_concept_indices = _add_entities(\n onto=onto, data=data, entitytype=owlready2.ThingClass, force=force\n )\n\n # Add entity properties in a second loop\n for index in added_concept_indices:\n row = data.loc[index]\n properties = row[\"Relations\"]\n if properties == \"nan\":\n properties = None\n if isinstance(properties, str):\n try:\n entity = onto.get_by_label(row[\"prefLabel\"].strip())\n except NoSuchLabelError:\n pass\n props = properties.split(\";\")\n for prop in props:\n try:\n entity.is_a.append(evaluate(onto, prop.strip()))\n except pyparsing.ParseException as exc:\n warnings.warn(\n # This is currently not tested\n f\"Error in Property assignment for: '{entity}'. \"\n f\"Property to be Evaluated: '{prop}'. \"\n f\"{exc}\"\n )\n entities_with_errors[\"errors_in_properties\"].append(\n entity.name\n )\n except NoSuchLabelError as exc:\n msg = (\n f\"Error in Property assignment for: {entity}. \"\n f\"Property to be Evaluated: {prop}. 
\"\n f\"{exc}\"\n )\n if force is True:\n warnings.warn(msg)\n entities_with_errors[\"errors_in_properties\"].append(\n entity.name\n )\n else:\n raise ExcelError(msg) from exc\n\n # Add range and domain for object properties\n if objectproperties is not None:\n onto, objectproperties_with_errors = _add_range_domain(\n onto=onto,\n properties=objectproperties,\n added_prop_indices=added_objprop_indices,\n properties_with_errors=objectproperties_with_errors,\n force=force,\n )\n for key, value in objectproperties_with_errors.items():\n entities_with_errors[\"obj_prop_\" + key] = value\n # Add range and domain for annotation properties\n if annotationproperties is not None:\n onto, annotationproperties_with_errors = _add_range_domain(\n onto=onto,\n properties=annotationproperties,\n added_prop_indices=added_annotprop_indices,\n properties_with_errors=annotationproperties_with_errors,\n force=force,\n )\n for key, value in annotationproperties_with_errors.items():\n entities_with_errors[\"annot_prop_\" + key] = value\n\n # Add range and domain for data properties\n if dataproperties is not None:\n onto, dataproperties_with_errors = _add_range_domain(\n onto=onto,\n properties=dataproperties,\n added_prop_indices=added_dataprop_indices,\n properties_with_errors=dataproperties_with_errors,\n force=force,\n )\n for key, value in dataproperties_with_errors.items():\n entities_with_errors[\"data_prop_\" + key] = value\n\n # Synchronise Python attributes to ontology\n onto.sync_attributes(\n name_policy=\"uuid\", name_prefix=\"EMMO_\", class_docstring=\"elucidation\"\n )\n onto.dir_label = False\n entities_with_errors = {\n key: set(value) for key, value in entities_with_errors.items()\n }\n return onto, catalog, entities_with_errors\n
"},{"location":"api_reference/ontopy/excelparser/#ontopy.excelparser.get_metadata_from_dataframe","title":"get_metadata_from_dataframe(metadata, base_iri, base_iri_from_metadata=True, imports=None, catalog=None)
","text":"Create ontology with metadata from pd.DataFrame
Source code in ontopy/excelparser.py
def get_metadata_from_dataframe( # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n metadata: pd.DataFrame,\n base_iri: str,\n base_iri_from_metadata: bool = True,\n imports: pd.DataFrame = None,\n catalog: dict = None,\n) -> Tuple[ontopy.ontology.Ontology, dict]:\n \"\"\"Create ontology with metadata from pd.DataFrame\"\"\"\n\n # base_iri from metadata if it exists and base_iri_from_metadata\n if base_iri_from_metadata:\n try:\n base_iris = _parse_literal(metadata, \"Ontology IRI\", metadata=True)\n if len(base_iris) > 1:\n warnings.warn(\n \"More than one Ontology IRI given. The first was chosen.\"\n )\n base_iri = base_iris[0] + \"#\"\n except (TypeError, ValueError, AttributeError, IndexError):\n pass\n\n # Create new ontology\n onto = get_ontology(base_iri)\n\n # Add imported ontologies\n catalog = {} if catalog is None else catalog\n locations = set()\n for _, row in imports.iterrows():\n # for location in imports:\n location = row[\"Imported ontologies\"]\n if not pd.isna(location) and location not in locations:\n imported = onto.world.get_ontology(location).load()\n onto.imported_ontologies.append(imported)\n catalog[imported.base_iri.rstrip(\"#/\")] = location\n try:\n cat = read_catalog(location.rsplit(\"/\", 1)[0])\n catalog.update(cat)\n except ReadCatalogError:\n warnings.warn(f\"Catalog for {imported} not found.\")\n locations.add(location)\n # set defined prefix\n if not pd.isna(row[\"prefix\"]):\n # set prefix for all ontologies with same 'base_iri_root'\n if not pd.isna(row[\"base_iri_root\"]):\n onto.set_common_prefix(\n iri_base=row[\"base_iri_root\"], prefix=row[\"prefix\"]\n )\n # If base_root not given, set prefix only to top ontology\n else:\n imported.prefix = row[\"prefix\"]\n\n with onto:\n # Add title\n try:\n _add_literal(\n metadata,\n onto.metadata.title,\n \"Title\",\n metadata=True,\n only_one=True,\n )\n except AttributeError:\n pass\n\n # Add license\n try:\n _add_literal(\n metadata, onto.metadata.license, \"License\", metadata=True\n )\n except AttributeError:\n pass\n\n # Add authors/creators\n try:\n _add_literal(\n metadata, onto.metadata.creator, \"Author\", metadata=True\n )\n except AttributeError:\n pass\n\n # Add contributors\n try:\n _add_literal(\n metadata,\n onto.metadata.contributor,\n \"Contributor\",\n metadata=True,\n )\n except AttributeError:\n pass\n\n # Add versionInfo\n try:\n _add_literal(\n metadata,\n onto.metadata.versionInfo,\n \"Ontology version Info\",\n metadata=True,\n only_one=True,\n )\n except AttributeError:\n pass\n return onto, catalog\n
"},{"location":"api_reference/ontopy/graph/","title":"graph","text":"A module for visualising ontologies using graphviz.
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph","title":" OntoGraph
","text":"Class for visualising an ontology.
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph--parameters","title":"Parameters","text":"ontology : ontopy.Ontology instance Ontology to visualize. root : None | graph.ALL | string | owlready2.ThingClass instance Name or owlready2 entity of root node to plot subgraph below. If root
is graph.ALL
, all classes will be included in the subgraph. leaves : None | sequence A sequence of leaf node names for generating sub-graphs. entities : None | sequence A sequence of entities to add to the graph. relations : \"all\" | str | None | sequence Sequence of relations to visualise. If \"all\", means to include all relations. style : None | dict | \"default\" A dict mapping the name of the different graphical elements to dicts of dot graph attributes. Supported graphical elements include: - graphtype : \"Digraph\" | \"Graph\" - graph : graph attributes (G) - class : nodes for classes (N) - root : additional attributes for root nodes (N) - leaf : additional attributes for leaf nodes (N) - defined_class : nodes for defined classes (N) - class_construct : nodes for class constructs (N) - individual : nodes for invididuals (N) - object_property : nodes for object properties (N) - data_property : nodes for data properties (N) - annotation_property : nodes for annotation properties (N) - added_node : nodes added because addnodes
is true (N) - isA : edges for isA relations (E) - not : edges for not class constructs (E) - equivalent_to : edges for equivalent_to relations (E) - disjoint_with : edges for disjoint_with relations (E) - inverse_of : edges for inverse_of relations (E) - default_relation : default edges relations and restrictions (E) - relations : dict of styles for different relations (E) - inverse : default edges for inverse relations (E) - default_dataprop : default edges for data properties (E) - nodes : attribute for individual nodes (N) - edges : attribute for individual edges (E) If style is None or \"default\", the default style is used. See https://www.graphviz.org/doc/info/attrs.html edgelabels : None | bool | dict Whether to add labels to the edges of the generated graph. It is also possible to provide a dict mapping the full labels (with cardinality stripped off for restrictions) to some abbreviations. addnodes : bool Whether to add missing target nodes in relations. addconstructs : bool Whether to add nodes representing class constructs. included_namespaces : sequence In combination with root
, only include classes with one of the listed namespaces. If empty (the default), nothing is excluded. included_ontologies : sequence In combination with root
, only include classes defined in one of the listed ontologies. If empty (default), nothing is excluded. parents : int Include parents
levels of parents. excluded_nodes : None | sequence Sequence of labels of nodes to exclude. graph : None | pydot.Dot instance Graphviz Digraph object to plot into. If None, a new graph object is created using the keyword arguments. imported : bool Whether to include imported classes if entities
is None. kwargs : Passed to graphviz.Digraph.
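A minimal usage sketch, assuming EMMO can be loaded with `get_emmo()` and that the graph's `save()` method is used to write the rendered figure (filename and settings below are illustrative):

```python
from emmopy import get_emmo
from ontopy.graph import OntoGraph

emmo = get_emmo()

# Sub-graph rooted at 'Atom', also showing two generations of parents.
graph = OntoGraph(emmo, root=emmo.Atom, relations="isA", addnodes=True, parents=2)
graph.add_legend()
graph.save("atom.svg")
```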
Source code in ontopy/graph.py
class OntoGraph: # pylint: disable=too-many-instance-attributes\n \"\"\"Class for visualising an ontology.\n\n Parameters\n ----------\n ontology : ontopy.Ontology instance\n Ontology to visualize.\n root : None | graph.ALL | string | owlready2.ThingClass instance\n Name or owlready2 entity of root node to plot subgraph\n below. If `root` is `graph.ALL`, all classes will be included\n in the subgraph.\n leaves : None | sequence\n A sequence of leaf node names for generating sub-graphs.\n entities : None | sequence\n A sequence of entities to add to the graph.\n relations : \"all\" | str | None | sequence\n Sequence of relations to visualise. If \"all\", means to include\n all relations.\n style : None | dict | \"default\"\n A dict mapping the name of the different graphical elements\n to dicts of dot graph attributes. Supported graphical elements\n include:\n - graphtype : \"Digraph\" | \"Graph\"\n - graph : graph attributes (G)\n - class : nodes for classes (N)\n - root : additional attributes for root nodes (N)\n - leaf : additional attributes for leaf nodes (N)\n - defined_class : nodes for defined classes (N)\n - class_construct : nodes for class constructs (N)\n - individual : nodes for invididuals (N)\n - object_property : nodes for object properties (N)\n - data_property : nodes for data properties (N)\n - annotation_property : nodes for annotation properties (N)\n - added_node : nodes added because `addnodes` is true (N)\n - isA : edges for isA relations (E)\n - not : edges for not class constructs (E)\n - equivalent_to : edges for equivalent_to relations (E)\n - disjoint_with : edges for disjoint_with relations (E)\n - inverse_of : edges for inverse_of relations (E)\n - default_relation : default edges relations and restrictions (E)\n - relations : dict of styles for different relations (E)\n - inverse : default edges for inverse relations (E)\n - default_dataprop : default edges for data properties (E)\n - nodes : attribute for individual nodes (N)\n - edges : attribute for individual edges (E)\n If style is None or \"default\", the default style is used.\n See https://www.graphviz.org/doc/info/attrs.html\n edgelabels : None | bool | dict\n Whether to add labels to the edges of the generated graph.\n It is also possible to provide a dict mapping the\n full labels (with cardinality stripped off for restrictions)\n to some abbreviations.\n addnodes : bool\n Whether to add missing target nodes in relations.\n addconstructs : bool\n Whether to add nodes representing class constructs.\n included_namespaces : sequence\n In combination with `root`, only include classes with one of\n the listed namespaces. If empty (the default), nothing is\n excluded.\n included_ontologies : sequence\n In combination with `root`, only include classes defined in\n one of the listed ontologies. If empty (default), nothing is\n excluded.\n parents : int\n Include `parents` levels of parents.\n excluded_nodes : None | sequence\n Sequence of labels of nodes to exclude.\n graph : None | pydot.Dot instance\n Graphviz Digraph object to plot into. 
If None, a new graph object\n is created using the keyword arguments.\n imported : bool\n Whether to include imported classes if `entities` is None.\n kwargs :\n Passed to graphviz.Digraph.\n \"\"\"\n\n def __init__( # pylint: disable=too-many-arguments,too-many-locals\n self,\n ontology,\n root=None,\n leaves=None,\n entities=None,\n relations=\"isA\",\n style=None,\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n included_namespaces=(),\n included_ontologies=(),\n parents=0,\n excluded_nodes=None,\n graph=None,\n imported=False,\n **kwargs,\n ):\n if style is None or style == \"default\":\n style = _default_style\n\n if graph is None:\n graphtype = style.get(\"graphtype\", \"Digraph\")\n dotcls = getattr(graphviz, graphtype)\n graph_attr = kwargs.pop(\"graph_attr\", {})\n for key, value in style.get(\"graph\", {}).items():\n graph_attr.setdefault(key, value)\n self.dot = dotcls(graph_attr=graph_attr, **kwargs)\n self.nodes = set()\n self.edges = set()\n else:\n if ontology != graph.ontology:\n raise ValueError(\n \"the same ontology must be used when extending a graph\"\n )\n self.dot = graph.dot.copy()\n self.nodes = graph.nodes.copy()\n self.edges = graph.edges.copy()\n\n self.ontology = ontology\n self.relations = set(\n [relations] if isinstance(relations, str) else relations\n )\n self.style = style\n self.edgelabels = edgelabels\n self.addnodes = addnodes\n self.addconstructs = addconstructs\n self.excluded_nodes = set(excluded_nodes) if excluded_nodes else set()\n self.imported = imported\n\n if root == ALL:\n self.add_entities(\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n )\n elif root:\n self.add_branch(\n root,\n leaves,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n if parents:\n self.add_parents(\n root,\n levels=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n )\n\n if entities:\n self.add_entities(\n entities=entities,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n )\n\n def add_entities( # pylint: disable=too-many-arguments\n self,\n entities=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n nodeattrs=None,\n **attrs,\n ):\n \"\"\"Adds a sequence of entities to the graph. If `entities` is None,\n all classes are added to the graph.\n\n `nodeattrs` is a dict mapping node names to are attributes for\n dedicated nodes.\n \"\"\"\n if entities is None:\n entities = self.ontology.classes(imported=self.imported)\n self.add_nodes(entities, nodeattrs=nodeattrs, **attrs)\n self.add_edges(\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n\n def add_branch( # pylint: disable=too-many-arguments,too-many-locals\n self,\n root,\n leaves=None,\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n included_namespaces=(),\n included_ontologies=(),\n include_parents=\"closest\",\n **attrs,\n ):\n \"\"\"Adds branch under `root` ending at any entity included in the\n sequence `leaves`. 
If `include_leaves` is true, leaf classes are\n also included.\"\"\"\n if leaves is None:\n leaves = ()\n classes = self.ontology.get_branch(\n root=root,\n leaves=leaves,\n include_leaves=include_leaves,\n strict_leaves=strict_leaves,\n exclude=exclude,\n )\n\n classes = filter_classes(\n classes,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n nodeattrs = {}\n nodeattrs[get_label(root)] = self.style.get(\"root\", {})\n for leaf in leaves:\n nodeattrs[get_label(leaf)] = self.style.get(\"leaf\", {})\n\n self.add_entities(\n entities=classes,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n closest_ancestors = False\n ancestor_generations = None\n if include_parents == \"closest\":\n closest_ancestors = True\n elif isinstance(include_parents, int):\n ancestor_generations = include_parents\n parents = self.ontology.get_ancestors(\n classes,\n closest=closest_ancestors,\n generations=ancestor_generations,\n strict=True,\n )\n if parents:\n for parent in parents:\n nodeattrs[get_label(parent)] = self.style.get(\"parent_node\", {})\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n\n def add_parents( # pylint: disable=too-many-arguments\n self,\n name,\n levels=1,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n **attrs,\n ):\n \"\"\"Add `levels` levels of strict parents of entity `name`.\"\"\"\n\n def addparents(entity, nodes, parents):\n if nodes > 0:\n for parent in entity.get_parents(strict=True):\n parents.add(parent)\n addparents(parent, nodes - 1, parents)\n\n entity = self.ontology[name] if isinstance(name, str) else name\n parents = set()\n addparents(entity, levels, parents)\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n\n def add_node(self, name, nodeattrs=None, **attrs):\n \"\"\"Add node with given name. `attrs` are graphviz node attributes.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes.union(self.excluded_nodes):\n kwargs = self.get_node_attrs(\n entity, nodeattrs=nodeattrs, attrs=attrs\n )\n if hasattr(entity, \"iri\"):\n kwargs.setdefault(\"URL\", entity.iri)\n self.dot.node(label, label=label, **kwargs)\n self.nodes.add(label)\n\n def add_nodes(self, names, nodeattrs, **attrs):\n \"\"\"Add nodes with given names. 
`attrs` are graphviz node attributes.\"\"\"\n for name in names:\n self.add_node(name, nodeattrs=nodeattrs, **attrs)\n\n def add_edge(self, subject, predicate, obj, edgelabel=None, **attrs):\n \"\"\"Add edge corresponding for ``(subject, predicate, object)``\n triplet.\"\"\"\n subject = subject if isinstance(subject, str) else get_label(subject)\n predicate = (\n predicate if isinstance(predicate, str) else get_label(predicate)\n )\n obj = obj if isinstance(obj, str) else get_label(obj)\n if subject in self.excluded_nodes or obj in self.excluded_nodes:\n return\n if not isinstance(subject, str) or not isinstance(obj, str):\n raise TypeError(\"`subject` and `object` must be strings\")\n if subject not in self.nodes:\n raise RuntimeError(f'`subject` \"{subject}\" must have been added')\n if obj not in self.nodes:\n raise RuntimeError(f'`object` \"{obj}\" must have been added')\n key = (subject, predicate, obj)\n if key not in self.edges:\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n if (edgelabel is None) and (\n (predicate in rels) or (predicate == \"isA\")\n ):\n edgelabel = self.edgelabels\n label = None\n if edgelabel is None:\n tokens = predicate.split()\n if len(tokens) == 2 and tokens[1] in (\"some\", \"only\"):\n label = f\"{tokens[0]} {tokens[1]}\"\n elif len(tokens) == 3 and tokens[1] in (\n \"exactly\",\n \"min\",\n \"max\",\n ):\n label = f\"{tokens[0]} {tokens[1]} {tokens[2]}\"\n elif isinstance(edgelabel, str):\n label = edgelabel\n elif isinstance(edgelabel, dict):\n label = edgelabel.get(predicate, predicate)\n elif edgelabel:\n label = predicate\n kwargs = self.get_edge_attrs(predicate, attrs=attrs)\n self.dot.edge(subject, obj, label=label, **kwargs)\n self.edges.add(key)\n\n def add_source_edges( # pylint: disable=too-many-arguments,too-many-branches\n self,\n source,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n ):\n \"\"\"Adds all relations originating from entity `source` who's type\n are listed in `relations`.\"\"\"\n if relations is None:\n relations = self.relations\n elif isinstance(relations, str):\n relations = set([relations])\n else:\n relations = set(relations)\n\n edgelabels = self.edgelabels if edgelabels is None else edgelabels\n addconstructs = (\n self.addconstructs if addconstructs is None else addconstructs\n )\n\n entity = self.ontology[source] if isinstance(source, str) else source\n label = get_label(entity)\n for relation in entity.is_a:\n # isA\n if isinstance(\n relation, (owlready2.ThingClass, owlready2.ObjectPropertyClass)\n ):\n if \"all\" in relations or \"isA\" in relations:\n rlabel = get_label(relation)\n # FIXME - we actually want to include individuals...\n if isinstance(entity, owlready2.Thing):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n self.add_edge(\n subject=label,\n predicate=\"isA\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n\n # restriction\n elif isinstance(relation, owlready2.Restriction):\n rname = get_label(relation.property)\n if \"all\" in relations or rname in relations:\n rlabel = f\"{rname} {typenames[relation.type]}\"\n if isinstance(relation.value, owlready2.ThingClass):\n obj = get_label(relation.value)\n if not self.add_missing_node(relation.value, addnodes):\n continue\n elif (\n isinstance(relation.value, owlready2.ClassConstruct)\n and self.addconstructs\n ):\n obj = 
self.add_class_construct(relation.value)\n else:\n continue\n pred = asstring(\n relation, exclude_object=True, ontology=self.ontology\n )\n self.add_edge(\n label, pred, obj, edgelabel=edgelabels, **attrs\n )\n\n # inverse\n if isinstance(relation, owlready2.Inverse):\n if \"all\" in relations or \"inverse\" in relations:\n rlabel = get_label(relation)\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n self.add_edge(\n subject=label,\n predicate=\"inverse\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n\n def add_edges( # pylint: disable=too-many-arguments\n self,\n sources=None,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n ):\n \"\"\"Adds all relations originating from entities `sources` who's type\n are listed in `relations`. If `sources` is None, edges are added\n between all current nodes.\"\"\"\n if sources is None:\n sources = self.nodes\n for source in sources.copy():\n self.add_source_edges(\n source,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n\n def add_missing_node(self, name, addnodes=None):\n \"\"\"Checks if `name` corresponds to a missing node and add it if\n `addnodes` is true.\n\n Returns true if the node exists or is added, false otherwise.\"\"\"\n addnodes = self.addnodes if addnodes is None else addnodes\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes:\n if addnodes:\n self.add_node(entity, **self.style.get(\"added_node\", {}))\n else:\n return False\n return True\n\n def add_class_construct(self, construct):\n \"\"\"Adds class construct and return its label.\"\"\"\n self.add_node(construct, **self.style.get(\"class_construct\", {}))\n label = get_label(construct)\n if isinstance(construct, owlready2.Or):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(get_label(cls), \"isA\", label)\n elif isinstance(construct, owlready2.And):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(label, \"isA\", get_label(cls))\n elif isinstance(construct, owlready2.Not):\n clslabel = get_label(construct.Class)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(construct.Class)\n if clslabel in self.nodes:\n self.add_edge(clslabel, \"not\", label)\n # Neither and nor inverse constructs are\n return label\n\n def get_node_attrs(self, name, nodeattrs, attrs):\n \"\"\"Returns attributes for node or edge `name`. 
`attrs` overrides\n the default style.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n # class\n if isinstance(entity, owlready2.ThingClass):\n if entity.is_defined:\n kwargs = self.style.get(\"defined_class\", {})\n else:\n kwargs = self.style.get(\"class\", {})\n # class construct\n elif isinstance(entity, owlready2.ClassConstruct):\n kwargs = self.style.get(\"class_construct\", {})\n # individual\n elif isinstance(entity, owlready2.Thing):\n kwargs = self.style.get(\"individual\", {})\n # object property\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n kwargs = self.style.get(\"object_property\", {})\n # data property\n elif isinstance(entity, owlready2.DataPropertyClass):\n kwargs = self.style.get(\"data_property\", {})\n # annotation property\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n kwargs = self.style.get(\"annotation_property\", {})\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs = kwargs.copy()\n kwargs.update(self.style.get(\"nodes\", {}).get(label, {}))\n if nodeattrs:\n kwargs.update(nodeattrs.get(label, {}))\n kwargs.update(attrs)\n return kwargs\n\n def _relation_styles(\n self, entity: ThingClass, relations: dict, rels: set\n ) -> dict:\n \"\"\"Helper function that returns the styles of the relations\n to be used.\n\n Parameters:\n entity: the entity of the parent relation\n relations: relations with default styles\n rels: relations to be considered that have default styles,\n either for the prefLabel or one of the altLabels\n \"\"\"\n for relation in entity.mro():\n if relation in rels:\n if str(get_label(relation)) in relations:\n rattrs = relations[str(get_label(relation))]\n else:\n for alt_label in relation.get_annotations()[\"altLabel\"]:\n rattrs = relations[str(alt_label)]\n\n break\n else:\n warnings.warn(\n f\"Style not defined for relation {get_label(entity)}. \"\n \"Resorting to default style.\"\n )\n rattrs = self.style.get(\"default_relation\", {})\n return rattrs\n\n def get_edge_attrs(self, predicate: str, attrs: dict) -> dict:\n \"\"\"Returns attributes for node or edge `predicate`. 
`attrs` overrides\n the default style.\n\n Parameters:\n predicate: predicate to get attributes for\n attrs: desired attributes to override default\n \"\"\"\n # given type\n types = (\"isA\", \"equivalent_to\", \"disjoint_with\", \"inverse_of\")\n if predicate in types:\n kwargs = self.style.get(predicate, {}).copy()\n else:\n kwargs = {}\n name = predicate.split(None, 1)[0]\n match = re.match(r\"Inverse\\((.*)\\)\", name)\n if match:\n (name,) = match.groups()\n attrs = attrs.copy()\n for key, value in self.style.get(\"inverse\", {}).items():\n attrs.setdefault(key, value)\n if not isinstance(name, str) or name in self.ontology:\n entity = self.ontology[name] if isinstance(name, str) else name\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n rattrs = self._relation_styles(entity, relations, rels)\n\n # object property\n if isinstance(\n entity,\n (owlready2.ObjectPropertyClass, owlready2.ObjectProperty),\n ):\n kwargs = self.style.get(\"default_relation\", {}).copy()\n kwargs.update(rattrs)\n # data property\n elif isinstance(\n entity,\n (owlready2.DataPropertyClass, owlready2.DataProperty),\n ):\n kwargs = self.style.get(\"default_dataprop\", {}).copy()\n kwargs.update(rattrs)\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs.update(self.style.get(\"edges\", {}).get(predicate, {}))\n kwargs.update(attrs)\n return kwargs\n\n def add_legend(self, relations=None):\n \"\"\"Adds legend for specified relations to the graph.\n\n If `relations` is \"all\", the legend will contain all relations\n that are defined in the style. By default the legend will\n only contain relations that are currently included in the\n graph.\n\n Hence, you usually want to call add_legend() as the last method\n before saving or displaying.\n\n Relations with defined style will be bold in legend.\n Relations that have inherited style from parent relation\n will not be bold.\n \"\"\"\n rels = self.style.get(\"relations\", {})\n if relations is None:\n relations = self.get_relations(sort=True)\n elif relations == \"all\":\n relations = [\"isA\"] + list(rels.keys()) + [\"inverse\"]\n elif isinstance(relations, str):\n relations = relations.split(\",\")\n\n nrelations = len(relations)\n if nrelations == 0:\n return\n\n table = (\n '<<table border=\"0\" cellpadding=\"2\" cellspacing=\"0\" cellborder=\"0\">'\n )\n label1 = [table]\n label2 = [table]\n for index, relation in enumerate(relations):\n if (relation in rels) or (relation == \"isA\"):\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\"><b>{relation}</b></td></tr>'\n )\n else:\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\">{relation}</td></tr>'\n )\n label2.append(f'<tr><td port=\"i{index}\"> </td></tr>')\n label1.append(\"</table>>\")\n label2.append(\"</table>>\")\n self.dot.node(\"key1\", label=\"\\n\".join(label1), shape=\"plaintext\")\n self.dot.node(\"key2\", label=\"\\n\".join(label2), shape=\"plaintext\")\n\n rankdir = self.dot.graph_attr.get(\"rankdir\", \"TB\")\n constraint = \"false\" if rankdir in (\"TB\", \"BT\") else \"true\"\n inv = rankdir in (\"BT\",)\n\n for index in range(nrelations):\n relation = (\n relations[nrelations - 1 - index] if inv else relations[index]\n )\n if relation == \"inverse\":\n kwargs = self.style.get(\"inverse\", {}).copy()\n else:\n kwargs = self.get_edge_attrs(relation, {}).copy()\n kwargs[\"constraint\"] = constraint\n with self.dot.subgraph(name=f\"sub{index}\") as subgraph:\n 
subgraph.attr(rank=\"same\")\n if rankdir in (\"BT\", \"LR\"):\n self.dot.edge(\n f\"key1:i{index}:e\", f\"key2:i{index}:w\", **kwargs\n )\n else:\n self.dot.edge(\n f\"key2:i{index}:w\", f\"key1:i{index}:e\", **kwargs\n )\n\n def get_relations(self, sort=True):\n \"\"\"Returns a set of relations in current graph. If `sort` is true,\n a sorted list is returned.\"\"\"\n relations = set()\n for _, predicate, _ in self.edges:\n if predicate.startswith(\"Inverse\"):\n relations.add(\"inverse\")\n match = re.match(r\"Inverse\\((.+)\\)\", predicate)\n if match is None:\n raise ValueError(\n \"Could unexpectedly not find the inverse relation \"\n f\"just added in: {predicate}\"\n )\n relations.add(match.groups()[0])\n else:\n relations.add(predicate.split(None, 1)[0])\n\n # Sort, but place 'isA' first and 'inverse' last\n if sort:\n start, end = [], []\n if \"isA\" in relations:\n relations.remove(\"isA\")\n start.append(\"isA\")\n if \"inverse\" in relations:\n relations.remove(\"inverse\")\n end.append(\"inverse\")\n relations = start + sorted(relations) + end\n\n return relations\n\n def save(self, filename, fmt=None, **kwargs):\n \"\"\"Saves graph to `filename`. If format is not given, it is\n inferred from `filename`.\"\"\"\n base = os.path.splitext(filename)[0]\n fmt = get_format(filename, default=\"svg\", fmt=fmt)\n kwargs.setdefault(\"cleanup\", True)\n if fmt in (\"graphviz\", \"gv\"):\n if \"dictionary\" in kwargs:\n self.dot.save(filename, dictionary=kwargs[\"dictionary\"])\n else:\n self.dot.save(filename)\n else:\n fmt = kwargs.pop(\"format\", fmt)\n self.dot.render(base, format=fmt, **kwargs)\n\n def view(self):\n \"\"\"Shows the graph in a viewer.\"\"\"\n self.dot.view(cleanup=True)\n\n def get_figsize(self):\n \"\"\"Returns the default figure size (width, height) in points.\"\"\"\n with tempfile.TemporaryDirectory() as tmpdir:\n tmpfile = os.path.join(tmpdir, \"graph.svg\")\n self.save(tmpfile)\n xml = ET.parse(tmpfile)\n svg = xml.getroot()\n width = svg.attrib[\"width\"]\n height = svg.attrib[\"height\"]\n if not width.endswith(\"pt\"):\n # ensure that units are in points\n raise ValueError(\n \"The width attribute should always be given in 'pt', \"\n f\"but it is: {width}\"\n )\n\n def asfloat(string):\n return float(re.match(r\"^[\\d.]+\", string).group())\n\n return asfloat(width), asfloat(height)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_branch","title":"add_branch(self, root, leaves=None, include_leaves=True, strict_leaves=False, exclude=None, relations='isA', edgelabels=None, addnodes=False, addconstructs=False, included_namespaces=(), included_ontologies=(), include_parents='closest', **attrs)
","text":"Adds branch under root
ending at any entity included in the sequence leaves
. If include_leaves
is true, leaf classes are also included.
ontopy/graph.py
def add_branch( # pylint: disable=too-many-arguments,too-many-locals\n self,\n root,\n leaves=None,\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n included_namespaces=(),\n included_ontologies=(),\n include_parents=\"closest\",\n **attrs,\n):\n \"\"\"Adds branch under `root` ending at any entity included in the\n sequence `leaves`. If `include_leaves` is true, leaf classes are\n also included.\"\"\"\n if leaves is None:\n leaves = ()\n classes = self.ontology.get_branch(\n root=root,\n leaves=leaves,\n include_leaves=include_leaves,\n strict_leaves=strict_leaves,\n exclude=exclude,\n )\n\n classes = filter_classes(\n classes,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n nodeattrs = {}\n nodeattrs[get_label(root)] = self.style.get(\"root\", {})\n for leaf in leaves:\n nodeattrs[get_label(leaf)] = self.style.get(\"leaf\", {})\n\n self.add_entities(\n entities=classes,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n closest_ancestors = False\n ancestor_generations = None\n if include_parents == \"closest\":\n closest_ancestors = True\n elif isinstance(include_parents, int):\n ancestor_generations = include_parents\n parents = self.ontology.get_ancestors(\n classes,\n closest=closest_ancestors,\n generations=ancestor_generations,\n strict=True,\n )\n if parents:\n for parent in parents:\n nodeattrs[get_label(parent)] = self.style.get(\"parent_node\", {})\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n nodeattrs=nodeattrs,\n **attrs,\n )\n
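A minimal usage sketch (the ontology IRI and the 'Material'/'Atom' labels below are placeholders, and the OntoGraph constructor is assumed to take the ontology as its first argument):
from ontopy import get_ontology
from ontopy.graph import OntoGraph

# Load an EMMO-based ontology; the IRI is only a placeholder.
onto = get_ontology('https://example.com/myonto.ttl').load()

# Plot the branch below 'Material', stopping at 'Atom', and write it to file.
graph = OntoGraph(onto)
graph.add_branch(root=onto['Material'], leaves=[onto['Atom']], addnodes=True)
graph.save('material_branch.svg')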
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_class_construct","title":"add_class_construct(self, construct)
","text":"Adds class construct and return its label.
Source code in ontopy/graph.py
def add_class_construct(self, construct):\n \"\"\"Adds class construct and return its label.\"\"\"\n self.add_node(construct, **self.style.get(\"class_construct\", {}))\n label = get_label(construct)\n if isinstance(construct, owlready2.Or):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(get_label(cls), \"isA\", label)\n elif isinstance(construct, owlready2.And):\n for cls in construct.Classes:\n clslabel = get_label(cls)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(cls)\n if clslabel in self.nodes:\n self.add_edge(label, \"isA\", get_label(cls))\n elif isinstance(construct, owlready2.Not):\n clslabel = get_label(construct.Class)\n if clslabel not in self.nodes and self.addnodes:\n self.add_node(construct.Class)\n if clslabel in self.nodes:\n self.add_edge(clslabel, \"not\", label)\n # Neither and nor inverse constructs are\n return label\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_edge","title":"add_edge(self, subject, predicate, obj, edgelabel=None, **attrs)
","text":"Add edge corresponding for (subject, predicate, object)
triplet.
ontopy/graph.py
def add_edge(self, subject, predicate, obj, edgelabel=None, **attrs):\n \"\"\"Add edge corresponding for ``(subject, predicate, object)``\n triplet.\"\"\"\n subject = subject if isinstance(subject, str) else get_label(subject)\n predicate = (\n predicate if isinstance(predicate, str) else get_label(predicate)\n )\n obj = obj if isinstance(obj, str) else get_label(obj)\n if subject in self.excluded_nodes or obj in self.excluded_nodes:\n return\n if not isinstance(subject, str) or not isinstance(obj, str):\n raise TypeError(\"`subject` and `object` must be strings\")\n if subject not in self.nodes:\n raise RuntimeError(f'`subject` \"{subject}\" must have been added')\n if obj not in self.nodes:\n raise RuntimeError(f'`object` \"{obj}\" must have been added')\n key = (subject, predicate, obj)\n if key not in self.edges:\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n if (edgelabel is None) and (\n (predicate in rels) or (predicate == \"isA\")\n ):\n edgelabel = self.edgelabels\n label = None\n if edgelabel is None:\n tokens = predicate.split()\n if len(tokens) == 2 and tokens[1] in (\"some\", \"only\"):\n label = f\"{tokens[0]} {tokens[1]}\"\n elif len(tokens) == 3 and tokens[1] in (\n \"exactly\",\n \"min\",\n \"max\",\n ):\n label = f\"{tokens[0]} {tokens[1]} {tokens[2]}\"\n elif isinstance(edgelabel, str):\n label = edgelabel\n elif isinstance(edgelabel, dict):\n label = edgelabel.get(predicate, predicate)\n elif edgelabel:\n label = predicate\n kwargs = self.get_edge_attrs(predicate, attrs=attrs)\n self.dot.edge(subject, obj, label=label, **kwargs)\n self.edges.add(key)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_edges","title":"add_edges(self, sources=None, relations=None, edgelabels=None, addnodes=None, addconstructs=None, **attrs)
","text":"Adds all relations originating from entities sources
whose types are listed in relations
. If sources
is None, edges are added between all current nodes.
ontopy/graph.py
def add_edges( # pylint: disable=too-many-arguments\n self,\n sources=None,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n):\n \"\"\"Adds all relations originating from entities `sources` who's type\n are listed in `relations`. If `sources` is None, edges are added\n between all current nodes.\"\"\"\n if sources is None:\n sources = self.nodes\n for source in sources.copy():\n self.add_source_edges(\n source,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_entities","title":"add_entities(self, entities=None, relations='isA', edgelabels=None, addnodes=False, addconstructs=False, nodeattrs=None, **attrs)
","text":"Adds a sequence of entities to the graph. If entities
is None, all classes are added to the graph.
nodeattrs
is a dict mapping node names to attributes for dedicated nodes.
ontopy/graph.py
def add_entities( # pylint: disable=too-many-arguments\n self,\n entities=None,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n nodeattrs=None,\n **attrs,\n):\n \"\"\"Adds a sequence of entities to the graph. If `entities` is None,\n all classes are added to the graph.\n\n `nodeattrs` is a dict mapping node names to are attributes for\n dedicated nodes.\n \"\"\"\n if entities is None:\n entities = self.ontology.classes(imported=self.imported)\n self.add_nodes(entities, nodeattrs=nodeattrs, **attrs)\n self.add_edges(\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_legend","title":"add_legend(self, relations=None)
","text":"Adds legend for specified relations to the graph.
If relations
is \"all\", the legend will contain all relations that are defined in the style. By default the legend will only contain relations that are currently included in the graph.
Hence, you usually want to call add_legend() as the last method before saving or displaying.
Relations with defined style will be bold in legend. Relations that have inherited style from parent relation will not be bold.
Source code in ontopy/graph.py
def add_legend(self, relations=None):\n \"\"\"Adds legend for specified relations to the graph.\n\n If `relations` is \"all\", the legend will contain all relations\n that are defined in the style. By default the legend will\n only contain relations that are currently included in the\n graph.\n\n Hence, you usually want to call add_legend() as the last method\n before saving or displaying.\n\n Relations with defined style will be bold in legend.\n Relations that have inherited style from parent relation\n will not be bold.\n \"\"\"\n rels = self.style.get(\"relations\", {})\n if relations is None:\n relations = self.get_relations(sort=True)\n elif relations == \"all\":\n relations = [\"isA\"] + list(rels.keys()) + [\"inverse\"]\n elif isinstance(relations, str):\n relations = relations.split(\",\")\n\n nrelations = len(relations)\n if nrelations == 0:\n return\n\n table = (\n '<<table border=\"0\" cellpadding=\"2\" cellspacing=\"0\" cellborder=\"0\">'\n )\n label1 = [table]\n label2 = [table]\n for index, relation in enumerate(relations):\n if (relation in rels) or (relation == \"isA\"):\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\"><b>{relation}</b></td></tr>'\n )\n else:\n label1.append(\n f'<tr><td align=\"right\" '\n f'port=\"i{index}\">{relation}</td></tr>'\n )\n label2.append(f'<tr><td port=\"i{index}\"> </td></tr>')\n label1.append(\"</table>>\")\n label2.append(\"</table>>\")\n self.dot.node(\"key1\", label=\"\\n\".join(label1), shape=\"plaintext\")\n self.dot.node(\"key2\", label=\"\\n\".join(label2), shape=\"plaintext\")\n\n rankdir = self.dot.graph_attr.get(\"rankdir\", \"TB\")\n constraint = \"false\" if rankdir in (\"TB\", \"BT\") else \"true\"\n inv = rankdir in (\"BT\",)\n\n for index in range(nrelations):\n relation = (\n relations[nrelations - 1 - index] if inv else relations[index]\n )\n if relation == \"inverse\":\n kwargs = self.style.get(\"inverse\", {}).copy()\n else:\n kwargs = self.get_edge_attrs(relation, {}).copy()\n kwargs[\"constraint\"] = constraint\n with self.dot.subgraph(name=f\"sub{index}\") as subgraph:\n subgraph.attr(rank=\"same\")\n if rankdir in (\"BT\", \"LR\"):\n self.dot.edge(\n f\"key1:i{index}:e\", f\"key2:i{index}:w\", **kwargs\n )\n else:\n self.dot.edge(\n f\"key2:i{index}:w\", f\"key1:i{index}:e\", **kwargs\n )\n
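Because the legend only lists relations already present in the graph, it is typically added as the last step before saving (a sketch; graph is assumed to be an OntoGraph populated as in the add_branch example above):
graph.add_entities(relations='all', edgelabels=True)  # include restriction edges as well
graph.add_legend()   # call last, so every plotted relation ends up in the legend
graph.save('graph_with_legend.pdf')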
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_missing_node","title":"add_missing_node(self, name, addnodes=None)
","text":"Checks if name
corresponds to a missing node and adds it if addnodes
is true.
Returns true if the node exists or is added, false otherwise.
Source code in ontopy/graph.py
def add_missing_node(self, name, addnodes=None):\n \"\"\"Checks if `name` corresponds to a missing node and add it if\n `addnodes` is true.\n\n Returns true if the node exists or is added, false otherwise.\"\"\"\n addnodes = self.addnodes if addnodes is None else addnodes\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes:\n if addnodes:\n self.add_node(entity, **self.style.get(\"added_node\", {}))\n else:\n return False\n return True\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_node","title":"add_node(self, name, nodeattrs=None, **attrs)
","text":"Add node with given name. attrs
are graphviz node attributes.
ontopy/graph.py
def add_node(self, name, nodeattrs=None, **attrs):\n \"\"\"Add node with given name. `attrs` are graphviz node attributes.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n if label not in self.nodes.union(self.excluded_nodes):\n kwargs = self.get_node_attrs(\n entity, nodeattrs=nodeattrs, attrs=attrs\n )\n if hasattr(entity, \"iri\"):\n kwargs.setdefault(\"URL\", entity.iri)\n self.dot.node(label, label=label, **kwargs)\n self.nodes.add(label)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_nodes","title":"add_nodes(self, names, nodeattrs, **attrs)
","text":"Add nodes with given names. attrs
are graphviz node attributes.
ontopy/graph.py
def add_nodes(self, names, nodeattrs, **attrs):\n \"\"\"Add nodes with given names. `attrs` are graphviz node attributes.\"\"\"\n for name in names:\n self.add_node(name, nodeattrs=nodeattrs, **attrs)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_parents","title":"add_parents(self, name, levels=1, relations='isA', edgelabels=None, addnodes=False, addconstructs=False, **attrs)
","text":"Add levels
levels of strict parents of entity name
.
ontopy/graph.py
def add_parents( # pylint: disable=too-many-arguments\n self,\n name,\n levels=1,\n relations=\"isA\",\n edgelabels=None,\n addnodes=False,\n addconstructs=False,\n **attrs,\n):\n \"\"\"Add `levels` levels of strict parents of entity `name`.\"\"\"\n\n def addparents(entity, nodes, parents):\n if nodes > 0:\n for parent in entity.get_parents(strict=True):\n parents.add(parent)\n addparents(parent, nodes - 1, parents)\n\n entity = self.ontology[name] if isinstance(name, str) else name\n parents = set()\n addparents(entity, levels, parents)\n self.add_entities(\n entities=parents,\n relations=relations,\n edgelabels=edgelabels,\n addnodes=addnodes,\n addconstructs=addconstructs,\n **attrs,\n )\n
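For instance, to give a class some context one can pull in two generations of its superclasses before rendering (a sketch; 'Atom' is only an example label):
graph.add_parents('Atom', levels=2, addnodes=True)  # also add parents and grandparents of 'Atom'
graph.save('atom_context.svg')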
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.add_source_edges","title":"add_source_edges(self, source, relations=None, edgelabels=None, addnodes=None, addconstructs=None, **attrs)
","text":"Adds all relations originating from entity source
whose types are listed in relations
.
ontopy/graph.py
def add_source_edges( # pylint: disable=too-many-arguments,too-many-branches\n self,\n source,\n relations=None,\n edgelabels=None,\n addnodes=None,\n addconstructs=None,\n **attrs,\n):\n \"\"\"Adds all relations originating from entity `source` who's type\n are listed in `relations`.\"\"\"\n if relations is None:\n relations = self.relations\n elif isinstance(relations, str):\n relations = set([relations])\n else:\n relations = set(relations)\n\n edgelabels = self.edgelabels if edgelabels is None else edgelabels\n addconstructs = (\n self.addconstructs if addconstructs is None else addconstructs\n )\n\n entity = self.ontology[source] if isinstance(source, str) else source\n label = get_label(entity)\n for relation in entity.is_a:\n # isA\n if isinstance(\n relation, (owlready2.ThingClass, owlready2.ObjectPropertyClass)\n ):\n if \"all\" in relations or \"isA\" in relations:\n rlabel = get_label(relation)\n # FIXME - we actually want to include individuals...\n if isinstance(entity, owlready2.Thing):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n self.add_edge(\n subject=label,\n predicate=\"isA\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n\n # restriction\n elif isinstance(relation, owlready2.Restriction):\n rname = get_label(relation.property)\n if \"all\" in relations or rname in relations:\n rlabel = f\"{rname} {typenames[relation.type]}\"\n if isinstance(relation.value, owlready2.ThingClass):\n obj = get_label(relation.value)\n if not self.add_missing_node(relation.value, addnodes):\n continue\n elif (\n isinstance(relation.value, owlready2.ClassConstruct)\n and self.addconstructs\n ):\n obj = self.add_class_construct(relation.value)\n else:\n continue\n pred = asstring(\n relation, exclude_object=True, ontology=self.ontology\n )\n self.add_edge(\n label, pred, obj, edgelabel=edgelabels, **attrs\n )\n\n # inverse\n if isinstance(relation, owlready2.Inverse):\n if \"all\" in relations or \"inverse\" in relations:\n rlabel = get_label(relation)\n if not self.add_missing_node(relation, addnodes=addnodes):\n continue\n if relation not in entity.get_parents(strict=True):\n continue\n self.add_edge(\n subject=label,\n predicate=\"inverse\",\n obj=rlabel,\n edgelabel=edgelabels,\n **attrs,\n )\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_edge_attrs","title":"get_edge_attrs(self, predicate, attrs)
","text":"Returns attributes for node or edge predicate
. attrs
overrides the default style.
Parameters:
Name Type Description Default
predicate str predicate to get attributes for required
attrs dict desired attributes to override default required
Source code in ontopy/graph.py
def get_edge_attrs(self, predicate: str, attrs: dict) -> dict:\n \"\"\"Returns attributes for node or edge `predicate`. `attrs` overrides\n the default style.\n\n Parameters:\n predicate: predicate to get attributes for\n attrs: desired attributes to override default\n \"\"\"\n # given type\n types = (\"isA\", \"equivalent_to\", \"disjoint_with\", \"inverse_of\")\n if predicate in types:\n kwargs = self.style.get(predicate, {}).copy()\n else:\n kwargs = {}\n name = predicate.split(None, 1)[0]\n match = re.match(r\"Inverse\\((.*)\\)\", name)\n if match:\n (name,) = match.groups()\n attrs = attrs.copy()\n for key, value in self.style.get(\"inverse\", {}).items():\n attrs.setdefault(key, value)\n if not isinstance(name, str) or name in self.ontology:\n entity = self.ontology[name] if isinstance(name, str) else name\n relations = self.style.get(\"relations\", {})\n rels = set(\n self.ontology[_] for _ in relations if _ in self.ontology\n )\n rattrs = self._relation_styles(entity, relations, rels)\n\n # object property\n if isinstance(\n entity,\n (owlready2.ObjectPropertyClass, owlready2.ObjectProperty),\n ):\n kwargs = self.style.get(\"default_relation\", {}).copy()\n kwargs.update(rattrs)\n # data property\n elif isinstance(\n entity,\n (owlready2.DataPropertyClass, owlready2.DataProperty),\n ):\n kwargs = self.style.get(\"default_dataprop\", {}).copy()\n kwargs.update(rattrs)\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs.update(self.style.get(\"edges\", {}).get(predicate, {}))\n kwargs.update(attrs)\n return kwargs\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_figsize","title":"get_figsize(self)
","text":"Returns the default figure size (width, height) in points.
Source code in ontopy/graph.py
def get_figsize(self):\n \"\"\"Returns the default figure size (width, height) in points.\"\"\"\n with tempfile.TemporaryDirectory() as tmpdir:\n tmpfile = os.path.join(tmpdir, \"graph.svg\")\n self.save(tmpfile)\n xml = ET.parse(tmpfile)\n svg = xml.getroot()\n width = svg.attrib[\"width\"]\n height = svg.attrib[\"height\"]\n if not width.endswith(\"pt\"):\n # ensure that units are in points\n raise ValueError(\n \"The width attribute should always be given in 'pt', \"\n f\"but it is: {width}\"\n )\n\n def asfloat(string):\n return float(re.match(r\"^[\\d.]+\", string).group())\n\n return asfloat(width), asfloat(height)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_node_attrs","title":"get_node_attrs(self, name, nodeattrs, attrs)
","text":"Returns attributes for node or edge name
. attrs
overrides the default style.
ontopy/graph.py
def get_node_attrs(self, name, nodeattrs, attrs):\n \"\"\"Returns attributes for node or edge `name`. `attrs` overrides\n the default style.\"\"\"\n entity = self.ontology[name] if isinstance(name, str) else name\n label = get_label(entity)\n # class\n if isinstance(entity, owlready2.ThingClass):\n if entity.is_defined:\n kwargs = self.style.get(\"defined_class\", {})\n else:\n kwargs = self.style.get(\"class\", {})\n # class construct\n elif isinstance(entity, owlready2.ClassConstruct):\n kwargs = self.style.get(\"class_construct\", {})\n # individual\n elif isinstance(entity, owlready2.Thing):\n kwargs = self.style.get(\"individual\", {})\n # object property\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n kwargs = self.style.get(\"object_property\", {})\n # data property\n elif isinstance(entity, owlready2.DataPropertyClass):\n kwargs = self.style.get(\"data_property\", {})\n # annotation property\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n kwargs = self.style.get(\"annotation_property\", {})\n else:\n raise TypeError(f\"Unknown entity type: {entity!r}\")\n kwargs = kwargs.copy()\n kwargs.update(self.style.get(\"nodes\", {}).get(label, {}))\n if nodeattrs:\n kwargs.update(nodeattrs.get(label, {}))\n kwargs.update(attrs)\n return kwargs\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.get_relations","title":"get_relations(self, sort=True)
","text":"Returns a set of relations in current graph. If sort
is true, a sorted list is returned.
ontopy/graph.py
def get_relations(self, sort=True):\n \"\"\"Returns a set of relations in current graph. If `sort` is true,\n a sorted list is returned.\"\"\"\n relations = set()\n for _, predicate, _ in self.edges:\n if predicate.startswith(\"Inverse\"):\n relations.add(\"inverse\")\n match = re.match(r\"Inverse\\((.+)\\)\", predicate)\n if match is None:\n raise ValueError(\n \"Could unexpectedly not find the inverse relation \"\n f\"just added in: {predicate}\"\n )\n relations.add(match.groups()[0])\n else:\n relations.add(predicate.split(None, 1)[0])\n\n # Sort, but place 'isA' first and 'inverse' last\n if sort:\n start, end = [], []\n if \"isA\" in relations:\n relations.remove(\"isA\")\n start.append(\"isA\")\n if \"inverse\" in relations:\n relations.remove(\"inverse\")\n end.append(\"inverse\")\n relations = start + sorted(relations) + end\n\n return relations\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.save","title":"save(self, filename, fmt=None, **kwargs)
","text":"Saves graph to filename
. If format is not given, it is inferred from filename
.
ontopy/graph.py
def save(self, filename, fmt=None, **kwargs):\n \"\"\"Saves graph to `filename`. If format is not given, it is\n inferred from `filename`.\"\"\"\n base = os.path.splitext(filename)[0]\n fmt = get_format(filename, default=\"svg\", fmt=fmt)\n kwargs.setdefault(\"cleanup\", True)\n if fmt in (\"graphviz\", \"gv\"):\n if \"dictionary\" in kwargs:\n self.dot.save(filename, dictionary=kwargs[\"dictionary\"])\n else:\n self.dot.save(filename)\n else:\n fmt = kwargs.pop(\"format\", fmt)\n self.dot.render(base, format=fmt, **kwargs)\n
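The output format follows the file extension unless fmt is given explicitly (a sketch):
graph.save('graph.svg')         # format inferred from the extension
graph.save('graph.png')
graph.save('graph', fmt='pdf')  # explicit format; writes graph.pdf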
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.OntoGraph.view","title":"view(self)
","text":"Shows the graph in a viewer.
Source code in ontopy/graph.py
def view(self):\n \"\"\"Shows the graph in a viewer.\"\"\"\n self.dot.view(cleanup=True)\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.check_module_dependencies","title":"check_module_dependencies(modules, verbose=True)
","text":"Check module dependencies and return a copy of modules with redundant dependencies removed.
If verbose
is true, a warning is printed for each redundant module dependency that is found.
If modules
is given, it should be a dict returned by get_module_dependencies().
ontopy/graph.py
def check_module_dependencies(modules, verbose=True):\n \"\"\"Check module dependencies and return a copy of modules with\n redundant dependencies removed.\n\n If `verbose` is true, warnings are printed for each module that\n\n If `modules` is given, it should be a dict returned by\n get_module_dependencies().\n \"\"\"\n visited = set()\n\n def get_deps(iri, excl=None):\n \"\"\"Returns a set with all dependencies of `iri`, excluding `excl` and\n its dependencies.\"\"\"\n if iri in visited:\n return set()\n visited.add(iri)\n deps = set()\n for dependency in modules[iri]:\n if dependency != excl:\n deps.add(dependency)\n deps.update(get_deps(dependency))\n return deps\n\n mods = {}\n redundant = []\n for iri, deps in modules.items():\n if not deps:\n mods[iri] = set()\n for dep in deps:\n if dep in get_deps(iri, dep):\n redundant.append((iri, dep))\n elif iri in mods:\n mods[iri].add(dep)\n else:\n mods[iri] = set([dep])\n\n if redundant and verbose:\n print(\"** Warning: Redundant module dependency:\")\n for iri, dep in redundant:\n print(f\"{iri} -> {dep}\")\n\n return mods\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.cytoscape_style","title":"cytoscape_style(style=None)
","text":"Get list of color, style and fills.
Source code in ontopy/graph.py
def cytoscape_style(style=None): # pylint: disable=too-many-branches\n \"\"\"Get list of color, style and fills.\"\"\"\n if not style:\n style = _default_style\n colours = {}\n styles = {}\n fill = {}\n for key, value in style.items():\n if isinstance(value, dict):\n if \"color\" in value:\n colours[key] = value[\"color\"]\n else:\n colours[key] = \"black\"\n if \"style\" in value:\n styles[key] = value[\"style\"]\n else:\n styles[key] = \"solid\"\n if \"arrowhead\" in value:\n if value[\"arrowhead\"] == \"empty\":\n fill[key] = \"hollow\"\n else:\n fill[key] = \"filled\"\n\n for key, value in style.get(\"relations\", {}).items():\n if isinstance(value, dict):\n if \"color\" in value:\n colours[key] = value[\"color\"]\n else:\n colours[key] = \"black\"\n if \"style\" in value:\n styles[key] = value[\"style\"]\n else:\n styles[key] = \"solid\"\n if \"arrowhead\" in value:\n if value[\"arrowhead\"] == \"empty\":\n fill[key] = \"hollow\"\n else:\n fill[key] = \"filled\"\n return [colours, styles, fill]\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.cytoscapegraph","title":"cytoscapegraph(graph, onto=None, infobox=None, force=False)
","text":"Returns and instance of icytoscape-figure for an instance Graph of OntoGraph, the accompanying ontology is required for mouse actions.
Parameters:
Name Type Description Default
graph OntoGraph graph generated with OntoGraph with edgelabels=True. required
onto Optional[ontopy.ontology.Ontology] ontology to be used for mouse actions. None
infobox str \"left\" or \"right\". Placement of infobox with respect to the graph. None
force bool force generate graph without correct edgelabels. False
Returns:
Type Description
GridspecLayout cytoscape widget with graph and infobox to be visualized in Jupyter lab.
Source code in ontopy/graph.py
def cytoscapegraph(\n graph: OntoGraph,\n onto: Optional[Ontology] = None,\n infobox: str = None,\n force: bool = False,\n) -> \"GridspecLayout\":\n # pylint: disable=too-many-locals,too-many-statements\n \"\"\"Returns and instance of icytoscape-figure for an\n instance Graph of OntoGraph, the accompanying ontology\n is required for mouse actions.\n Args:\n graph: graph generated with OntoGraph with edgelabels=True.\n onto: ontology to be used for mouse actions.\n infobox: \"left\" or \"right\". Placement of infbox with\n respect to graph.\n force: force generate graph without correct edgelabels.\n Returns:\n cytoscapewidget with graph and infobox to be visualized\n in jupyter lab.\n\n \"\"\"\n # pylint: disable=import-error,import-outside-toplevel\n from ipywidgets import Output, VBox, GridspecLayout\n from IPython.display import display, Image\n from pathlib import Path\n import networkx as nx\n import pydotplus\n import ipycytoscape\n from networkx.readwrite.json_graph import cytoscape_data\n\n # Define the styles, this has to be aligned with the graphviz values\n dotplus = pydotplus.graph_from_dot_data(graph.dot.source)\n # if graph doesn't have multiedges, use dotplus.set_strict(true)\n pydot_graph = nx.nx_pydot.from_pydot(dotplus)\n\n colours, styles, fill = cytoscape_style()\n\n data = cytoscape_data(pydot_graph)[\"elements\"]\n for datum in data[\"edges\"]:\n try:\n datum[\"data\"][\"label\"] = (\n datum[\"data\"][\"label\"].rsplit(\" \", 1)[0].lstrip('\"')\n )\n except KeyError as err:\n if not force:\n raise EMMOntoPyException(\n \"Edge label is not defined. Are you sure that the OntoGraph\"\n \"instance you provided was generated with \"\n \"\u00b4edgelabels=True\u00b4?\"\n ) from err\n warnings.warn(\n \"ARROWS WILL NOT BE DISPLAYED CORRECTLY. \"\n \"Edge label is not defined. 
Are you sure that the OntoGraph \"\n \"instance you provided was generated with \u00b4edgelabels=True\u00b4?\"\n )\n datum[\"data\"][\"label\"] = \"\"\n\n lab = datum[\"data\"][\"label\"].replace(\"Inverse(\", \"\").rstrip(\")\")\n try:\n datum[\"data\"][\"colour\"] = colours[lab]\n except KeyError:\n datum[\"data\"][\"colour\"] = \"black\"\n try:\n datum[\"data\"][\"style\"] = styles[lab]\n except KeyError:\n datum[\"data\"][\"style\"] = \"solid\"\n if datum[\"data\"][\"label\"].startswith(\"Inverse(\"):\n datum[\"data\"][\"targetarrow\"] = \"diamond\"\n datum[\"data\"][\"sourcearrow\"] = \"none\"\n else:\n datum[\"data\"][\"targetarrow\"] = \"triangle\"\n datum[\"data\"][\"sourcearrow\"] = \"none\"\n try:\n datum[\"data\"][\"fill\"] = fill[lab]\n except KeyError:\n datum[\"data\"][\"fill\"] = \"filled\"\n\n cytofig = ipycytoscape.CytoscapeWidget()\n cytofig.graph.add_graph_from_json(data, directed=True)\n\n cytofig.set_style(\n [\n {\n \"selector\": \"node\",\n \"css\": {\n \"content\": \"data(label)\",\n # \"text-valign\": \"center\",\n # \"color\": \"white\",\n # \"text-outline-width\": 2,\n # \"text-outline-color\": \"red\",\n \"background-color\": \"blue\",\n },\n },\n {\"selector\": \"node:parent\", \"css\": {\"background-opacity\": 0.333}},\n {\n \"selector\": \"edge\",\n \"style\": {\n \"width\": 2,\n \"line-color\": \"data(colour)\",\n # \"content\": \"data(label)\"\",\n \"line-style\": \"data(style)\",\n },\n },\n {\n \"selector\": \"edge.directed\",\n \"style\": {\n \"curve-style\": \"bezier\",\n \"target-arrow-shape\": \"data(targetarrow)\",\n \"target-arrow-color\": \"data(colour)\",\n \"target-arrow-fill\": \"data(fill)\",\n \"mid-source-arrow-shape\": \"data(sourcearrow)\",\n \"mid-source-arrow-color\": \"data(colour)\",\n },\n },\n {\n \"selector\": \"edge.multiple_edges\",\n \"style\": {\"curve-style\": \"bezier\"},\n },\n {\n \"selector\": \":selected\",\n \"css\": {\n \"background-color\": \"black\",\n \"line-color\": \"black\",\n \"target-arrow-color\": \"black\",\n \"source-arrow-color\": \"black\",\n \"text-outline-color\": \"black\",\n },\n },\n ]\n )\n\n if onto is not None:\n out = Output(layout={\"border\": \"1px solid black\"})\n\n def log_clicks(node):\n with out:\n print((onto.get_by_label(node[\"data\"][\"label\"])))\n parent = onto.get_by_label(node[\"data\"][\"label\"]).get_parents()\n print(f\"parents: {parent}\")\n try:\n elucidation = onto.get_by_label(\n node[\"data\"][\"label\"]\n ).elucidation\n print(f\"elucidation: {elucidation[0]}\")\n except (AttributeError, IndexError):\n pass\n\n try:\n annotations = onto.get_by_label(\n node[\"data\"][\"label\"]\n ).annotations\n for _ in annotations:\n print(f\"annotation: {_}\")\n except AttributeError:\n pass\n\n # Try does not work...\n try:\n iri = onto.get_by_label(node[\"data\"][\"label\"]).iri\n print(f\"iri: {iri}\")\n except (AttributeError, IndexError):\n pass\n try:\n fig = node[\"data\"][\"label\"]\n if os.path.exists(Path(fig + \".png\")):\n display(Image(fig + \".png\", width=100))\n elif os.path.exists(Path(fig + \".jpg\")):\n display(Image(fig + \".jpg\", width=100))\n except (AttributeError, IndexError):\n pass\n out.clear_output(wait=True)\n\n def log_mouseovers(node):\n with out:\n print(onto.get_by_label(node[\"data\"][\"label\"]))\n # print(f'mouseover: {pformat(node)}')\n out.clear_output(wait=True)\n\n cytofig.on(\"node\", \"click\", log_clicks)\n cytofig.on(\"node\", \"mouseover\", log_mouseovers) # , remove=True)\n cytofig.on(\"node\", \"mouseout\", out.clear_output(wait=True))\n grid 
= GridspecLayout(1, 3, height=\"400px\")\n if infobox == \"left\":\n grid[0, 0] = out\n grid[0, 1:] = cytofig\n elif infobox == \"right\":\n grid[0, 0:-1] = cytofig\n grid[0, 2] = out\n else:\n return VBox([cytofig, out])\n return grid\n\n return cytofig\n
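In a Jupyter notebook it can be combined with an OntoGraph built with edge labels enabled (a sketch; it assumes the optional ipycytoscape, pydotplus and networkx dependencies are installed and reuses the onto object from the sketches above):
graph = OntoGraph(onto)
graph.add_entities(relations='all', edgelabels=True)  # cytoscapegraph needs edge labels
widget = cytoscapegraph(graph, onto=onto, infobox='right')
widget  # display the interactive figure in the notebook cell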
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.filter_classes","title":"filter_classes(classes, included_namespaces=(), included_ontologies=())
","text":"Filter out classes whos namespace is not in included_namespaces
or whos ontology name is not in one of the ontologies in included_ontologies
.
classes
should be a sequence of classes.
ontopy/graph.py
def filter_classes(classes, included_namespaces=(), included_ontologies=()):\n \"\"\"Filter out classes whos namespace is not in `included_namespaces`\n or whos ontology name is not in one of the ontologies in\n `included_ontologies`.\n\n `classes` should be a sequence of classes.\n \"\"\"\n filtered = set(classes)\n if included_namespaces:\n filtered = set(\n c for c in filtered if c.namespace.name in included_namespaces\n )\n if included_ontologies:\n filtered = set(\n c\n for c in filtered\n if c.namespace.ontology.name in included_ontologies\n )\n return filtered\n
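A small sketch of its use (the namespace name is a placeholder):
# Keep only the classes that live in the 'materials' namespace of the loaded ontology.
selected = filter_classes(onto.classes(), included_namespaces=('materials',))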
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.get_module_dependencies","title":"get_module_dependencies(iri_or_onto, strip_base=None)
","text":"Reads iri_or_onto
and returns a dict mapping ontology names to a list of ontologies that they depend on.
If strip_base
is true, the base IRI is stripped from ontology names. If it is a string, it is lstrip'ped from the base IRI.
ontopy/graph.py
def get_module_dependencies(iri_or_onto, strip_base=None):\n \"\"\"Reads `iri_or_onto` and returns a dict mapping ontology names to a\n list of ontologies that they depends on.\n\n If `strip_base` is true, the base IRI is stripped from ontology\n names. If it is a string, it lstrip'ped from the base iri.\n \"\"\"\n from ontopy.ontology import ( # pylint: disable=import-outside-toplevel\n get_ontology,\n )\n\n if isinstance(iri_or_onto, str):\n onto = get_ontology(iri_or_onto)\n onto.load()\n else:\n onto = iri_or_onto\n\n modules = {onto.base_iri: set()}\n\n def strip(base_iri):\n if isinstance(strip_base, str):\n return base_iri.lstrip(strip_base)\n if strip_base:\n return base_iri.strip(onto.base_iri)\n return base_iri\n\n visited = set()\n\n def setmodules(onto):\n for imported_onto in onto.imported_ontologies:\n if onto.base_iri in modules:\n modules[strip(onto.base_iri)].add(strip(imported_onto.base_iri))\n else:\n modules[strip(onto.base_iri)] = set(\n [strip(imported_onto.base_iri)]\n )\n if imported_onto.base_iri not in modules:\n modules[strip(imported_onto.base_iri)] = set()\n if imported_onto not in visited:\n visited.add(imported_onto)\n setmodules(imported_onto)\n\n setmodules(onto)\n return modules\n
"},{"location":"api_reference/ontopy/graph/#ontopy.graph.plot_modules","title":"plot_modules(src, filename=None, fmt=None, show=False, strip_base=None, ignore_redundant=True)
","text":"Plot module dependency graph for src
and return a graph object.
Here src
may be an IRI, a path to the ontology or a dict returned by get_module_dependencies().
If filename
is given, write the graph to this file.
If fmt
is None, the output format is inferred from filename
.
If show
is true, the graph is displayed.
strip_base
is passed on to get_module_dependencies() if src
is not a dict.
If ignore_redundant
is true, redundant dependencies are not plotted.
ontopy/graph.py
def plot_modules( # pylint: disable=too-many-arguments\n src,\n filename=None,\n fmt=None,\n show=False,\n strip_base=None,\n ignore_redundant=True,\n):\n \"\"\"Plot module dependency graph for `src` and return a graph object.\n\n Here `src` may be an IRI, a path the the ontology or a dict returned by\n get_module_dependencies().\n\n If `filename` is given, write the graph to this file.\n\n If `fmt` is None, the output format is inferred from `filename`.\n\n If `show` is true, the graph is displayed.\n\n `strip_base` is passed on to get_module_dependencies() if `src` is not\n a dict.\n\n If `ignore_redundant` is true, redundant dependencies are not plotted.\n \"\"\"\n if isinstance(src, dict):\n modules = src\n else:\n modules = get_module_dependencies(src, strip_base=strip_base)\n\n if ignore_redundant:\n modules = check_module_dependencies(modules, verbose=False)\n\n dot = graphviz.Digraph(comment=\"Module dependencies\")\n dot.attr(rankdir=\"TB\")\n dot.node_attr.update(\n style=\"filled\", fillcolor=\"lightblue\", shape=\"box\", edgecolor=\"blue\"\n )\n dot.edge_attr.update(arrowtail=\"open\", dir=\"back\")\n\n for iri in modules.keys():\n iriname = iri.split(\":\", 1)[1]\n dot.node(iriname, label=iri, URL=iri)\n\n for iri, deps in modules.items():\n for dep in deps:\n iriname = iri.split(\":\", 1)[1]\n depname = dep.split(\":\", 1)[1]\n dot.edge(depname, iriname)\n\n if filename:\n base, ext = os.path.splitext(filename)\n if fmt is None:\n fmt = ext.lstrip(\".\")\n dot.render(base, format=fmt, view=False, cleanup=True)\n\n if show:\n dot.view(cleanup=True)\n\n return dot\n
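A typical workflow for inspecting the import structure of an ontology might look like this (a sketch; the IRI is a placeholder):
deps = get_module_dependencies('https://example.com/myonto.ttl')
deps = check_module_dependencies(deps, verbose=True)  # warn about and drop redundant imports
plot_modules(deps, filename='modules.svg')            # write the dependency graph to a file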
"},{"location":"api_reference/ontopy/manchester/","title":"manchester","text":"Evaluate Manchester syntax
This module compiles restrictions and logical constructs in Manchester syntax into Owlready2 classes. The main function in this module is manchester.evaluate()
; see its docstring for a usage example.
Pyparsing is used under the hood for parsing.
"},{"location":"api_reference/ontopy/manchester/#ontopy.manchester.ManchesterError","title":" ManchesterError (EMMOntoPyException)
","text":"Raised on invalid Manchester notation.
Source code in ontopy/manchester.py
class ManchesterError(EMMOntoPyException):\n \"\"\"Raised on invalid Manchester notation.\"\"\"\n
"},{"location":"api_reference/ontopy/manchester/#ontopy.manchester.evaluate","title":"evaluate(ontology, expr)
","text":"Evaluate expression in Manchester syntax.
Parameters:
Name Type Description Default
ontology Ontology The ontology within which the expression will be evaluated. required
expr str Manchester expression to be evaluated. required
Returns:
Type Description
Construct An Owlready2 construct that corresponds to the expression.
Examples:
>>> from ontopy.manchester import evaluate
>>> from ontopy import get_ontology
>>> emmo = get_ontology().load()
>>> restriction = evaluate(emmo, 'hasPart some Atom')
>>> cls = evaluate(emmo, 'Atom')
>>> expr = evaluate(emmo, 'Atom or Molecule')
Note
Logical expressions (with not
, and
and or
) are supported, as well as object property restrictions. For data properties, only value restrictions are supported so far.
ontopy/manchester.py
def evaluate(ontology: owlready2.Ontology, expr: str) -> owlready2.Construct:\n \"\"\"Evaluate expression in Manchester syntax.\n\n Args:\n ontology: The ontology within which the expression will be evaluated.\n expr: Manchester expression to be evaluated.\n\n Returns:\n An Owlready2 construct that corresponds to the expression.\n\n Example:\n >>> from ontopy.manchester import evaluate\n >>> from ontopy import get_ontology\n >>> emmo = get_ontology().load()\n\n >>> restriction = evaluate(emmo, 'hasPart some Atom')\n >>> cls = evaluate(emmo, 'Atom')\n >>> expr = evaluate(emmo, 'Atom or Molecule')\n\n Note:\n Logical expressions (with `not`, `and` and `or`) are supported as\n well as object property restrictions. For data properterties are\n only value restrictions supported so far.\n \"\"\"\n\n # pylint: disable=invalid-name\n def _parse_literal(r):\n \"\"\"Compiles literal to Owlready2 type.\"\"\"\n if r.language:\n v = owlready2.locstr(r.string, r.language)\n elif r.number:\n v = r.number\n else:\n v = r.string\n return v\n\n # pylint: disable=invalid-name,no-else-return,too-many-return-statements\n # pylint: disable=too-many-branches\n def _eval(r):\n \"\"\"Recursively evaluate expression produced by pyparsing into an\n Owlready2 construct.\"\"\"\n\n def fneg(x):\n \"\"\"Negates the argument if `neg` is true.\"\"\"\n return owlready2.Not(x) if neg else x\n\n if isinstance(r, str): # r is atomic, returns its owlready2 repr\n return ontology[r]\n neg = False # whether the expression starts with \"not\"\n while r[0] == \"not\":\n r.pop(0) # strip off the \"not\" and proceed\n neg = not neg\n\n if len(r) == 1: # r is either a atomic or a parenthesised\n # subexpression that should be further evaluated\n if isinstance(r[0], str):\n return fneg(ontology[r[0]])\n else:\n return fneg(_eval(r[0]))\n elif r.op: # r contains a logical operator: and/or\n ops = {\"and\": owlready2.And, \"or\": owlready2.Or}\n op = ops[r.op]\n if len(r) == 3:\n return op([fneg(_eval(r[0])), _eval(r[2])])\n else:\n arg1 = fneg(_eval(r[0]))\n r.pop(0)\n r.pop(0)\n return op([arg1, _eval(r)])\n elif r.objProp: # r is a restriction\n if r[0] == \"inverse\":\n r.pop(0)\n prop = owlready2.Inverse(ontology[r[0]])\n else:\n prop = ontology[r[0]]\n rtype = r[1]\n if rtype == \"Self\":\n return fneg(prop.has_self())\n r.pop(0)\n r.pop(0)\n f = getattr(prop, rtype)\n if rtype == \"value\":\n return fneg(f(_eval(r)))\n elif rtype in (\"some\", \"only\"):\n return fneg(f(_eval(r)))\n elif rtype in (\"min\", \"max\", \"exactly\"):\n cardinality = r.pop(0)\n return fneg(f(cardinality, _eval(r)))\n else:\n raise ManchesterError(f\"invalid restriction type: {rtype}\")\n elif r.dataProp: # r is a data property restriction\n prop = ontology[r[0]]\n rtype = r[1]\n r.pop(0)\n r.pop(0)\n f = getattr(prop, rtype)\n if rtype == \"value\":\n return f(_parse_literal(r))\n else:\n raise ManchesterError(\n f\"unimplemented data property restriction: \"\n f\"{prop} {rtype} {r}\"\n )\n else:\n raise ManchesterError(f\"invalid expression: {r}\")\n\n grammar = manchester_expression()\n return _eval(grammar.parseString(expr, parseAll=True))\n
"},{"location":"api_reference/ontopy/manchester/#ontopy.manchester.manchester_expression","title":"manchester_expression()
","text":"Returns pyparsing grammar for a Manchester expression.
This function is mostly for internal use.
See also: https://www.w3.org/TR/owl2-manchester-syntax/
Source code in ontopy/manchester.py
def manchester_expression():\n \"\"\"Returns pyparsing grammar for a Manchester expression.\n\n This function is mostly for internal use.\n\n See also: https://www.w3.org/TR/owl2-manchester-syntax/\n \"\"\"\n # pylint: disable=global-statement,invalid-name,too-many-locals\n global GRAMMAR\n if GRAMMAR:\n return GRAMMAR\n\n # Subset of the Manchester grammar for expressions\n # It is based on https://www.w3.org/TR/owl2-manchester-syntax/\n # but allows logical constructs within restrictions (like Protege)\n ident = pp.Word(pp.alphas + \"_:-\", pp.alphanums + \"_:-\", asKeyword=True)\n uint = pp.Word(pp.nums)\n alphas = pp.Word(pp.alphas)\n string = pp.Word(pp.alphanums + \":\")\n quotedString = (\n pp.QuotedString('\"\"\"', multiline=True) | pp.QuotedString('\"')\n )(\"string\")\n typedLiteral = pp.Combine(quotedString + \"^^\" + string(\"datatype\"))\n stringLanguageLiteral = pp.Combine(quotedString + \"@\" + alphas(\"language\"))\n stringLiteral = quotedString\n numberLiteral = pp.pyparsing_common.number(\"number\")\n literal = (\n typedLiteral | stringLanguageLiteral | stringLiteral | numberLiteral\n )\n logOp = pp.one_of([\"and\", \"or\"], asKeyword=True)\n expr = pp.Forward()\n restriction = pp.Forward()\n primary = pp.Keyword(\"not\")[...] + (\n restriction | ident(\"cls\") | pp.nested_expr(\"(\", \")\", expr)\n )\n objPropExpr = (\n pp.Literal(\"inverse\")\n + pp.Suppress(\"(\")\n + ident(\"objProp\")\n + pp.Suppress(\")\")\n | pp.Literal(\"inverse\") + ident(\"objProp\")\n | ident(\"objProp\")\n )\n dataPropExpr = ident(\"dataProp\")\n restriction <<= (\n objPropExpr + pp.Keyword(\"some\") + expr\n | objPropExpr + pp.Keyword(\"only\") + expr\n | objPropExpr + pp.Keyword(\"Self\")\n | objPropExpr + pp.Keyword(\"value\") + ident(\"individual\")\n | objPropExpr + pp.Keyword(\"min\") + uint + expr\n | objPropExpr + pp.Keyword(\"max\") + uint + expr\n | objPropExpr + pp.Keyword(\"exactly\") + uint + expr\n | dataPropExpr + pp.Keyword(\"value\") + literal\n )\n expr <<= primary + (logOp(\"op\") + expr)[...]\n\n GRAMMAR = expr\n return expr\n
"},{"location":"api_reference/ontopy/nadict/","title":"nadict","text":"A nested dict with both attribute and item access.
NA stands for Nested and Attribute.
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict","title":" NADict
","text":"A nested dict with both attribute and item access.
It is intended to be used with keys that are valid Python identifiers. However, except for string keys containing a dot, there are actually no hard limitations. If a key equals an existing attribute name, attribute access is of course not possible.
Nested items can be accessed via a dot notation, as shown in the example below.
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict--examples","title":"Examples","text":"n = NADict(a=1, b=NADict(c=3, d=4)) n['a'] 1 n.a 1 n['b.c'] 3 n.b.c 3 n['b.e'] = 5 n.b.e 5
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict--attributes","title":"Attributes","text":"_dict : dict Dictionary holding the actial items.
Source code in ontopy/nadict.py
class NADict:\n \"\"\"A nested dict with both attribute and item access.\n\n It is intended to be used with keys that are valid Python\n identifiers. However, except for string keys containing a dot,\n there are actually no hard limitations. If a key equals an existing\n attribute name, attribute access is of cause not possible.\n\n Nested items can be accessed via a dot notation, as shown in the\n example below.\n\n Examples\n --------\n >>> n = NADict(a=1, b=NADict(c=3, d=4))\n >>> n['a']\n 1\n >>> n.a\n 1\n >>> n['b.c']\n 3\n >>> n.b.c\n 3\n >>> n['b.e'] = 5\n >>> n.b.e\n 5\n\n Attributes\n ----------\n _dict : dict\n Dictionary holding the actial items.\n \"\"\"\n\n def __init__(self, *args, **kw):\n object.__setattr__(self, \"_dict\", {})\n self.update(*args, **kw)\n\n def __getitem__(self, key):\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1][key2]\n return self._dict[key]\n\n def __setitem__(self, key, value):\n if key in (\n \"clear\",\n \"copy\",\n \"fromkeys\",\n \"get\",\n \"items\",\n \"keys\",\n \"pop\",\n \"popitem\",\n \"setdefault\",\n \"update\",\n \"values\",\n ):\n raise ValueError(\n f\"invalid key {key!r}: must not override supported dict method\"\n \" names\"\n )\n\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n if key1 not in self._dict:\n self._dict[key1] = NADict()\n self._dict[key1][key2] = value\n elif key in self._dict:\n if isinstance(self._dict[key], NADict):\n self._dict[key].update(value)\n else:\n self._dict[key] = value\n else:\n if isinstance(value, Mapping):\n self._dict[key] = NADict(value)\n else:\n self._dict[key] = value\n\n def __delitem__(self, key):\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n del self._dict[key1][key2]\n else:\n del self._dict[key]\n\n def __getattr__(self, key):\n if key not in self._dict:\n raise AttributeError(f\"No such key: {key}\")\n return self._dict[key]\n\n def __setattr__(self, key, value):\n if key in self._dict:\n self._dict[key] = value\n else:\n object.__setattr__(self, key, value)\n\n def __delattr__(self, key):\n if key in self._dict:\n del self._dict[key]\n else:\n object.__delattr__(self, key)\n\n def __len__(self):\n return len(self._dict)\n\n def __contains__(self, key):\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return key2 in self._dict[key1]\n return key in self._dict\n\n def __iter__(self, prefix=\"\"):\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.__iter__(key)\n else:\n yield key\n\n def __repr__(self):\n return (\n f\"{self.__class__.__name__}(\"\n f\"{', '.join(f'{key}={value!r}' for key, value in self._dict.items())})\" # pylint: disable=line-too-long\n )\n\n def clear(self):\n \"\"\"Clear all keys.\"\"\"\n self._dict.clear()\n\n def copy(self):\n \"\"\"Returns a deep copy of self.\"\"\"\n return copy.deepcopy(self)\n\n @staticmethod\n def fromkeys(iterable, value=None):\n \"\"\"Returns a new NADict with keys from `iterable` and values\n set to `value`.\"\"\"\n res = NADict()\n for key in iterable:\n res[key] = value\n return res\n\n def get(self, key, default=None):\n \"\"\"Returns the value for `key` if `key` is in self, else return\n `default`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].get(key2, default)\n return self._dict.get(key, default)\n\n def items(self, prefix=\"\"):\n \"\"\"Returns an iterator over all items as (key, value) pairs.\"\"\"\n for key, value in self._dict.items():\n key = 
f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.items(key)\n else:\n yield (key, value)\n\n def keys(self, prefix=\"\"):\n \"\"\"Returns an iterator over all keys.\"\"\"\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.keys(key)\n else:\n yield key\n\n def pop(self, key, default=None):\n \"\"\"Removed `key` and returns corresponding value. If `key` is not\n found, `default` is returned if given, otherwise KeyError is\n raised.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].pop(key2, default)\n return self._dict.pop(key, default)\n\n def popitem(self, prefix=\"\"):\n \"\"\"Removes and returns some (key, value). Raises KeyError if empty.\"\"\"\n item = self._dict.popitem()\n if isinstance(item, NADict):\n key, value = item\n item2 = item.popitem(key)\n self._dict[key] = value\n return item2\n key, value = self._dict.popitem()\n key = f\"{prefix}.{key}\" if prefix else key\n return (key, value)\n\n def setdefault(self, key, value=None):\n \"\"\"Inserts `key` and `value` pair if key is not found.\n\n Returns the new value for `key`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].setdefault(key2, value)\n return self._dict.setdefault(key, value)\n\n def update(self, *args, **kwargs):\n \"\"\"Updates self with dict/iterable from `args` and keyword arguments\n from `kw`.\"\"\"\n for arg in args:\n if hasattr(arg, \"keys\"):\n for _ in arg:\n self[_] = arg[_]\n else:\n for key, value in arg:\n self[key] = value\n for key, value in kwargs.items():\n self[key] = value\n\n def values(self):\n \"\"\"Returns a set-like providing a view of all style values.\"\"\"\n return self._dict.values()\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.clear","title":"clear(self)
","text":"Clear all keys.
Source code in ontopy/nadict.py
def clear(self):\n \"\"\"Clear all keys.\"\"\"\n self._dict.clear()\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.copy","title":"copy(self)
","text":"Returns a deep copy of self.
Source code in ontopy/nadict.py
def copy(self):\n \"\"\"Returns a deep copy of self.\"\"\"\n return copy.deepcopy(self)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.fromkeys","title":"fromkeys(iterable, value=None)
staticmethod
","text":"Returns a new NADict with keys from iterable
and values set to value
.
Source code in ontopy/nadict.py
@staticmethod\ndef fromkeys(iterable, value=None):\n \"\"\"Returns a new NADict with keys from `iterable` and values\n set to `value`.\"\"\"\n res = NADict()\n for key in iterable:\n res[key] = value\n return res\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.get","title":"get(self, key, default=None)
","text":"Returns the value for key
if key
is in self, else return default
.
Source code in ontopy/nadict.py
def get(self, key, default=None):\n \"\"\"Returns the value for `key` if `key` is in self, else return\n `default`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].get(key2, default)\n return self._dict.get(key, default)\n
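A small hypothetical example of dotted-key lookup with a fallback default:
>>> n = NADict(a=1, b=NADict(c=3))
>>> n.get('b.c')
3
>>> n.get('b.missing', 'fallback')
'fallback'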
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.items","title":"items(self, prefix='')
","text":"Returns an iterator over all items as (key, value) pairs.
Source code in ontopy/nadict.py
def items(self, prefix=\"\"):\n \"\"\"Returns an iterator over all items as (key, value) pairs.\"\"\"\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.items(key)\n else:\n yield (key, value)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.keys","title":"keys(self, prefix='')
","text":"Returns an iterator over all keys.
Source code in ontopy/nadict.py
def keys(self, prefix=\"\"):\n \"\"\"Returns an iterator over all keys.\"\"\"\n for key, value in self._dict.items():\n key = f\"{prefix}.{key}\" if prefix else key\n if isinstance(value, NADict):\n yield from value.keys(key)\n else:\n yield key\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.pop","title":"pop(self, key, default=None)
","text":"Removed key
and returns corresponding value. If key
is not found, default
is returned if given, otherwise KeyError is raised.
Source code in ontopy/nadict.py
def pop(self, key, default=None):\n \"\"\"Removes `key` and returns the corresponding value. If `key` is not\n found, `default` is returned if given, otherwise KeyError is\n raised.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].pop(key2, default)\n return self._dict.pop(key, default)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.popitem","title":"popitem(self, prefix='')
","text":"Removes and returns some (key, value). Raises KeyError if empty.
Source code in ontopy/nadict.py
def popitem(self, prefix=\"\"):\n \"\"\"Removes and returns some (key, value). Raises KeyError if empty.\"\"\"\n item = self._dict.popitem()\n if isinstance(item, NADict):\n key, value = item\n item2 = item.popitem(key)\n self._dict[key] = value\n return item2\n key, value = self._dict.popitem()\n key = f\"{prefix}.{key}\" if prefix else key\n return (key, value)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.setdefault","title":"setdefault(self, key, value=None)
","text":"Inserts key
and value
pair if key is not found.
Returns the new value for key
.
Source code in ontopy/nadict.py
def setdefault(self, key, value=None):\n \"\"\"Inserts `key` and `value` pair if key is not found.\n\n Returns the new value for `key`.\"\"\"\n if \".\" in key:\n key1, key2 = key.split(\".\", 1)\n return self._dict[key1].setdefault(key2, value)\n return self._dict.setdefault(key, value)\n
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.update","title":"update(self, *args, **kwargs)
","text":"Updates self with dict/iterable from args
and keyword arguments from kwargs
.
Source code in ontopy/nadict.py
def update(self, *args, **kwargs):\n \"\"\"Updates self with dict/iterable from `args` and keyword arguments\n from `kwargs`.\"\"\"\n for arg in args:\n if hasattr(arg, \"keys\"):\n for _ in arg:\n self[_] = arg[_]\n else:\n for key, value in arg:\n self[key] = value\n for key, value in kwargs.items():\n self[key] = value\n
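A brief sketch with illustrative values, showing that update() merges into an existing nested NADict instead of replacing it:
>>> n = NADict(style=NADict(color='blue'))
>>> n.update({'style': {'width': 2}})
>>> n['style.color'], n['style.width']
('blue', 2)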
"},{"location":"api_reference/ontopy/nadict/#ontopy.nadict.NADict.values","title":"values(self)
","text":"Returns a set-like providing a view of all style values.
Source code in ontopy/nadict.py
def values(self):\n \"\"\"Returns a set-like providing a view of all values.\"\"\"\n return self._dict.values()\n
"},{"location":"api_reference/ontopy/ontodoc/","title":"ontodoc","text":"A module for documenting ontologies.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.AttributeDict","title":" AttributeDict (dict)
","text":"A dict with attribute access.
Note that methods like keys() and update() may be overridden.
Source code in ontopy/ontodoc.py
class AttributeDict(dict):\n \"\"\"A dict with attribute access.\n\n Note that methods like keys() and update() may be overridden.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.__dict__ = self\n
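A minimal illustration (made-up keys) of the attribute access provided by AttributeDict:
>>> from ontopy.ontodoc import AttributeDict
>>> d = AttributeDict(title='My ontology', level=2)
>>> d.title
'My ontology'
>>> d['level']
2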
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP","title":" DocPP
","text":"Documentation pre-processor.
It supports the following features:
Comment lines
%% Comment line...\n
Insert header with given level
%HEADER label [level=1]\n
Insert figure with optional caption and width. filepath
should be relative to basedir
. If width is 0, no width will be specified.
%FIGURE filepath [caption='' width=0px]\n
Include other markdown files. Header levels may be shifted up or down with shift
%INCLUDE filepath [shift=0]\n
Insert generated documentation for ontology entity. The header level may be set with header_level
.
%ENTITY name [header_level=3]\n
Insert generated documentation for ontology branch name
. Options:
header_level: Header level.
terminated: Whether the branch should be terminated at all branch names in the final document.
include_leaves: Whether to include leaves as end points to the branch.
%BRANCH name [header_level=3 terminated=1 include_leaves=0 namespaces='' ontologies='']
Insert generated figure of ontology branch name
. The figure is written to path
. The default path is figdir
/name
, where figdir
is given at class instantiation. It is recommended to exclude the file extension from path
. In this case, the default figformat will be used (and easily adjusted to the correct format required by the backend). leaves
may be a comma-separated list of leaf node names.
%BRANCHFIG name [path='' caption='' terminated=1 include_leaves=1\n strict_leaves=1, width=0px leaves='' relations=all\n edgelabels=0 namespaces='' ontologies='']\n
This is a combination of the %HEADER and %BRANCHFIG directives.
%BRANCHHEAD name [level=2 path='' caption='' terminated=1\n include_leaves=1 width=0px leaves='']\n
This is a combination of the %HEADER, %BRANCHFIG and %BRANCH directives. It inserts documentation of branch name
, with a header followed by a figure and then documentation of each element.
%BRANCHDOC name [level=2 path='' title='' caption='' terminated=1\n strict_leaves=1 width=0px leaves='' relations='all'\n rankdir='BT' legend=1 namespaces='' ontologies='']\n
Insert generated documentation for all entities of the given type. Valid values of type
are: \"classes\", \"individuals\", \"object_properties\", \"data_properties\", \"annotations_properties\"
%ALL type [header_level=3, namespaces='', ontologies='']\n
Insert generated figure of all entities of the given type. Valid values of type
are: \"classes\", \"object_properties\" and \"data_properties\".
%ALLFIG type\n
template : str
    Input template.
ontodoc : OntoDoc instance
    Instance of OntoDoc.
basedir : str
    Base directory for including relative file paths.
figdir : str
    Default directory to store generated figures.
figformat : str
    Default format for generated figures.
figscale : float
    Default scaling of generated figures.
maxwidth : float
    Maximum figure width. Figures larger than this will be rescaled.
imported : bool
    Whether to include imported entities.
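As a hypothetical illustration (the entity and branch names below are made up), an input template combining several of these directives could look like:
%HEADER MyOntology level=1
%% This comment line is stripped by the pre-processor
%INCLUDE introduction.md shift=1
%ENTITY Atom header_level=3
%BRANCHDOC Material level=2
%ALL classes header_level=3
%ALLFIG object_properties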
Source code in ontopy/ontodoc.py
class DocPP: # pylint: disable=too-many-instance-attributes\n \"\"\"Documentation pre-processor.\n\n It supports the following features:\n\n * Comment lines\n\n %% Comment line...\n\n * Insert header with given level\n\n %HEADER label [level=1]\n\n * Insert figure with optional caption and width. `filepath`\n should be relative to `basedir`. If width is 0, no width will\n be specified.\n\n %FIGURE filepath [caption='' width=0px]\n\n * Include other markdown files. Header levels may be up or down with\n `shift`\n\n %INCLUDE filepath [shift=0]\n\n * Insert generated documentation for ontology entity. The header\n level may be set with `header_level`.\n\n %ENTITY name [header_level=3]\n\n * Insert generated documentation for ontology branch `name`. Options:\n - header_level: Header level.\n - terminated: Whether to branch should be terminated at all branch\n names in the final document.\n - include_leaves: Whether to include leaves as end points\n to the branch.\n\n %BRANCH name [header_level=3 terminated=1 include_leaves=0\n namespaces='' ontologies='']\n\n * Insert generated figure of ontology branch `name`. The figure\n is written to `path`. The default path is `figdir`/`name`,\n where `figdir` is given at class initiation. It is recommended\n to exclude the file extension from `path`. In this case, the\n default figformat will be used (and easily adjusted to the\n correct format required by the backend). `leaves` may be a comma-\n separated list of leaf node names.\n\n %BRANCHFIG name [path='' caption='' terminated=1 include_leaves=1\n strict_leaves=1, width=0px leaves='' relations=all\n edgelabels=0 namespaces='' ontologies='']\n\n * This is a combination of the %HEADER and %BRANCHFIG directives.\n\n %BRANCHHEAD name [level=2 path='' caption='' terminated=1\n include_leaves=1 width=0px leaves='']\n\n * This is a combination of the %HEADER, %BRANCHFIG and %BRANCH\n directives. It inserts documentation of branch `name`, with a\n header followed by a figure and then documentation of each\n element.\n\n %BRANCHDOC name [level=2 path='' title='' caption='' terminated=1\n strict_leaves=1 width=0px leaves='' relations='all'\n rankdir='BT' legend=1 namespaces='' ontologies='']\n\n * Insert generated documentation for all entities of the given type.\n Valid values of `type` are: \"classes\", \"individuals\",\n \"object_properties\", \"data_properties\", \"annotations_properties\"\n\n %ALL type [header_level=3, namespaces='', ontologies='']\n\n * Insert generated figure of all entities of the given type.\n Valid values of `type` are: \"classes\", \"object_properties\" and\n \"data_properties\".\n\n %ALLFIG type\n\n Parameters\n ----------\n template : str\n Input template.\n ontodoc : OntoDoc instance\n Instance of OntoDoc\n basedir : str\n Base directory for including relative file paths.\n figdir : str\n Default directory to store generated figures.\n figformat : str\n Default format for generated figures.\n figscale : float\n Default scaling of generated figures.\n maxwidth : float\n Maximum figure width. Figures larger than this will be rescaled.\n imported : bool\n Whether to include imported entities.\n \"\"\"\n\n # FIXME - this class should be refractured:\n # * Instead of rescan the entire document for each pre-processer\n # directive, we should scan the source like by line and handle\n # each directive as they occour.\n # * The current implementation has a lot of dublicated code.\n # * Instead of modifying the source in-place, we should copy to a\n # result list. 
This will make good error reporting much easier.\n # * Branch leaves are only looked up in the file witht the %BRANCH\n # directive, not in all included files as expedted.\n\n def __init__( # pylint: disable=too-many-arguments\n self,\n template,\n ontodoc,\n basedir=\".\",\n figdir=\"genfigs\",\n figformat=\"png\",\n figscale=1.0,\n maxwidth=None,\n imported=False,\n ):\n self.lines = template.split(\"\\n\")\n self.ontodoc = ontodoc\n self.basedir = basedir\n self.figdir = os.path.join(basedir, figdir)\n self.figformat = figformat\n self.figscale = figscale\n self.maxwidth = maxwidth\n self.imported = imported\n self._branch_cache = None\n self._processed = False # Whether process() has been called\n\n def __str__(self):\n return self.get_buffer()\n\n def get_buffer(self):\n \"\"\"Returns the current buffer.\"\"\"\n return \"\\n\".join(self.lines)\n\n def copy(self):\n \"\"\"Returns a copy of self.\"\"\"\n docpp = DocPP(\n \"\",\n self.ontodoc,\n self.basedir,\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.lines[:] = self.lines\n docpp.figdir = self.figdir\n return docpp\n\n def get_branches(self):\n \"\"\"Returns a list with all branch names as specified with %BRANCH\n (in current and all included documents). The returned value is\n cached for efficiency purposes and so that it is not lost after\n processing branches.\"\"\"\n if self._branch_cache is None:\n names = []\n docpp = self.copy()\n docpp.process_includes()\n for line in docpp.lines:\n if line.startswith(\"%BRANCH\"):\n names.append(shlex.split(line)[1])\n self._branch_cache = names\n return self._branch_cache\n\n def shift_header_levels(self, shift):\n \"\"\"Shift header level of all hashtag-headers in buffer. Underline\n headers are ignored.\"\"\"\n if not shift:\n return\n pat = re.compile(\"^#+ \")\n for i, line in enumerate(self.lines):\n match = pat.match(line)\n if match:\n if shift > 0:\n self.lines[i] = \"#\" * shift + line\n elif shift < 0:\n counter = match.end()\n if shift > counter:\n self.lines[i] = line.lstrip(\"# \")\n else:\n self.lines[i] = line[counter:]\n\n def process_comments(self):\n \"\"\"Strips out comment lines starting with \"%%\".\"\"\"\n self.lines = [line for line in self.lines if not line.startswith(\"%%\")]\n\n def process_headers(self):\n \"\"\"Expand all %HEADER specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%HEADER \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], level=1)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_header(\n name, int(opts.level) # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_figures(self):\n \"\"\"Expand all %FIGURE specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%FIGURE \"):\n tokens = shlex.split(line)\n path = tokens[1]\n opts = get_options(tokens[2:], caption=\"\", width=0)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n os.path.join(self.basedir, path),\n caption=opts.caption, # pylint: disable=no-member\n width=opts.width, # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_entities(self):\n \"\"\"Expand all %ENTITY specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ENTITY \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemdoc(\n name, int(opts.header_level) # pylint: 
disable=no-member\n ).split(\"\\n\")\n\n def process_branches(self):\n \"\"\"Expand all %BRANCH specifications.\"\"\"\n onto = self.ontodoc.onto\n\n # Get all branch names in final document\n names = self.get_branches()\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCH \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n header_level=3,\n terminated=1,\n include_leaves=0,\n namespaces=\"\",\n ontologies=\"\",\n )\n leaves = (\n names if opts.terminated else ()\n ) # pylint: disable=no-member\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n branch = filter_classes(\n onto.get_branch(\n name, leaves, opts.include_leaves\n ), # pylint: disable=no-member\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n branch, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n\n def _make_branchfig( # pylint: disable=too-many-arguments,too-many-locals\n self,\n name: str,\n path: \"Union[Path, str]\",\n terminated: bool,\n include_leaves: bool,\n strict_leaves: bool,\n width: float,\n leaves: \"Union[str, list[str]]\",\n relations: str,\n edgelabels: str,\n rankdir: str,\n legend: bool,\n included_namespaces: \"Iterable[str]\",\n included_ontologies: \"Iterable[str]\",\n ) -> \"tuple[str, list[str], float]\":\n \"\"\"Help method for process_branchfig().\n\n Args:\n name: name of branch root\n path: optional figure path name\n include_leaves: whether to include leaves as end points\n to the branch.\n strict_leaves: whether to strictly exclude leave descendants\n terminated: whether the graph should be terminated at leaf nodes\n width: optional figure width\n leaves: optional leaf node names for graph termination\n relations: comma-separated list of relations to include\n edgelabels: whether to include edgelabels\n rankdir: graph direction (BT, TB, RL, LR)\n legend: whether to add legend\n included_namespaces: sequence of names of namespaces to be included\n included_ontologies: sequence of names of ontologies to be included\n\n Returns:\n filepath: path to generated figure\n leaves: used list of leaf node names\n width: actual figure width\n\n \"\"\"\n onto = self.ontodoc.onto\n if leaves:\n if isinstance(leaves, str):\n leaves = leaves.split(\",\")\n elif terminated:\n leaves = set(self.get_branches())\n leaves.discard(name)\n else:\n leaves = None\n if path:\n figdir = os.path.dirname(path)\n formatext = os.path.splitext(path)[1]\n if formatext:\n fmt = formatext.lstrip(\".\")\n else:\n fmt = self.figformat\n path += f\".{fmt}\"\n else:\n figdir = self.figdir\n fmt = self.figformat\n term = \"T\" if terminated else \"\"\n path = os.path.join(figdir, name + term) + f\".{fmt}\"\n\n # Create graph\n graph = OntoGraph(onto, graph_attr={\"rankdir\": rankdir})\n graph.add_branch(\n root=name,\n leaves=leaves,\n include_leaves=include_leaves,\n strict_leaves=strict_leaves,\n relations=relations,\n edgelabels=edgelabels,\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n if legend:\n graph.add_legend()\n\n if not width:\n figwidth, _ = graph.get_figsize()\n width = self.figscale * figwidth\n if self.maxwidth and width > self.maxwidth:\n width = self.maxwidth\n\n filepath = 
os.path.join(self.basedir, path)\n destdir = os.path.dirname(filepath)\n if not os.path.exists(destdir):\n os.makedirs(destdir)\n graph.save(filepath, fmt=fmt)\n return filepath, leaves, width\n\n def process_branchfigs(self):\n \"\"\"Process all %BRANCHFIG directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHFIG \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n caption=\"\",\n terminated=1,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_branchdocs(self): # pylint: disable=too-many-locals\n \"\"\"Process all %BRANCHDOC and %BRANCHEAD directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHDOC \") or line.startswith(\n \"%BRANCHHEAD \"\n ):\n with_branch = bool(line.startswith(\"%BRANCHDOC \"))\n tokens = shlex.split(line)\n name = tokens[1]\n title = camelsplit(name)\n title = title[0].upper() + title[1:] + \" branch\"\n opts = get_options(\n tokens[2:],\n level=2,\n path=\"\",\n title=title,\n caption=title + \".\",\n terminated=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n include_leaves = 1\n filepath, leaves, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n include_leaves,\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n sec = []\n sec.append(\n self.ontodoc.get_header(opts.title, int(opts.level))\n ) # pylint: disable=no-member\n sec.append(\n self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n )\n )\n if with_branch:\n include_leaves = 0\n branch = filter_classes(\n onto.get_branch(name, leaves, include_leaves),\n 
included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n sec.append(\n self.ontodoc.itemsdoc(\n branch, int(opts.level + 1)\n ) # pylint: disable=no-member\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n\n def process_alls(self):\n \"\"\"Expand all %ALL specifications.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALL \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n if token == \"classes\": # nosec\n items = onto.classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n items = onto.object_properties(imported=self.imported)\n elif token == \"data_properties\": # nosec\n items = onto.data_properties(imported=self.imported)\n elif token == \"annotation_properties\": # nosec\n items = onto.annotation_properties(imported=self.imported)\n elif token == \"individuals\": # nosec\n items = onto.individuals(imported=self.imported)\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALL: {token}\"\n )\n items = sorted(items, key=get_label)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n items, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n\n def process_allfig(self): # pylint: disable=too-many-locals\n \"\"\"Process all %ALLFIG directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALLFIG \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n level=3,\n terminated=0,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"isA\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n if token == \"classes\": # nosec\n roots = onto.get_root_classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n roots = onto.get_root_object_properties(\n imported=self.imported\n )\n elif token == \"data_properties\": # nosec\n roots = onto.get_root_data_properties(\n imported=self.imported\n )\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALLFIG: {token}\"\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n sec = []\n for root in roots:\n name = asstring(root, link=\"{label}\", ontology=onto)\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n title = f\"Taxonomy of {name}.\"\n sec.append(\n self.ontodoc.get_header(title, int(opts.level))\n ) # pylint: disable=no-member\n sec.extend(\n self.ontodoc.get_figure(\n filepath, caption=title, width=width\n ).split(\"\\n\")\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n\n def process_includes(self):\n \"\"\"Process all %INCLUDE directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if 
line.startswith(\"%INCLUDE \"):\n tokens = shlex.split(line)\n filepath = tokens[1]\n opts = get_options(tokens[2:], shift=0)\n with open(\n os.path.join(self.basedir, filepath), \"rt\", encoding=\"utf8\"\n ) as handle:\n docpp = DocPP(\n handle.read(),\n self.ontodoc,\n basedir=os.path.dirname(filepath),\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.figdir = self.figdir\n if opts.shift: # pylint: disable=no-member\n docpp.shift_header_levels(\n int(opts.shift)\n ) # pylint: disable=no-member\n docpp.process()\n del self.lines[i]\n self.lines[i:i] = docpp.lines\n\n def process(self):\n \"\"\"Perform all pre-processing steps.\"\"\"\n if not self._processed:\n self.process_comments()\n self.process_headers()\n self.process_figures()\n self.process_entities()\n self.process_branches()\n self.process_branchfigs()\n self.process_branchdocs()\n self.process_alls()\n self.process_allfig()\n self.process_includes()\n self._processed = True\n\n def write( # pylint: disable=too-many-arguments\n self,\n outfile,\n fmt=None,\n pandoc_option_files=(),\n pandoc_options=(),\n genfile=None,\n verbose=True,\n ):\n \"\"\"Writes documentation to `outfile`.\n\n Parameters\n ----------\n outfile : str\n File that the documentation is written to.\n fmt : str\n Output format. If it is \"md\" or \"simple-html\",\n the built-in template generator is used. Otherwise\n pandoc is used. If not given, the format is inferred\n from the `outfile` name extension.\n pandoc_option_files : sequence\n Sequence with command line arguments provided to pandoc.\n pandoc_options : sequence\n Additional pandoc options overriding options read from\n `pandoc_option_files`.\n genfile : str\n Store temporary generated markdown input file to pandoc\n to this file (for debugging).\n verbose : bool\n Whether to show some messages when running pandoc.\n \"\"\"\n self.process()\n content = self.get_buffer()\n\n substitutions = self.ontodoc.style.get(\"substitutions\", [])\n for reg, sub in substitutions:\n content = re.sub(reg, sub, content)\n\n fmt = get_format(outfile, default=\"html\", fmt=fmt)\n if fmt not in (\"simple-html\", \"markdown\", \"md\"): # Run pandoc\n if not genfile:\n with NamedTemporaryFile(mode=\"w+t\", suffix=\".md\") as temp_file:\n temp_file.write(content)\n temp_file.flush()\n genfile = temp_file.name\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n with open(genfile, \"wt\") as handle:\n handle.write(content)\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n if verbose:\n print(\"Writing:\", outfile)\n with open(outfile, \"wt\") as handle:\n handle.write(content)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.copy","title":"copy(self)
","text":"Returns a copy of self.
Source code in ontopy/ontodoc.py
def copy(self):\n \"\"\"Returns a copy of self.\"\"\"\n docpp = DocPP(\n \"\",\n self.ontodoc,\n self.basedir,\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.lines[:] = self.lines\n docpp.figdir = self.figdir\n return docpp\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.get_branches","title":"get_branches(self)
","text":"Returns a list with all branch names as specified with %BRANCH (in current and all included documents). The returned value is cached for efficiency purposes and so that it is not lost after processing branches.
Source code in ontopy/ontodoc.py
def get_branches(self):\n \"\"\"Returns a list with all branch names as specified with %BRANCH\n (in current and all included documents). The returned value is\n cached for efficiency purposes and so that it is not lost after\n processing branches.\"\"\"\n if self._branch_cache is None:\n names = []\n docpp = self.copy()\n docpp.process_includes()\n for line in docpp.lines:\n if line.startswith(\"%BRANCH\"):\n names.append(shlex.split(line)[1])\n self._branch_cache = names\n return self._branch_cache\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.get_buffer","title":"get_buffer(self)
","text":"Returns the current buffer.
Source code in ontopy/ontodoc.py
def get_buffer(self):\n \"\"\"Returns the current buffer.\"\"\"\n return \"\\n\".join(self.lines)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process","title":"process(self)
","text":"Perform all pre-processing steps.
Source code in ontopy/ontodoc.py
def process(self):\n \"\"\"Perform all pre-processing steps.\"\"\"\n if not self._processed:\n self.process_comments()\n self.process_headers()\n self.process_figures()\n self.process_entities()\n self.process_branches()\n self.process_branchfigs()\n self.process_branchdocs()\n self.process_alls()\n self.process_allfig()\n self.process_includes()\n self._processed = True\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_allfig","title":"process_allfig(self)
","text":"Process all %ALLFIG directives.
Source code in ontopy/ontodoc.py
def process_allfig(self): # pylint: disable=too-many-locals\n \"\"\"Process all %ALLFIG directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALLFIG \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n level=3,\n terminated=0,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"isA\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n if token == \"classes\": # nosec\n roots = onto.get_root_classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n roots = onto.get_root_object_properties(\n imported=self.imported\n )\n elif token == \"data_properties\": # nosec\n roots = onto.get_root_data_properties(\n imported=self.imported\n )\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALLFIG: {token}\"\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n sec = []\n for root in roots:\n name = asstring(root, link=\"{label}\", ontology=onto)\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n title = f\"Taxonomy of {name}.\"\n sec.append(\n self.ontodoc.get_header(title, int(opts.level))\n ) # pylint: disable=no-member\n sec.extend(\n self.ontodoc.get_figure(\n filepath, caption=title, width=width\n ).split(\"\\n\")\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_alls","title":"process_alls(self)
","text":"Expand all %ALL specifications.
Source code in ontopy/ontodoc.py
def process_alls(self):\n \"\"\"Expand all %ALL specifications.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ALL \"):\n tokens = shlex.split(line)\n token = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n if token == \"classes\": # nosec\n items = onto.classes(imported=self.imported)\n elif token in (\"object_properties\", \"relations\"):\n items = onto.object_properties(imported=self.imported)\n elif token == \"data_properties\": # nosec\n items = onto.data_properties(imported=self.imported)\n elif token == \"annotation_properties\": # nosec\n items = onto.annotation_properties(imported=self.imported)\n elif token == \"individuals\": # nosec\n items = onto.individuals(imported=self.imported)\n else:\n raise InvalidTemplateError(\n f\"Invalid argument to %%ALL: {token}\"\n )\n items = sorted(items, key=get_label)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n items, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_branchdocs","title":"process_branchdocs(self)
","text":"Process all %BRANCHDOC and %BRANCHEAD directives.
Source code in ontopy/ontodoc.py
def process_branchdocs(self): # pylint: disable=too-many-locals\n \"\"\"Process all %BRANCHDOC and %BRANCHEAD directives.\"\"\"\n onto = self.ontodoc.onto\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHDOC \") or line.startswith(\n \"%BRANCHHEAD \"\n ):\n with_branch = bool(line.startswith(\"%BRANCHDOC \"))\n tokens = shlex.split(line)\n name = tokens[1]\n title = camelsplit(name)\n title = title[0].upper() + title[1:] + \" branch\"\n opts = get_options(\n tokens[2:],\n level=2,\n path=\"\",\n title=title,\n caption=title + \".\",\n terminated=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n include_leaves = 1\n filepath, leaves, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n include_leaves,\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n sec = []\n sec.append(\n self.ontodoc.get_header(opts.title, int(opts.level))\n ) # pylint: disable=no-member\n sec.append(\n self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n )\n )\n if with_branch:\n include_leaves = 0\n branch = filter_classes(\n onto.get_branch(name, leaves, include_leaves),\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n sec.append(\n self.ontodoc.itemsdoc(\n branch, int(opts.level + 1)\n ) # pylint: disable=no-member\n )\n\n del self.lines[i]\n self.lines[i:i] = sec\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_branches","title":"process_branches(self)
","text":"Expand all %BRANCH specifications.
Source code in ontopy/ontodoc.py
def process_branches(self):\n \"\"\"Expand all %BRANCH specifications.\"\"\"\n onto = self.ontodoc.onto\n\n # Get all branch names in final document\n names = self.get_branches()\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCH \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n header_level=3,\n terminated=1,\n include_leaves=0,\n namespaces=\"\",\n ontologies=\"\",\n )\n leaves = (\n names if opts.terminated else ()\n ) # pylint: disable=no-member\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n branch = filter_classes(\n onto.get_branch(\n name, leaves, opts.include_leaves\n ), # pylint: disable=no-member\n included_namespaces=included_namespaces,\n included_ontologies=included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemsdoc(\n branch, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_branchfigs","title":"process_branchfigs(self)
","text":"Process all %BRANCHFIG directives.
Source code in ontopy/ontodoc.py
def process_branchfigs(self):\n \"\"\"Process all %BRANCHFIG directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%BRANCHFIG \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(\n tokens[2:],\n path=\"\",\n caption=\"\",\n terminated=1,\n include_leaves=1,\n strict_leaves=1,\n width=0,\n leaves=\"\",\n relations=\"all\",\n edgelabels=0,\n rankdir=\"BT\",\n legend=1,\n namespaces=\"\",\n ontologies=\"\",\n )\n\n included_namespaces = (\n opts.namespaces.split(\",\")\n if opts.namespaces\n else () # pylint: disable=no-member\n )\n included_ontologies = (\n opts.ontologies.split(\",\")\n if opts.ontologies\n else () # pylint: disable=no-member\n )\n\n filepath, _, width = self._make_branchfig(\n name,\n opts.path, # pylint: disable=no-member\n opts.terminated, # pylint: disable=no-member\n opts.include_leaves, # pylint: disable=no-member\n opts.strict_leaves, # pylint: disable=no-member\n opts.width, # pylint: disable=no-member\n opts.leaves, # pylint: disable=no-member\n opts.relations, # pylint: disable=no-member\n opts.edgelabels, # pylint: disable=no-member\n opts.rankdir, # pylint: disable=no-member\n opts.legend, # pylint: disable=no-member\n included_namespaces,\n included_ontologies,\n )\n\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n filepath,\n caption=opts.caption,\n width=width, # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_comments","title":"process_comments(self)
","text":"Strips out comment lines starting with \"%%\".
Source code in ontopy/ontodoc.py
def process_comments(self):\n \"\"\"Strips out comment lines starting with \"%%\".\"\"\"\n self.lines = [line for line in self.lines if not line.startswith(\"%%\")]\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_entities","title":"process_entities(self)
","text":"Expand all %ENTITY specifications.
Source code in ontopy/ontodoc.py
def process_entities(self):\n \"\"\"Expand all %ENTITY specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%ENTITY \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], header_level=3)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.itemdoc(\n name, int(opts.header_level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_figures","title":"process_figures(self)
","text":"Expand all %FIGURE specifications.
Source code in ontopy/ontodoc.py
def process_figures(self):\n \"\"\"Expand all %FIGURE specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%FIGURE \"):\n tokens = shlex.split(line)\n path = tokens[1]\n opts = get_options(tokens[2:], caption=\"\", width=0)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_figure(\n os.path.join(self.basedir, path),\n caption=opts.caption, # pylint: disable=no-member\n width=opts.width, # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_headers","title":"process_headers(self)
","text":"Expand all %HEADER specifications.
Source code in ontopy/ontodoc.py
def process_headers(self):\n \"\"\"Expand all %HEADER specifications.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%HEADER \"):\n tokens = shlex.split(line)\n name = tokens[1]\n opts = get_options(tokens[2:], level=1)\n del self.lines[i]\n self.lines[i:i] = self.ontodoc.get_header(\n name, int(opts.level) # pylint: disable=no-member\n ).split(\"\\n\")\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.process_includes","title":"process_includes(self)
","text":"Process all %INCLUDE directives.
Source code in ontopy/ontodoc.py
def process_includes(self):\n \"\"\"Process all %INCLUDE directives.\"\"\"\n for i, line in reversed(list(enumerate(self.lines))):\n if line.startswith(\"%INCLUDE \"):\n tokens = shlex.split(line)\n filepath = tokens[1]\n opts = get_options(tokens[2:], shift=0)\n with open(\n os.path.join(self.basedir, filepath), \"rt\", encoding=\"utf8\"\n ) as handle:\n docpp = DocPP(\n handle.read(),\n self.ontodoc,\n basedir=os.path.dirname(filepath),\n figformat=self.figformat,\n figscale=self.figscale,\n maxwidth=self.maxwidth,\n )\n docpp.figdir = self.figdir\n if opts.shift: # pylint: disable=no-member\n docpp.shift_header_levels(\n int(opts.shift)\n ) # pylint: disable=no-member\n docpp.process()\n del self.lines[i]\n self.lines[i:i] = docpp.lines\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.shift_header_levels","title":"shift_header_levels(self, shift)
","text":"Shift header level of all hashtag-headers in buffer. Underline headers are ignored.
Source code in ontopy/ontodoc.py
def shift_header_levels(self, shift):\n \"\"\"Shift header level of all hashtag-headers in buffer. Underline\n headers are ignored.\"\"\"\n if not shift:\n return\n pat = re.compile(\"^#+ \")\n for i, line in enumerate(self.lines):\n match = pat.match(line)\n if match:\n if shift > 0:\n self.lines[i] = \"#\" * shift + line\n elif shift < 0:\n counter = match.end()\n if shift > counter:\n self.lines[i] = line.lstrip(\"# \")\n else:\n self.lines[i] = line[counter:]\n
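A small sketch (assuming docpp is an existing DocPP instance) of what a positive shift does to hashtag headers in the buffer:
>>> docpp.lines = ['# Classes', 'Some text', '## Atom']
>>> docpp.shift_header_levels(1)
>>> docpp.lines
['## Classes', 'Some text', '### Atom']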
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.DocPP.write","title":"write(self, outfile, fmt=None, pandoc_option_files=(), pandoc_options=(), genfile=None, verbose=True)
","text":"Writes documentation to outfile
.
outfile : str
    File that the documentation is written to.
fmt : str
    Output format. If it is \"md\" or \"simple-html\", the built-in template generator is used. Otherwise pandoc is used. If not given, the format is inferred from the outfile name extension.
pandoc_option_files : sequence
    Sequence with command line arguments provided to pandoc.
pandoc_options : sequence
    Additional pandoc options overriding options read from pandoc_option_files.
genfile : str
    Store the temporary generated markdown input file to pandoc to this file (for debugging).
verbose : bool
    Whether to show some messages when running pandoc.
Source code in ontopy/ontodoc.py
def write( # pylint: disable=too-many-arguments\n self,\n outfile,\n fmt=None,\n pandoc_option_files=(),\n pandoc_options=(),\n genfile=None,\n verbose=True,\n):\n \"\"\"Writes documentation to `outfile`.\n\n Parameters\n ----------\n outfile : str\n File that the documentation is written to.\n fmt : str\n Output format. If it is \"md\" or \"simple-html\",\n the built-in template generator is used. Otherwise\n pandoc is used. If not given, the format is inferred\n from the `outfile` name extension.\n pandoc_option_files : sequence\n Sequence with command line arguments provided to pandoc.\n pandoc_options : sequence\n Additional pandoc options overriding options read from\n `pandoc_option_files`.\n genfile : str\n Store temporary generated markdown input file to pandoc\n to this file (for debugging).\n verbose : bool\n Whether to show some messages when running pandoc.\n \"\"\"\n self.process()\n content = self.get_buffer()\n\n substitutions = self.ontodoc.style.get(\"substitutions\", [])\n for reg, sub in substitutions:\n content = re.sub(reg, sub, content)\n\n fmt = get_format(outfile, default=\"html\", fmt=fmt)\n if fmt not in (\"simple-html\", \"markdown\", \"md\"): # Run pandoc\n if not genfile:\n with NamedTemporaryFile(mode=\"w+t\", suffix=\".md\") as temp_file:\n temp_file.write(content)\n temp_file.flush()\n genfile = temp_file.name\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n with open(genfile, \"wt\") as handle:\n handle.write(content)\n\n run_pandoc(\n genfile,\n outfile,\n fmt,\n pandoc_option_files=pandoc_option_files,\n pandoc_options=pandoc_options,\n verbose=verbose,\n )\n else:\n if verbose:\n print(\"Writing:\", outfile)\n with open(outfile, \"wt\") as handle:\n handle.write(content)\n
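A minimal end-to-end sketch, assuming an ontology has already been loaded with EMMOntoPy; the IRI and file names are illustrative only:
from ontopy import get_ontology
from ontopy.ontodoc import OntoDoc, DocPP

onto = get_ontology('https://example.com/myonto.ttl').load()  # illustrative IRI
ontodoc = OntoDoc(onto, style='markdown')
docpp = DocPP(ontodoc.get_default_template(), ontodoc, basedir='.')
docpp.write('myonto.md', fmt='md')  # 'md' uses the built-in generator, no pandoc needed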
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.InvalidTemplateError","title":" InvalidTemplateError (NameError)
","text":"Raised on errors in template files.
Source code in ontopy/ontodoc.py
class InvalidTemplateError(NameError):\n \"\"\"Raised on errors in template files.\"\"\"\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc","title":" OntoDoc
","text":"A class for helping documentating ontologies.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc--parameters","title":"Parameters","text":"onto : Ontology instance The ontology that should be documented. style : dict | \"html\" | \"markdown\" | \"markdown_tex\" A dict defining the following template strings (and substitutions):
:header: Formats a header.\n Substitutions: {level}, {label}\n:link: Formats a link.\n Substitutions: {name}\n:point: Formats a point (list item).\n Substitutions: {point}, {ontology}\n:points: Formats a list of points. Used within annotations.\n Substitutions: {points}, {ontology}\n:annotation: Formats an annotation.\n Substitutions: {key}, {value}, {ontology}\n:substitutions: list of ``(regex, sub)`` pairs for substituting\n annotation values.\n
Source code in ontopy/ontodoc.py
class OntoDoc:\n \"\"\"A class for helping documentating ontologies.\n\n Parameters\n ----------\n onto : Ontology instance\n The ontology that should be documented.\n style : dict | \"html\" | \"markdown\" | \"markdown_tex\"\n A dict defining the following template strings (and substitutions):\n\n :header: Formats an header.\n Substitutions: {level}, {label}\n :link: Formats a link.\n Substitutions: {name}\n :point: Formats a point (list item).\n Substitutions: {point}, {ontology}\n :points: Formats a list of points. Used within annotations.\n Substitutions: {points}, {ontology}\n :annotation: Formats an annotation.\n Substitutions: {key}, {value}, {ontology}\n :substitutions: list of ``(regex, sub)`` pairs for substituting\n annotation values.\n \"\"\"\n\n _markdown_style = {\n \"sep\": \"\\n\",\n \"figwidth\": \"{{ width={width:.0f}px }}\",\n \"figure\": \"![{caption}]({path}){figwidth}\\n\",\n \"header\": \"\\n{:#<{level}} {label} {{#{anchor}}}\",\n # Use ref instead of iri for local references in links\n \"link\": \"[{label}]({ref})\",\n \"point\": \" - {point}\\n\",\n \"points\": \"\\n\\n{points}\\n\",\n \"annotation\": \"**{key}:** {value}\\n\",\n \"substitutions\": [],\n }\n # Extra style settings for markdown+tex (e.g. pdf generation with pandoc)\n _markdown_tex_extra_style = {\n \"substitutions\": [\n # logic/math symbols\n (\"\\u2200\", r\"$\\\\forall$\"),\n (\"\\u2203\", r\"$\\\\exists$\"),\n (\"\\u2206\", r\"$\\\\nabla$\"),\n (\"\\u2227\", r\"$\\\\land$\"),\n (\"\\u2228\", r\"$\\\\lor$\"),\n (\"\\u2207\", r\"$\\\\nabla$\"),\n (\"\\u2212\", r\"-\"),\n (\"->\", r\"$\\\\rightarrow$\"),\n # uppercase greek letters\n (\"\\u0391\", r\"$\\\\Upalpha$\"),\n (\"\\u0392\", r\"$\\\\Upbeta$\"),\n (\"\\u0393\", r\"$\\\\Upgamma$\"),\n (\"\\u0394\", r\"$\\\\Updelta$\"),\n (\"\\u0395\", r\"$\\\\Upepsilon$\"),\n (\"\\u0396\", r\"$\\\\Upzeta$\"),\n (\"\\u0397\", r\"$\\\\Upeta$\"),\n (\"\\u0398\", r\"$\\\\Uptheta$\"),\n (\"\\u0399\", r\"$\\\\Upiota$\"),\n (\"\\u039a\", r\"$\\\\Upkappa$\"),\n (\"\\u039b\", r\"$\\\\Uplambda$\"),\n (\"\\u039c\", r\"$\\\\Upmu$\"),\n (\"\\u039d\", r\"$\\\\Upnu$\"),\n (\"\\u039e\", r\"$\\\\Upxi$\"),\n (\"\\u039f\", r\"$\\\\Upomekron$\"),\n (\"\\u03a0\", r\"$\\\\Uppi$\"),\n (\"\\u03a1\", r\"$\\\\Uprho$\"),\n (\"\\u03a3\", r\"$\\\\Upsigma$\"), # no \\u0302\n (\"\\u03a4\", r\"$\\\\Uptau$\"),\n (\"\\u03a5\", r\"$\\\\Upupsilon$\"),\n (\"\\u03a6\", r\"$\\\\Upvarphi$\"),\n (\"\\u03a7\", r\"$\\\\Upchi$\"),\n (\"\\u03a8\", r\"$\\\\Uppsi$\"),\n (\"\\u03a9\", r\"$\\\\Upomega$\"),\n # lowercase greek letters\n (\"\\u03b1\", r\"$\\\\upalpha$\"),\n (\"\\u03b2\", r\"$\\\\upbeta$\"),\n (\"\\u03b3\", r\"$\\\\upgamma$\"),\n (\"\\u03b4\", r\"$\\\\updelta$\"),\n (\"\\u03b5\", r\"$\\\\upepsilon$\"),\n (\"\\u03b6\", r\"$\\\\upzeta$\"),\n (\"\\u03b7\", r\"$\\\\upeta$\"),\n (\"\\u03b8\", r\"$\\\\uptheta$\"),\n (\"\\u03b9\", r\"$\\\\upiota$\"),\n (\"\\u03ba\", r\"$\\\\upkappa$\"),\n (\"\\u03bb\", r\"$\\\\uplambda$\"),\n (\"\\u03bc\", r\"$\\\\upmu$\"),\n (\"\\u03bd\", r\"$\\\\upnu$\"),\n (\"\\u03be\", r\"$\\\\upxi$\"),\n (\"\\u03bf\", r\"o\"), # no \\upomicron\n (\"\\u03c0\", r\"$\\\\uppi$\"),\n (\"\\u03c1\", r\"$\\\\uprho$\"),\n (\"\\u03c2\", r\"$\\\\upvarsigma$\"),\n (\"\\u03c3\", r\"$\\\\upsigma$\"),\n (\"\\u03c4\", r\"$\\\\uptau$\"),\n (\"\\u03c5\", r\"$\\\\upupsilon$\"),\n (\"\\u03c6\", r\"$\\\\upvarphi$\"),\n (\"\\u03c7\", r\"$\\\\upchi$\"),\n (\"\\u03c8\", r\"$\\\\uppsi$\"),\n (\"\\u03c9\", r\"$\\\\upomega$\"),\n # acutes, accents, etc...\n (\"\\u03ae\", r\"$\\\\acute{\\\\upeta}$\"),\n 
(\"\\u1e17\", r\"$\\\\acute{\\\\bar{\\\\mathrm{e}}}$\"),\n (\"\\u03ac\", r\"$\\\\acute{\\\\upalpha}$\"),\n (\"\\u00e1\", r\"$\\\\acute{\\\\mathrm{a}}$\"),\n (\"\\u03cc\", r\"$\\\\acute{o}$\"), # no \\upomicron\n (\"\\u014d\", r\"$\\\\bar{\\\\mathrm{o}}$\"),\n (\"\\u1f45\", r\"$\\\\acute{o}$\"), # no \\omicron\n ],\n }\n _html_style = {\n \"sep\": \"<p>\\n\",\n \"figwidth\": 'width=\"{width:.0f}\"',\n \"figure\": '<img src=\"{path}\" alt=\"{caption}\"{figwidth}>',\n \"header\": '<h{level} id=\"{anchor}\">{label}</h{level}>',\n \"link\": '<a href=\"{ref}\">{label}</a>',\n \"point\": \" <li>{point}</li>\\n\",\n \"points\": \" <ul>\\n {points}\\n </ul>\\n\",\n \"annotation\": \" <dd><strong>{key}:</strong>\\n{value} </dd>\\n\",\n \"substitutions\": [\n (r\"&\", r\"‒\"),\n (r\"<p>\", r\"<p>\\n\\n\"),\n (r\"\\u2018([^\\u2019]*)\\u2019\", r\"<q>\\1</q>\"),\n (r\"\\u2019\", r\"'\"),\n (r\"\\u2260\", r\"≠\"),\n (r\"\\u2264\", r\"≤\"),\n (r\"\\u2265\", r\"≥\"),\n (r\"\\u226A\", r\"&x226A;\"),\n (r\"\\u226B\", r\"&x226B;\"),\n (r'\"Y$', r\"\"), # strange noice added by owlready2\n ],\n }\n\n def __init__(self, onto, style=\"markdown\"):\n if isinstance(style, str):\n if style == \"markdown_tex\":\n style = self._markdown_style.copy()\n style.update(self._markdown_tex_extra_style)\n else:\n style = getattr(self, f\"_{style}_style\")\n self.onto = onto\n self.style = style\n self.url_regex = re.compile(r\"https?:\\/\\/[^\\s ]+\")\n\n def get_default_template(self):\n \"\"\"Returns default template.\"\"\"\n title = os.path.splitext(\n os.path.basename(self.onto.base_iri.rstrip(\"/#\"))\n )[0]\n irilink = self.style.get(\"link\", \"{name}\").format(\n iri=self.onto.base_iri,\n name=self.onto.base_iri,\n ref=self.onto.base_iri,\n label=self.onto.base_iri,\n lowerlabel=self.onto.base_iri,\n )\n template = dedent(\n \"\"\"\\\n %HEADER {title}\n Documentation of {irilink}\n\n %HEADER Relations level=2\n %ALL object_properties\n\n %HEADER Classes level=2\n %ALL classes\n\n %HEADER Individuals level=2\n %ALL individuals\n\n %HEADER Appendix level=1\n %HEADER \"Relation taxonomies\" level=2\n %ALLFIG object_properties\n\n %HEADER \"Class taxonomies\" level=2\n %ALLFIG classes\n \"\"\"\n ).format(ontology=self.onto, title=title, irilink=irilink)\n return template\n\n def get_header(self, label, header_level=1, anchor=None):\n \"\"\"Returns `label` formatted as a header of given level.\"\"\"\n header_style = self.style.get(\"header\", \"{label}\\n\")\n return header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor if anchor else label.lower().replace(\" \", \"-\"),\n )\n\n def get_figure(self, path, caption=\"\", width=None):\n \"\"\"Returns a formatted insert-figure-directive.\"\"\"\n figwidth_style = self.style.get(\"figwidth\", \"\")\n figure_style = self.style.get(\"figure\", \"\")\n figwidth = figwidth_style.format(width=width) if width else \"\"\n return figure_style.format(\n path=path, caption=caption, figwidth=figwidth\n )\n\n def itemdoc(\n self, item, header_level=3, show_disjoints=False\n ): # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n \"\"\"Returns documentation of `item`.\n\n Parameters\n ----------\n item : obj | label\n The class, individual or relation to document.\n header_level : int\n Header level. 
Defaults to 3.\n show_disjoints : Bool\n Whether to show `disjoint_with` relations.\n \"\"\"\n onto = self.onto\n if isinstance(item, str):\n item = self.onto.get_by_label(item)\n\n header_style = self.style.get(\"header\", \"{label}\\n\")\n link_style = self.style.get(\"link\", \"{name}\")\n point_style = self.style.get(\"point\", \"{point}\")\n points_style = self.style.get(\"points\", \"{points}\")\n annotation_style = self.style.get(\"annotation\", \"{key}: {value}\\n\")\n substitutions = self.style.get(\"substitutions\", [])\n\n # Logical \"sorting\" of annotations\n order = {\n \"definition\": \"00\",\n \"axiom\": \"01\",\n \"theorem\": \"02\",\n \"elucidation\": \"03\",\n \"domain\": \"04\",\n \"range\": \"05\",\n \"example\": \"06\",\n }\n\n doc = []\n\n # Header\n label = get_label(item)\n iriname = item.iri.partition(\"#\")[2]\n anchor = iriname if iriname else label.lower()\n doc.append(\n header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor,\n )\n )\n\n # Add warning about missing prefLabel\n if not hasattr(item, \"prefLabel\") or not item.prefLabel.first():\n doc.append(\n annotation_style.format(\n key=\"Warning\", value=\"Missing prefLabel\"\n )\n )\n\n # Add iri\n doc.append(\n annotation_style.format(\n key=\"IRI\",\n value=asstring(item.iri, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # Add annotations\n if isinstance(item, owlready2.Thing):\n annotations = item.get_individual_annotations()\n else:\n annotations = item.get_annotations()\n\n for key in sorted(\n annotations.keys(), key=lambda key: order.get(key, key)\n ):\n for value in annotations[key]:\n value = str(value)\n if self.url_regex.match(value):\n doc.append(\n annotation_style.format(\n key=key,\n value=asstring(value, link_style, ontology=onto),\n )\n )\n else:\n for reg, sub in substitutions:\n value = re.sub(reg, sub, value)\n doc.append(annotation_style.format(key=key, value=value))\n\n # ...add relations from is_a\n points = []\n non_prop = (\n owlready2.ThingClass, # owlready2.Restriction,\n owlready2.And,\n owlready2.Or,\n owlready2.Not,\n )\n for prop in item.is_a:\n if isinstance(prop, non_prop) or (\n isinstance(item, owlready2.PropertyClass)\n and isinstance(prop, owlready2.PropertyClass)\n ):\n points.append(\n point_style.format(\n point=\"is_a \"\n + asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n else:\n points.append(\n point_style.format(\n point=asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # ...add equivalent_to relations\n for entity in item.equivalent_to:\n points.append(\n point_style.format(\n point=\"equivalent_to \"\n + asstring(entity, link_style, ontology=onto)\n )\n )\n\n # ...add disjoint_with relations\n if show_disjoints and hasattr(item, \"disjoint_with\"):\n subjects = set(item.disjoint_with(reduce=True))\n points.append(\n point_style.format(\n point=\"disjoint_with \"\n + \", \".join(\n asstring(s, link_style, ontology=onto) for s in subjects\n ),\n ontology=onto,\n )\n )\n\n # ...add disjoint_unions\n if hasattr(item, \"disjoint_unions\"):\n for unions in item.disjoint_unions:\n string = \", \".join(\n asstring(u, link_style, ontology=onto) for u in unions\n )\n points.append(\n point_style.format(\n point=f\"disjoint_union_of {string}\", ontology=onto\n )\n )\n\n # ...add inverse_of relations\n if hasattr(item, \"inverse_property\") and item.inverse_property:\n points.append(\n point_style.format(\n point=\"inverse_of \"\n + asstring(item.inverse_property, link_style, ontology=onto)\n 
)\n )\n\n # ...add domain restrictions\n for domain in getattr(item, \"domain\", ()):\n points.append(\n point_style.format(\n point=\"domain \"\n + asstring(domain, link_style, ontology=onto)\n )\n )\n\n # ...add range restrictions\n for restriction in getattr(item, \"range\", ()):\n points.append(\n point_style.format(\n point=\"range \"\n + asstring(restriction, link_style, ontology=onto)\n )\n )\n\n # Add points (from is_a)\n if points:\n value = points_style.format(points=\"\".join(points), ontology=onto)\n doc.append(\n annotation_style.format(\n key=\"Subclass of\", value=value, ontology=onto\n )\n )\n\n # Instances (individuals)\n if hasattr(item, \"instances\"):\n points = []\n\n for instance in item.instances():\n if isinstance(instance.is_instance_of, property):\n warnings.warn(\n f'Ignoring instance \"{instance}\" which is both and '\n \"indivudual and class. Ontodoc does not support \"\n \"punning at the present moment.\"\n )\n continue\n if item in instance.is_instance_of:\n points.append(\n point_style.format(\n point=asstring(instance, link_style, ontology=onto),\n ontology=onto,\n )\n )\n if points:\n value = points_style.format(\n points=\"\".join(points), ontology=onto\n )\n doc.append(\n annotation_style.format(\n key=\"Individuals\", value=value, ontology=onto\n )\n )\n\n return \"\\n\".join(doc)\n\n def itemsdoc(self, items, header_level=3):\n \"\"\"Returns documentation of `items`.\"\"\"\n sep_style = self.style.get(\"sep\", \"\\n\")\n doc = []\n for item in items:\n doc.append(self.itemdoc(item, header_level))\n doc.append(sep_style.format(ontology=self.onto))\n return \"\\n\".join(doc)\n
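The `itemdoc()`/`itemsdoc()` methods above can also be called directly from Python. The sketch below is illustrative only: the EMMO IRI, the entity labels and the `OntoDoc(onto, style=...)` call are assumptions, not verified against a specific EMMOntoPy release.

```python
from ontopy import get_ontology
from ontopy.ontodoc import OntoDoc

onto = get_ontology("http://emmo.info/emmo").load()   # requires network access
ontodoc = OntoDoc(onto, style="markdown")             # assumed constructor signature

# Markdown documentation for a single labelled entity...
print(ontodoc.itemdoc("Atom", header_level=2))

# ...or for several entities at once (labels are resolved via get_by_label()).
print(ontodoc.itemsdoc(["Atom", "Molecule"], header_level=3))
```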
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.get_default_template","title":"get_default_template(self)
","text":"Returns default template.
Source code in ontopy/ontodoc.py
def get_default_template(self):\n \"\"\"Returns default template.\"\"\"\n title = os.path.splitext(\n os.path.basename(self.onto.base_iri.rstrip(\"/#\"))\n )[0]\n irilink = self.style.get(\"link\", \"{name}\").format(\n iri=self.onto.base_iri,\n name=self.onto.base_iri,\n ref=self.onto.base_iri,\n label=self.onto.base_iri,\n lowerlabel=self.onto.base_iri,\n )\n template = dedent(\n \"\"\"\\\n %HEADER {title}\n Documentation of {irilink}\n\n %HEADER Relations level=2\n %ALL object_properties\n\n %HEADER Classes level=2\n %ALL classes\n\n %HEADER Individuals level=2\n %ALL individuals\n\n %HEADER Appendix level=1\n %HEADER \"Relation taxonomies\" level=2\n %ALLFIG object_properties\n\n %HEADER \"Class taxonomies\" level=2\n %ALLFIG classes\n \"\"\"\n ).format(ontology=self.onto, title=title, irilink=irilink)\n return template\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.get_figure","title":"get_figure(self, path, caption='', width=None)
","text":"Returns a formatted insert-figure-directive.
Source code in ontopy/ontodoc.py
def get_figure(self, path, caption=\"\", width=None):\n \"\"\"Returns a formatted insert-figure-directive.\"\"\"\n figwidth_style = self.style.get(\"figwidth\", \"\")\n figure_style = self.style.get(\"figure\", \"\")\n figwidth = figwidth_style.format(width=width) if width else \"\"\n return figure_style.format(\n path=path, caption=caption, figwidth=figwidth\n )\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.get_header","title":"get_header(self, label, header_level=1, anchor=None)
","text":"Returns label
formatted as a header of given level.
Source code in ontopy/ontodoc.py
def get_header(self, label, header_level=1, anchor=None):\n \"\"\"Returns `label` formatted as a header of given level.\"\"\"\n header_style = self.style.get(\"header\", \"{label}\\n\")\n return header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor if anchor else label.lower().replace(\" \", \"-\"),\n )\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.itemdoc","title":"itemdoc(self, item, header_level=3, show_disjoints=False)
","text":"Returns documentation of item
.
Parameters:
- item (obj | label): The class, individual or relation to document.
- header_level (int): Header level. Defaults to 3.
- show_disjoints (Bool): Whether to show `disjoint_with` relations.
Source code in ontopy/ontodoc.py
def itemdoc(\n self, item, header_level=3, show_disjoints=False\n): # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n \"\"\"Returns documentation of `item`.\n\n Parameters\n ----------\n item : obj | label\n The class, individual or relation to document.\n header_level : int\n Header level. Defaults to 3.\n show_disjoints : Bool\n Whether to show `disjoint_with` relations.\n \"\"\"\n onto = self.onto\n if isinstance(item, str):\n item = self.onto.get_by_label(item)\n\n header_style = self.style.get(\"header\", \"{label}\\n\")\n link_style = self.style.get(\"link\", \"{name}\")\n point_style = self.style.get(\"point\", \"{point}\")\n points_style = self.style.get(\"points\", \"{points}\")\n annotation_style = self.style.get(\"annotation\", \"{key}: {value}\\n\")\n substitutions = self.style.get(\"substitutions\", [])\n\n # Logical \"sorting\" of annotations\n order = {\n \"definition\": \"00\",\n \"axiom\": \"01\",\n \"theorem\": \"02\",\n \"elucidation\": \"03\",\n \"domain\": \"04\",\n \"range\": \"05\",\n \"example\": \"06\",\n }\n\n doc = []\n\n # Header\n label = get_label(item)\n iriname = item.iri.partition(\"#\")[2]\n anchor = iriname if iriname else label.lower()\n doc.append(\n header_style.format(\n \"\",\n level=header_level,\n label=label,\n anchor=anchor,\n )\n )\n\n # Add warning about missing prefLabel\n if not hasattr(item, \"prefLabel\") or not item.prefLabel.first():\n doc.append(\n annotation_style.format(\n key=\"Warning\", value=\"Missing prefLabel\"\n )\n )\n\n # Add iri\n doc.append(\n annotation_style.format(\n key=\"IRI\",\n value=asstring(item.iri, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # Add annotations\n if isinstance(item, owlready2.Thing):\n annotations = item.get_individual_annotations()\n else:\n annotations = item.get_annotations()\n\n for key in sorted(\n annotations.keys(), key=lambda key: order.get(key, key)\n ):\n for value in annotations[key]:\n value = str(value)\n if self.url_regex.match(value):\n doc.append(\n annotation_style.format(\n key=key,\n value=asstring(value, link_style, ontology=onto),\n )\n )\n else:\n for reg, sub in substitutions:\n value = re.sub(reg, sub, value)\n doc.append(annotation_style.format(key=key, value=value))\n\n # ...add relations from is_a\n points = []\n non_prop = (\n owlready2.ThingClass, # owlready2.Restriction,\n owlready2.And,\n owlready2.Or,\n owlready2.Not,\n )\n for prop in item.is_a:\n if isinstance(prop, non_prop) or (\n isinstance(item, owlready2.PropertyClass)\n and isinstance(prop, owlready2.PropertyClass)\n ):\n points.append(\n point_style.format(\n point=\"is_a \"\n + asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n else:\n points.append(\n point_style.format(\n point=asstring(prop, link_style, ontology=onto),\n ontology=onto,\n )\n )\n\n # ...add equivalent_to relations\n for entity in item.equivalent_to:\n points.append(\n point_style.format(\n point=\"equivalent_to \"\n + asstring(entity, link_style, ontology=onto)\n )\n )\n\n # ...add disjoint_with relations\n if show_disjoints and hasattr(item, \"disjoint_with\"):\n subjects = set(item.disjoint_with(reduce=True))\n points.append(\n point_style.format(\n point=\"disjoint_with \"\n + \", \".join(\n asstring(s, link_style, ontology=onto) for s in subjects\n ),\n ontology=onto,\n )\n )\n\n # ...add disjoint_unions\n if hasattr(item, \"disjoint_unions\"):\n for unions in item.disjoint_unions:\n string = \", \".join(\n asstring(u, link_style, ontology=onto) for u in unions\n )\n 
points.append(\n point_style.format(\n point=f\"disjoint_union_of {string}\", ontology=onto\n )\n )\n\n # ...add inverse_of relations\n if hasattr(item, \"inverse_property\") and item.inverse_property:\n points.append(\n point_style.format(\n point=\"inverse_of \"\n + asstring(item.inverse_property, link_style, ontology=onto)\n )\n )\n\n # ...add domain restrictions\n for domain in getattr(item, \"domain\", ()):\n points.append(\n point_style.format(\n point=\"domain \"\n + asstring(domain, link_style, ontology=onto)\n )\n )\n\n # ...add range restrictions\n for restriction in getattr(item, \"range\", ()):\n points.append(\n point_style.format(\n point=\"range \"\n + asstring(restriction, link_style, ontology=onto)\n )\n )\n\n # Add points (from is_a)\n if points:\n value = points_style.format(points=\"\".join(points), ontology=onto)\n doc.append(\n annotation_style.format(\n key=\"Subclass of\", value=value, ontology=onto\n )\n )\n\n # Instances (individuals)\n if hasattr(item, \"instances\"):\n points = []\n\n for instance in item.instances():\n if isinstance(instance.is_instance_of, property):\n warnings.warn(\n f'Ignoring instance \"{instance}\" which is both and '\n \"indivudual and class. Ontodoc does not support \"\n \"punning at the present moment.\"\n )\n continue\n if item in instance.is_instance_of:\n points.append(\n point_style.format(\n point=asstring(instance, link_style, ontology=onto),\n ontology=onto,\n )\n )\n if points:\n value = points_style.format(\n points=\"\".join(points), ontology=onto\n )\n doc.append(\n annotation_style.format(\n key=\"Individuals\", value=value, ontology=onto\n )\n )\n\n return \"\\n\".join(doc)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.OntoDoc.itemsdoc","title":"itemsdoc(self, items, header_level=3)
","text":"Returns documentation of items
.
Source code in ontopy/ontodoc.py
def itemsdoc(self, items, header_level=3):\n \"\"\"Returns documentation of `items`.\"\"\"\n sep_style = self.style.get(\"sep\", \"\\n\")\n doc = []\n for item in items:\n doc.append(self.itemdoc(item, header_level))\n doc.append(sep_style.format(ontology=self.onto))\n return \"\\n\".join(doc)\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.append_pandoc_options","title":"append_pandoc_options(options, updates)
","text":"Append updates
to pandoc options options
.
Parameters:
- options (sequence): Sequence with initial Pandoc options.
- updates (sequence of str): Sequence of strings of the form \"--longoption=value\", where `longoption` is a valid pandoc long option and `value` is the new value. The \"=value\" part is optional.
Strings of the form \"no-longoption\" will filter out \"--longoption\" from `options`.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.append_pandoc_options--returns","title":"Returns","text":"new_options : list Updated pandoc options.
Source code in ontopy/ontodoc.py
def append_pandoc_options(options, updates):\n \"\"\"Append `updates` to pandoc options `options`.\n\n Parameters\n ----------\n options : sequence\n Sequence with initial Pandoc options.\n updates : sequence of str\n Sequence of strings of the form \"--longoption=value\", where\n ``longoption`` is a valid pandoc long option and ``value`` is the\n new value. The \"=value\" part is optional.\n\n Strings of the form \"no-longoption\" will filter out \"--longoption\"\n from `options`.\n\n Returns\n -------\n new_options : list\n Updated pandoc options.\n \"\"\"\n # Valid pandoc options starting with \"--no-XXX\"\n no_options = set(\"no-highlight\")\n\n if not updates:\n return list(options)\n\n curated_updates = {}\n for update in updates:\n key, sep, value = update.partition(\"=\")\n curated_updates[key.lstrip(\"-\")] = value if sep else None\n filter_out = set(\n _\n for _ in curated_updates\n if _.startswith(\"no-\") and _ not in no_options\n )\n _filter_out = set(f\"--{_[3:]}\" for _ in filter_out)\n new_options = [\n opt for opt in options if opt.partition(\"=\")[0] not in _filter_out\n ]\n new_options.extend(\n [\n f\"--{key}\" if value is None else f\"--{key}={value}\"\n for key, value in curated_updates.items()\n if key not in filter_out\n ]\n )\n return new_options\n
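To make the filtering behaviour concrete, here is a small illustration (the option names are arbitrary examples, not taken from the EMMOntoPy test suite) of how an update list both adds new options and strips existing ones:

```python
from ontopy.ontodoc import append_pandoc_options

options = ["--standalone", "--toc", "--number-sections"]
updates = ["--toc-depth=2", "no-number-sections"]

# "no-number-sections" filters out "--number-sections", while
# "--toc-depth=2" is appended as a new option.
print(append_pandoc_options(options, updates))
# -> ['--standalone', '--toc', '--toc-depth=2']
```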
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_docpp","title":"get_docpp(ontodoc, infile, figdir='genfigs', figformat='png', maxwidth=None, imported=False)
","text":"Read infile
and return a new docpp instance.
Source code in ontopy/ontodoc.py
def get_docpp( # pylint: disable=too-many-arguments\n ontodoc,\n infile,\n figdir=\"genfigs\",\n figformat=\"png\",\n maxwidth=None,\n imported=False,\n):\n \"\"\"Read `infile` and return a new docpp instance.\"\"\"\n if infile:\n with open(infile, \"rt\") as handle:\n template = handle.read()\n basedir = os.path.dirname(infile)\n else:\n template = ontodoc.get_default_template()\n basedir = \".\"\n\n docpp = DocPP(\n template,\n ontodoc,\n basedir=basedir,\n figdir=figdir,\n figformat=figformat,\n maxwidth=maxwidth,\n imported=imported,\n )\n\n return docpp\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_figformat","title":"get_figformat(fmt)
","text":"Infer preferred figure format from output format.
Source code in ontopy/ontodoc.py
def get_figformat(fmt):\n \"\"\"Infer preferred figure format from output format.\"\"\"\n if fmt == \"pdf\":\n figformat = \"pdf\" # XXX\n elif \"html\" in fmt:\n figformat = \"svg\"\n else:\n figformat = \"png\"\n return figformat\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_maxwidth","title":"get_maxwidth(fmt)
","text":"Infer preferred max figure width from output format.
Source code in ontopy/ontodoc.py
def get_maxwidth(fmt):\n \"\"\"Infer preferred max figure width from output format.\"\"\"\n if fmt == \"pdf\":\n maxwidth = 668\n else:\n maxwidth = 1024\n return maxwidth\n
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_options","title":"get_options(opts, **kwargs)
","text":"Returns a dict with options from the sequence opts
with \"name=value\" pairs. Valid option names and default values are provided with the keyword arguments.
Source code in ontopy/ontodoc.py
def get_options(opts, **kwargs):\n \"\"\"Returns a dict with options from the sequence `opts` with\n \"name=value\" pairs. Valid option names and default values are\n provided with the keyword arguments.\"\"\"\n res = AttributeDict(kwargs)\n for opt in opts:\n if \"=\" not in opt:\n raise InvalidTemplateError(\n f'Missing \"=\" in template option: {opt!r}'\n )\n name, value = opt.split(\"=\", 1)\n if name not in res:\n raise InvalidTemplateError(f\"Invalid template option: {name!r}\")\n res_type = type(res[name])\n res[name] = res_type(value)\n return res\n
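A quick sketch of how the \"name=value\" pairs override the defaults given as keyword arguments; the option names below are made up for illustration:

```python
from ontopy.ontodoc import get_options

opts = get_options(["header_level=2", "show_disjoints=1"],
                   header_level=3, show_disjoints=0)

# Values are cast to the type of the corresponding default (int here).
print(opts["header_level"], opts["show_disjoints"])
# -> 2 1
```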
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.get_style","title":"get_style(fmt)
","text":"Infer style from output format.
Source code in ontopy/ontodoc.py
def get_style(fmt):\n \"\"\"Infer style from output format.\"\"\"\n if fmt == \"simple-html\":\n style = \"html\"\n elif fmt in (\"tex\", \"latex\", \"pdf\"):\n style = \"markdown_tex\"\n else:\n style = \"markdown\"\n return style\n
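The three small helpers documented above, get_style(), get_figformat() and get_maxwidth(), are typically used together to configure output for a given target format, for example:

```python
from ontopy.ontodoc import get_style, get_figformat, get_maxwidth

fmt = "pdf"
print(get_style(fmt), get_figformat(fmt), get_maxwidth(fmt))
# -> markdown_tex pdf 668
```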
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.load_pandoc_option_file","title":"load_pandoc_option_file(yamlfile)
","text":"Loads pandoc options from yamlfile
and return a list with corresponding pandoc command line arguments.
Source code in ontopy/ontodoc.py
def load_pandoc_option_file(yamlfile):\n \"\"\"Loads pandoc options from `yamlfile` and return a list with\n corresponding pandoc command line arguments.\"\"\"\n with open(yamlfile) as handle:\n pandoc_options = yaml.safe_load(handle)\n options = pandoc_options.pop(\"input-files\", [])\n variables = pandoc_options.pop(\"variables\", {})\n\n for key, value in pandoc_options.items():\n if isinstance(value, bool):\n if value:\n options.append(f\"--{key}\")\n else:\n options.append(f\"--{key}={value}\")\n\n for key, value in variables.items():\n if key == \"date\" and value == \"now\":\n value = time.strftime(\"%B %d, %Y\")\n options.append(f\"--variable={key}:{value}\")\n\n return options\n
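As an illustration (the file content below is an assumption, not an official example), a small YAML option file is converted into pandoc command-line arguments like this:

```python
from pathlib import Path
from ontopy.ontodoc import load_pandoc_option_file

Path("pandoc-options.yaml").write_text(
    "standalone: true\n"
    "toc-depth: 2\n"
    "variables:\n"
    "  date: now\n",
    encoding="utf8",
)
print(load_pandoc_option_file("pandoc-options.yaml"))
# e.g. ['--standalone', '--toc-depth=2', '--variable=date:May 01, 2024']
```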
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.run_pandoc","title":"run_pandoc(genfile, outfile, fmt, pandoc_option_files=(), pandoc_options=(), verbose=True)
","text":"Runs pandoc.
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.run_pandoc--parameters","title":"Parameters","text":"genfile : str Name of markdown input file. outfile : str Output file name. fmt : str Output format. pandoc_option_files : sequence List of files with additional pandoc options. Default is to read \"pandoc-options.yaml\" and \"pandoc-FORMAT-options.yml\", where FORMAT
is the output format. pandoc_options : sequence Additional pandoc options overriding options read from pandoc_option_files
. verbose : bool Whether to print the pandoc command before execution.
Raises: subprocess.CalledProcessError if the pandoc process returns with non-zero status; the `returncode` attribute will hold the exit code.
Source code in ontopy/ontodoc.py
def run_pandoc( # pylint: disable=too-many-arguments\n genfile,\n outfile,\n fmt,\n pandoc_option_files=(),\n pandoc_options=(),\n verbose=True,\n):\n \"\"\"Runs pandoc.\n\n Parameters\n ----------\n genfile : str\n Name of markdown input file.\n outfile : str\n Output file name.\n fmt : str\n Output format.\n pandoc_option_files : sequence\n List of files with additional pandoc options. Default is to read\n \"pandoc-options.yaml\" and \"pandoc-FORMAT-options.yml\", where\n `FORMAT` is the output format.\n pandoc_options : sequence\n Additional pandoc options overriding options read from\n `pandoc_option_files`.\n verbose : bool\n Whether to print the pandoc command before execution.\n\n Raises\n ------\n subprocess.CalledProcessError\n If the pandoc process returns with non-zero status. The `returncode`\n attribute will hold the exit code.\n \"\"\"\n # Create pandoc argument list\n args = [genfile]\n files = [\"pandoc-options.yaml\", f\"pandoc-{fmt}-options.yaml\"]\n if pandoc_option_files:\n files = pandoc_option_files\n for fname in files:\n if os.path.exists(fname):\n args.extend(load_pandoc_option_file(fname))\n else:\n warnings.warn(f\"missing pandoc option file: {fname}\")\n\n # Update pandoc argument list\n args = append_pandoc_options(args, pandoc_options)\n\n # pdf output requires a special attention...\n if fmt == \"pdf\":\n pdf_engine = \"pdflatex\"\n for arg in args:\n if arg.startswith(\"--pdf-engine\"):\n pdf_engine = arg.split(\"=\", 1)[1]\n break\n with TemporaryDirectory() as tmpdir:\n run_pandoc_pdf(tmpdir, pdf_engine, outfile, args, verbose=verbose)\n else:\n args.append(f\"--output={outfile}\")\n cmd = [\"pandoc\"] + args\n if verbose:\n print()\n print(\"* Executing command:\")\n print(\" \".join(shlex.quote(_) for _ in cmd))\n subprocess.check_call(cmd) # nosec\n
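A hedged example of a typical call; the file names are placeholders and pandoc itself must be installed and on PATH:

```python
from ontopy.ontodoc import run_pandoc

# Convert a generated markdown file to standalone HTML. Missing option
# files only trigger a warning, so this works without any YAML files present.
run_pandoc(
    "ontology.md",
    "ontology.html",
    "html5",
    pandoc_options=["--standalone"],
    verbose=True,
)
```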
"},{"location":"api_reference/ontopy/ontodoc/#ontopy.ontodoc.run_pandoc_pdf","title":"run_pandoc_pdf(latex_dir, pdf_engine, outfile, args, verbose=True)
","text":"Run pandoc for pdf generation.
Source code in ontopy/ontodoc.py
def run_pandoc_pdf(latex_dir, pdf_engine, outfile, args, verbose=True):\n \"\"\"Run pandoc for pdf generation.\"\"\"\n basename = os.path.join(\n latex_dir, os.path.splitext(os.path.basename(outfile))[0]\n )\n\n # Run pandoc\n texfile = basename + \".tex\"\n args.append(f\"--output={texfile}\")\n cmd = [\"pandoc\"] + args\n if verbose:\n print()\n print(\"* Executing commands:\")\n print(\" \".join(shlex.quote(s) for s in cmd))\n subprocess.check_call(cmd) # nosec\n\n # Fixing tex output\n texfile2 = basename + \"2.tex\"\n with open(texfile, \"rt\") as handle:\n content = handle.read().replace(r\"\\$\\Uptheta\\$\", r\"$\\Uptheta$\")\n with open(texfile2, \"wt\") as handle:\n handle.write(content)\n\n # Run latex\n pdffile = basename + \"2.pdf\"\n cmd = [\n pdf_engine,\n texfile2,\n \"-halt-on-error\",\n f\"-output-directory={latex_dir}\",\n ]\n if verbose:\n print()\n print(\" \".join(shlex.quote(s) for s in cmd))\n output = subprocess.check_output(cmd, timeout=60) # nosec\n output = subprocess.check_output(cmd, timeout=60) # nosec\n\n # Workaround for non-working \"-output-directory\" latex option\n if not os.path.exists(pdffile):\n if os.path.exists(os.path.basename(pdffile)):\n pdffile = os.path.basename(pdffile)\n for ext in \"aux\", \"out\", \"toc\", \"log\":\n filename = os.path.splitext(pdffile)[0] + \".\" + ext\n if os.path.exists(filename):\n os.remove(filename)\n else:\n print()\n print(output)\n print()\n raise RuntimeError(\"latex did not produce pdf file: \" + pdffile)\n\n # Copy pdffile\n if not os.path.exists(outfile) or not os.path.samefile(pdffile, outfile):\n if verbose:\n print()\n print(f\"move {pdffile} to {outfile}\")\n shutil.move(pdffile, outfile)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/","title":"ontodoc_rst","text":"A module for documenting ontologies.
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation","title":" ModuleDocumentation
","text":"Class for documentating a module in an ontology.
Parameters:
- ontology (Optional[Ontology], default None): Ontology to include in the generated documentation. All entities in this ontology will be included.
- entities (Optional[Iterable[Entity]], default None): Explicit listing of entities (classes, properties, individuals, datatypes) to document. Normally not needed.
- title (Optional[str], default None): Header title. By default it is inferred from title of
- iri_regex (Optional[str], default None): A regular expression that the IRI of documented entities should match.
Source code in ontopy/ontodoc_rst.py
class ModuleDocumentation:\n \"\"\"Class for documentating a module in an ontology.\n\n Arguments:\n ontology: Ontology to include in the generated documentation.\n All entities in this ontology will be included.\n entities: Explicit listing of entities (classes, properties,\n individuals, datatypes) to document. Normally not needed.\n title: Header title. Be default it is inferred from title of\n iri_regex: A regular expression that the IRI of documented entities\n should match.\n \"\"\"\n\n def __init__(\n self,\n ontology: \"Optional[Ontology]\" = None,\n entities: \"Optional[Iterable[Entity]]\" = None,\n title: \"Optional[str]\" = None,\n iri_regex: \"Optional[str]\" = None,\n ) -> None:\n self.ontology = ontology\n self.title = title\n self.iri_regex = iri_regex\n self.graph = (\n ontology.world.as_rdflib_graph() if ontology else rdflib.Graph()\n )\n self.classes = set()\n self.object_properties = set()\n self.data_properties = set()\n self.annotation_properties = set()\n self.individuals = set()\n self.datatypes = set()\n\n if ontology:\n self.add_ontology(ontology)\n\n if entities:\n for entity in entities:\n self.add_entity(entity)\n\n def nonempty(self) -> bool:\n \"\"\"Returns whether the module has any classes, properties, individuals\n or datatypes.\"\"\"\n return (\n self.classes\n or self.object_properties\n or self.data_properties\n or self.annotation_properties\n or self.individuals\n or self.datatypes\n )\n\n def add_entity(self, entity: \"Entity\") -> None:\n \"\"\"Add `entity` (class, property, individual, datatype) to list of\n entities to document.\n \"\"\"\n if self.iri_regex and not re.match(self.iri_regex, entity.iri):\n return\n\n if isinstance(entity, owlready2.ThingClass):\n self.classes.add(entity)\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.DataPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.Thing):\n if (\n hasattr(entity.__class__, \"iri\")\n and entity.__class__.iri\n == \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n ):\n self.datatypes.add(entity)\n else:\n self.individuals.add(entity)\n\n def add_ontology(\n self, ontology: \"Ontology\", imported: bool = False\n ) -> None:\n \"\"\"Add ontology to documentation.\"\"\"\n for entity in ontology.get_entities(imported=imported):\n self.add_entity(entity)\n\n def get_title(self) -> str:\n \"\"\"Return a module title.\"\"\"\n iri = self.ontology.base_iri.rstrip(\"#/\")\n if self.title:\n title = self.title\n elif self.ontology:\n title = self.graph.value(URIRef(iri), DCTERMS.title)\n if not title:\n title = iri.rsplit(\"/\", 1)[-1]\n return title\n\n def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n heading = f\"Module: {self.get_title()}\"\n return f\"\"\"\n\n{heading.title()}\n{'='*len(heading)}\n\n\"\"\"\n\n def get_refdoc(\n self,\n subsections: str = \"all\",\n header: bool = True,\n ) -> str:\n # pylint: disable=too-many-branches,too-many-locals\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n subsections: Comma-separated list of subsections to include in\n the returned documentation. 
Valid subsection names are:\n - classes\n - object_properties\n - data_properties\n - annotation_properties\n - individuals\n - datatypes\n If \"all\", all subsections will be documented.\n header: Whether to also include the header in the returned\n documentation.\n\n Returns:\n String with reference documentation.\n \"\"\"\n # pylint: disable=too-many-nested-blocks\n if subsections == \"all\":\n subsections = (\n \"classes,object_properties,data_properties,\"\n \"annotation_properties,individuals,datatypes\"\n )\n\n maps = {\n \"classes\": self.classes,\n \"object_properties\": self.object_properties,\n \"data_properties\": self.data_properties,\n \"annotation_properties\": self.annotation_properties,\n \"individuals\": self.individuals,\n \"datatypes\": self.datatypes,\n }\n lines = []\n\n if header:\n lines.append(self.get_header())\n\n def add_header(name):\n clsname = f\"element-table-{name.lower().replace(' ', '-')}\"\n lines.extend(\n [\n \" <tr>\",\n f' <th class=\"{clsname}\" colspan=\"2\">{name}</th>',\n \" </tr>\",\n ]\n )\n\n def add_keyvalue(key, value, escape=True, htmllink=True):\n \"\"\"Help function for adding a key-value row to table.\"\"\"\n if escape:\n value = html.escape(str(value))\n if htmllink:\n value = re.sub(\n r\"(https?://[^\\s]+)\", r'<a href=\"\\1\">\\1</a>', value\n )\n value = value.replace(\"\\n\", \"<br>\")\n lines.extend(\n [\n \" <tr>\",\n ' <td class=\"element-table-key\">'\n f'<span class=\"element-table-key\">'\n f\"{key.title()}</span></td>\",\n f' <td class=\"element-table-value\">{value}</td>',\n \" </tr>\",\n ]\n )\n\n for subsection in subsections.split(\",\"):\n if maps[subsection]:\n moduletitle = self.get_title().lower().replace(\" \", \"-\")\n anchor = f\"{moduletitle}-{subsection.replace('_', '-')}\"\n lines.extend(\n [\n \"\",\n f\".. _{anchor}:\",\n \"\",\n subsection.replace(\"_\", \" \").title(),\n \"-\" * len(subsection),\n \"\",\n ]\n )\n for entity in sorted(maps[subsection], key=get_label):\n label = get_label(entity)\n lines.extend(\n [\n \".. raw:: html\",\n \"\",\n f' <div id=\"{entity.name}\"></div>',\n \"\",\n f\"{label}\",\n \"^\" * len(label),\n \"\",\n \".. raw:: html\",\n \"\",\n ' <table class=\"element-table\">',\n ]\n )\n add_keyvalue(\"IRI\", entity.iri)\n if hasattr(entity, \"get_annotations\"):\n add_header(\"Annotations\")\n for key, value in entity.get_annotations().items():\n if isinstance(value, list):\n for val in value:\n add_keyvalue(key, val)\n else:\n add_keyvalue(key, value)\n if entity.is_a or entity.equivalent_to:\n add_header(\"Formal description\")\n for r in entity.equivalent_to:\n\n # FIXME: Skip restrictions with value None to work\n # around bug in Owlready2 that doesn't handle custom\n # datatypes in restrictions correctly...\n if hasattr(r, \"value\") and r.value is None:\n continue\n\n add_keyvalue(\n \"Equivalent To\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n for r in entity.is_a:\n add_keyvalue(\n \"Subclass Of\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n\n lines.extend([\" </table>\", \"\"])\n\n return \"\\n\".join(lines)\n
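A minimal sketch of using the class directly; the EMMO IRI is an assumption and any loadable ontology would do:

```python
from ontopy import get_ontology
from ontopy.ontodoc_rst import ModuleDocumentation

onto = get_ontology("http://emmo.info/emmo").load()   # requires network access
md = ModuleDocumentation(onto, title="EMMO")

if md.nonempty():
    # Restrict the generated reStructuredText to classes and object properties.
    rst = md.get_refdoc(subsections="classes,object_properties")
    print(rst[:400])
```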
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.add_entity","title":"add_entity(self, entity)
","text":"Add entity
(class, property, individual, datatype) to list of entities to document.
Source code in ontopy/ontodoc_rst.py
def add_entity(self, entity: \"Entity\") -> None:\n \"\"\"Add `entity` (class, property, individual, datatype) to list of\n entities to document.\n \"\"\"\n if self.iri_regex and not re.match(self.iri_regex, entity.iri):\n return\n\n if isinstance(entity, owlready2.ThingClass):\n self.classes.add(entity)\n elif isinstance(entity, owlready2.ObjectPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.DataPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.AnnotationPropertyClass):\n self.object_properties.add(entity)\n elif isinstance(entity, owlready2.Thing):\n if (\n hasattr(entity.__class__, \"iri\")\n and entity.__class__.iri\n == \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n ):\n self.datatypes.add(entity)\n else:\n self.individuals.add(entity)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.add_ontology","title":"add_ontology(self, ontology, imported=False)
","text":"Add ontology to documentation.
Source code in ontopy/ontodoc_rst.py
def add_ontology(\n self, ontology: \"Ontology\", imported: bool = False\n) -> None:\n \"\"\"Add ontology to documentation.\"\"\"\n for entity in ontology.get_entities(imported=imported):\n self.add_entity(entity)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.get_header","title":"get_header(self)
","text":"Return a the reStructuredText header as a string.
Source code in ontopy/ontodoc_rst.py
def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n heading = f\"Module: {self.get_title()}\"\n return f\"\"\"\n\n{heading.title()}\n{'='*len(heading)}\n\n\"\"\"\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.get_refdoc","title":"get_refdoc(self, subsections='all', header=True)
","text":"Return reference documentation of all module entities.
Parameters:
- subsections (str, default 'all'): Comma-separated list of subsections to include in the returned documentation. Valid subsection names are: classes, object_properties, data_properties, annotation_properties, individuals and datatypes. If \"all\", all subsections will be documented.
- header (bool, default True): Whether to also include the header in the returned documentation.
Returns:
str: String with reference documentation.
Source code in ontopy/ontodoc_rst.py
def get_refdoc(\n self,\n subsections: str = \"all\",\n header: bool = True,\n) -> str:\n # pylint: disable=too-many-branches,too-many-locals\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n subsections: Comma-separated list of subsections to include in\n the returned documentation. Valid subsection names are:\n - classes\n - object_properties\n - data_properties\n - annotation_properties\n - individuals\n - datatypes\n If \"all\", all subsections will be documented.\n header: Whether to also include the header in the returned\n documentation.\n\n Returns:\n String with reference documentation.\n \"\"\"\n # pylint: disable=too-many-nested-blocks\n if subsections == \"all\":\n subsections = (\n \"classes,object_properties,data_properties,\"\n \"annotation_properties,individuals,datatypes\"\n )\n\n maps = {\n \"classes\": self.classes,\n \"object_properties\": self.object_properties,\n \"data_properties\": self.data_properties,\n \"annotation_properties\": self.annotation_properties,\n \"individuals\": self.individuals,\n \"datatypes\": self.datatypes,\n }\n lines = []\n\n if header:\n lines.append(self.get_header())\n\n def add_header(name):\n clsname = f\"element-table-{name.lower().replace(' ', '-')}\"\n lines.extend(\n [\n \" <tr>\",\n f' <th class=\"{clsname}\" colspan=\"2\">{name}</th>',\n \" </tr>\",\n ]\n )\n\n def add_keyvalue(key, value, escape=True, htmllink=True):\n \"\"\"Help function for adding a key-value row to table.\"\"\"\n if escape:\n value = html.escape(str(value))\n if htmllink:\n value = re.sub(\n r\"(https?://[^\\s]+)\", r'<a href=\"\\1\">\\1</a>', value\n )\n value = value.replace(\"\\n\", \"<br>\")\n lines.extend(\n [\n \" <tr>\",\n ' <td class=\"element-table-key\">'\n f'<span class=\"element-table-key\">'\n f\"{key.title()}</span></td>\",\n f' <td class=\"element-table-value\">{value}</td>',\n \" </tr>\",\n ]\n )\n\n for subsection in subsections.split(\",\"):\n if maps[subsection]:\n moduletitle = self.get_title().lower().replace(\" \", \"-\")\n anchor = f\"{moduletitle}-{subsection.replace('_', '-')}\"\n lines.extend(\n [\n \"\",\n f\".. _{anchor}:\",\n \"\",\n subsection.replace(\"_\", \" \").title(),\n \"-\" * len(subsection),\n \"\",\n ]\n )\n for entity in sorted(maps[subsection], key=get_label):\n label = get_label(entity)\n lines.extend(\n [\n \".. raw:: html\",\n \"\",\n f' <div id=\"{entity.name}\"></div>',\n \"\",\n f\"{label}\",\n \"^\" * len(label),\n \"\",\n \".. 
raw:: html\",\n \"\",\n ' <table class=\"element-table\">',\n ]\n )\n add_keyvalue(\"IRI\", entity.iri)\n if hasattr(entity, \"get_annotations\"):\n add_header(\"Annotations\")\n for key, value in entity.get_annotations().items():\n if isinstance(value, list):\n for val in value:\n add_keyvalue(key, val)\n else:\n add_keyvalue(key, value)\n if entity.is_a or entity.equivalent_to:\n add_header(\"Formal description\")\n for r in entity.equivalent_to:\n\n # FIXME: Skip restrictions with value None to work\n # around bug in Owlready2 that doesn't handle custom\n # datatypes in restrictions correctly...\n if hasattr(r, \"value\") and r.value is None:\n continue\n\n add_keyvalue(\n \"Equivalent To\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n for r in entity.is_a:\n add_keyvalue(\n \"Subclass Of\",\n asstring(\n r,\n link='<a href=\"{iri}\">{label}</a>',\n ontology=self.ontology,\n ),\n escape=False,\n htmllink=False,\n )\n\n lines.extend([\" </table>\", \"\"])\n\n return \"\\n\".join(lines)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.get_title","title":"get_title(self)
","text":"Return a module title.
Source code in ontopy/ontodoc_rst.py
def get_title(self) -> str:\n \"\"\"Return a module title.\"\"\"\n iri = self.ontology.base_iri.rstrip(\"#/\")\n if self.title:\n title = self.title\n elif self.ontology:\n title = self.graph.value(URIRef(iri), DCTERMS.title)\n if not title:\n title = iri.rsplit(\"/\", 1)[-1]\n return title\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.ModuleDocumentation.nonempty","title":"nonempty(self)
","text":"Returns whether the module has any classes, properties, individuals or datatypes.
Source code in ontopy/ontodoc_rst.py
def nonempty(self) -> bool:\n \"\"\"Returns whether the module has any classes, properties, individuals\n or datatypes.\"\"\"\n return (\n self.classes\n or self.object_properties\n or self.data_properties\n or self.annotation_properties\n or self.individuals\n or self.datatypes\n )\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation","title":" OntologyDocumentation
","text":"Documentation for an ontology with a common namespace.
Parameters:
- ontologies (Iterable[Ontology], required): Ontologies to include in the generated documentation. All entities in these ontologies will be included.
- imported (bool, default True): Whether to include imported ontologies.
- recursive (bool, default False): Whether to recursively import all imported ontologies. Implies imported=True.
- iri_regex (Optional[str], default None): A regular expression that the IRI of documented entities should match.
Source code in ontopy/ontodoc_rst.py
class OntologyDocumentation:\n \"\"\"Documentation for an ontology with a common namespace.\n\n Arguments:\n ontologies: Ontologies to include in the generated documentation.\n All entities in these ontologies will be included.\n imported: Whether to include imported ontologies.\n recursive: Whether to recursively import all imported ontologies.\n Implies `recursive=True`.\n iri_regex: A regular expression that the IRI of documented entities\n should match.\n \"\"\"\n\n def __init__(\n self,\n ontologies: \"Iterable[Ontology]\",\n imported: bool = True,\n recursive: bool = False,\n iri_regex: \"Optional[str]\" = None,\n ) -> None:\n if isinstance(ontologies, (Ontology, str, Path)):\n ontologies = [ontologies]\n\n if recursive:\n imported = True\n\n self.iri_regex = iri_regex\n self.module_documentations = []\n\n # Explicitly included ontologies\n included_ontologies = {}\n for onto in ontologies:\n if isinstance(onto, (str, Path)):\n onto = get_ontology(onto).load()\n elif not isinstance(onto, Ontology):\n raise TypeError(\n \"expected ontology as an IRI, Path or Ontology object, \"\n f\"got: {onto}\"\n )\n if onto.base_iri not in included_ontologies:\n included_ontologies[onto.base_iri] = onto\n\n # Indirectly included ontologies (imported)\n if imported:\n for onto in list(included_ontologies.values()):\n for o in onto.get_imported_ontologies(recursive=recursive):\n if o.base_iri not in included_ontologies:\n included_ontologies[o.base_iri] = o\n\n # Module documentations\n for onto in included_ontologies.values():\n self.module_documentations.append(\n ModuleDocumentation(onto, iri_regex=iri_regex)\n )\n\n def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n return \"\"\"\n==========\nReferences\n==========\n\"\"\"\n\n def get_refdoc(self, header: bool = True, subsections: str = \"all\") -> str:\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n header: Whether to also include the header in the returned\n documentation.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n\n Returns:\n String with reference documentation.\n \"\"\"\n moduledocs = []\n if header:\n moduledocs.append(self.get_header())\n moduledocs.extend(\n md.get_refdoc(subsections=subsections)\n for md in self.module_documentations\n if md.nonempty()\n )\n return \"\\n\".join(moduledocs)\n\n def top_ontology(self) -> Ontology:\n \"\"\"Return the top-level ontology.\"\"\"\n return self.module_documentations[0].ontology\n\n def write_refdoc(self, docfile=None, subsections=\"all\"):\n \"\"\"Write reference documentation to disk.\n\n Arguments:\n docfile: Name of file to write to. Defaults to the name of\n the top ontology with extension `.rst`.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n \"\"\"\n if not docfile:\n docfile = self.top_ontology().name + \".rst\"\n Path(docfile).write_text(\n self.get_refdoc(subsections=subsections), encoding=\"utf8\"\n )\n\n def write_index_template(\n self, indexfile=\"index.rst\", docfile=None, overwrite=False\n ):\n \"\"\"Write a basic template index.rst file to disk.\n\n Arguments:\n indexfile: Name of index file to write.\n docfile: Name of generated documentation file. 
If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n docname = Path(docfile).stem if docfile else self.top_ontology().name\n content = f\"\"\"\n.. toctree::\n :includehidden:\n :hidden:\n\n Reference Index <{docname}>\n\n\"\"\"\n outpath = Path(indexfile)\n if not overwrite and outpath.exists():\n warnings.warn(f\"index.rst file already exists: {outpath}\")\n return\n\n outpath.write_text(content, encoding=\"utf8\")\n\n def write_conf_template(\n self, conffile=\"conf.py\", docfile=None, overwrite=False\n ):\n \"\"\"Write basic template sphinx conf.py file to disk.\n\n Arguments:\n conffile: Name of configuration file to write.\n docfile: Name of generated documentation file. If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n # pylint: disable=redefined-builtin\n md = self.module_documentations[0]\n\n iri = md.ontology.base_iri.rstrip(\"#/\")\n authors = sorted(md.graph.objects(URIRef(iri), DCTERMS.creator))\n license = md.graph.value(URIRef(iri), DCTERMS.license, default=None)\n release = md.graph.value(URIRef(iri), OWL.versionInfo, default=\"1.0\")\n\n author = \", \".join(a.value for a in authors) if authors else \"<AUTHOR>\"\n copyright = license if license else f\"{time.strftime('%Y')}, {author}\"\n\n content = f\"\"\"\n# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n\nproject = '{md.ontology.name}'\ncopyright = '{copyright}'\nauthor = '{author}'\nrelease = '{release}'\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions = []\n\ntemplates_path = ['_templates']\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = 'alabaster'\nhtml_static_path = ['_static']\n\"\"\"\n if not conffile:\n conffile = Path(docfile).with_name(\"conf.py\")\n if overwrite and conffile.exists():\n warnings.warn(f\"conf.py file already exists: {conffile}\")\n return\n\n conffile.write_text(content, encoding=\"utf8\")\n
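A hedged end-to-end sketch of generating Sphinx-ready reference documentation; the IRI, the regular expression and the file names are assumptions:

```python
from pathlib import Path
from ontopy.ontodoc_rst import OntologyDocumentation

od = OntologyDocumentation(
    "http://emmo.info/emmo",          # may also be an Ontology instance or a Path
    imported=True,
    iri_regex="^http://emmo.info",    # only document entities under this namespace
)
od.write_refdoc("emmo.rst")           # reference documentation
od.write_index_template("index.rst", docfile="emmo.rst")
# write_conf_template() expects a Path-like conffile in the code shown above.
od.write_conf_template(Path("conf.py"), docfile="emmo.rst")
```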
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.get_header","title":"get_header(self)
","text":"Return a the reStructuredText header as a string.
Source code in ontopy/ontodoc_rst.py
def get_header(self) -> str:\n \"\"\"Return a the reStructuredText header as a string.\"\"\"\n return \"\"\"\n==========\nReferences\n==========\n\"\"\"\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.get_refdoc","title":"get_refdoc(self, header=True, subsections='all')
","text":"Return reference documentation of all module entities.
Parameters:
- header (bool, default True): Whether to also include the header in the returned documentation.
- subsections (str, default 'all'): Comma-separated list of subsections to include in the returned documentation. See ModuleDocumentation.get_refdoc() for more info.
Returns:
str: String with reference documentation.
Source code in ontopy/ontodoc_rst.py
def get_refdoc(self, header: bool = True, subsections: str = \"all\") -> str:\n \"\"\"Return reference documentation of all module entities.\n\n Arguments:\n header: Whether to also include the header in the returned\n documentation.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n\n Returns:\n String with reference documentation.\n \"\"\"\n moduledocs = []\n if header:\n moduledocs.append(self.get_header())\n moduledocs.extend(\n md.get_refdoc(subsections=subsections)\n for md in self.module_documentations\n if md.nonempty()\n )\n return \"\\n\".join(moduledocs)\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.top_ontology","title":"top_ontology(self)
","text":"Return the top-level ontology.
Source code inontopy/ontodoc_rst.py
def top_ontology(self) -> Ontology:\n \"\"\"Return the top-level ontology.\"\"\"\n return self.module_documentations[0].ontology\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.write_conf_template","title":"write_conf_template(self, conffile='conf.py', docfile=None, overwrite=False)
","text":"Write basic template sphinx conf.py file to disk.
Parameters:
- conffile (default 'conf.py'): Name of configuration file to write.
- docfile (default None): Name of generated documentation file. If not given, the name of the top ontology will be used.
- overwrite (default False): Whether to overwrite an existing file.
Source code in ontopy/ontodoc_rst.py
def write_conf_template(\n self, conffile=\"conf.py\", docfile=None, overwrite=False\n ):\n \"\"\"Write basic template sphinx conf.py file to disk.\n\n Arguments:\n conffile: Name of configuration file to write.\n docfile: Name of generated documentation file. If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n # pylint: disable=redefined-builtin\n md = self.module_documentations[0]\n\n iri = md.ontology.base_iri.rstrip(\"#/\")\n authors = sorted(md.graph.objects(URIRef(iri), DCTERMS.creator))\n license = md.graph.value(URIRef(iri), DCTERMS.license, default=None)\n release = md.graph.value(URIRef(iri), OWL.versionInfo, default=\"1.0\")\n\n author = \", \".join(a.value for a in authors) if authors else \"<AUTHOR>\"\n copyright = license if license else f\"{time.strftime('%Y')}, {author}\"\n\n content = f\"\"\"\n# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n\nproject = '{md.ontology.name}'\ncopyright = '{copyright}'\nauthor = '{author}'\nrelease = '{release}'\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions = []\n\ntemplates_path = ['_templates']\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = 'alabaster'\nhtml_static_path = ['_static']\n\"\"\"\n if not conffile:\n conffile = Path(docfile).with_name(\"conf.py\")\n if overwrite and conffile.exists():\n warnings.warn(f\"conf.py file already exists: {conffile}\")\n return\n\n conffile.write_text(content, encoding=\"utf8\")\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.write_index_template","title":"write_index_template(self, indexfile='index.rst', docfile=None, overwrite=False)
","text":"Write a basic template index.rst file to disk.
Parameters:
- indexfile (default 'index.rst'): Name of index file to write.
- docfile (default None): Name of generated documentation file. If not given, the name of the top ontology will be used.
- overwrite (default False): Whether to overwrite an existing file.
Source code in ontopy/ontodoc_rst.py
def write_index_template(\n self, indexfile=\"index.rst\", docfile=None, overwrite=False\n ):\n \"\"\"Write a basic template index.rst file to disk.\n\n Arguments:\n indexfile: Name of index file to write.\n docfile: Name of generated documentation file. If not given,\n the name of the top ontology will be used.\n overwrite: Whether to overwrite an existing file.\n \"\"\"\n docname = Path(docfile).stem if docfile else self.top_ontology().name\n content = f\"\"\"\n.. toctree::\n :includehidden:\n :hidden:\n\n Reference Index <{docname}>\n\n\"\"\"\n outpath = Path(indexfile)\n if not overwrite and outpath.exists():\n warnings.warn(f\"index.rst file already exists: {outpath}\")\n return\n\n outpath.write_text(content, encoding=\"utf8\")\n
"},{"location":"api_reference/ontopy/ontodoc_rst/#ontopy.ontodoc_rst.OntologyDocumentation.write_refdoc","title":"write_refdoc(self, docfile=None, subsections='all')
","text":"Write reference documentation to disk.
Parameters:
- docfile (default None): Name of file to write to. Defaults to the name of the top ontology with extension `.rst`.
- subsections (default 'all'): Comma-separated list of subsections to include in the returned documentation. See ModuleDocumentation.get_refdoc() for more info.
Source code in ontopy/ontodoc_rst.py
def write_refdoc(self, docfile=None, subsections=\"all\"):\n \"\"\"Write reference documentation to disk.\n\n Arguments:\n docfile: Name of file to write to. Defaults to the name of\n the top ontology with extension `.rst`.\n subsections: Comma-separated list of subsections to include in\n the returned documentation. See ModuleDocumentation.get_refdoc()\n for more info.\n \"\"\"\n if not docfile:\n docfile = self.top_ontology().name + \".rst\"\n Path(docfile).write_text(\n self.get_refdoc(subsections=subsections), encoding=\"utf8\"\n )\n
"},{"location":"api_reference/ontopy/ontology/","title":"ontology","text":"A module adding additional functionality to owlready2.
If desirable some of these additions may be moved back into owlready2.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.BlankNode","title":" BlankNode
","text":"Represents a blank node.
A blank node is a node that is not a literal and has no IRI. Resources represented by blank nodes are also called anonymous resources. Only the subject or object in an RDF triple can be a blank node.
Source code in ontopy/ontology.py
class BlankNode:\n \"\"\"Represents a blank node.\n\n A blank node is a node that is not a literal and has no IRI.\n Resources represented by blank nodes are also called anonumous resources.\n Only the subject or object in an RDF triple can be a blank node.\n \"\"\"\n\n def __init__(self, onto: Union[World, Ontology], storid: int):\n \"\"\"Initiate a blank node.\n\n Args:\n onto: Ontology or World instance.\n storid: The storage id of the blank node.\n \"\"\"\n if storid >= 0:\n raise ValueError(\n f\"A BlankNode is supposed to have a negative storid: {storid}\"\n )\n self.onto = onto\n self.storid = storid\n\n def __repr__(self):\n return repr(f\"_:b{-self.storid}\")\n\n def __hash__(self):\n return hash((self.onto, self.storid))\n\n def __eq__(self, other):\n \"\"\"For now blank nodes always compare true against each other.\"\"\"\n return isinstance(other, BlankNode)\n
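A small sketch of the behaviour described above; the World instance is created only to have a valid first argument:

```python
from ontopy.ontology import BlankNode, World

world = World()
node = BlankNode(world, -1)            # storids of blank nodes must be negative
print(node)                            # -> '_:b1'
print(node == BlankNode(world, -2))    # -> True; blank nodes compare equal

# BlankNode(world, 1) would raise ValueError, since the storid is not negative.
```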
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.BlankNode.__init__","title":"__init__(self, onto, storid)
special
","text":"Initiate a blank node.
Parameters:
- onto (Union[ontopy.ontology.World, ontopy.ontology.Ontology], required): Ontology or World instance.
- storid (int, required): The storage id of the blank node.
Source code in ontopy/ontology.py
def __init__(self, onto: Union[World, Ontology], storid: int):\n \"\"\"Initiate a blank node.\n\n Args:\n onto: Ontology or World instance.\n storid: The storage id of the blank node.\n \"\"\"\n if storid >= 0:\n raise ValueError(\n f\"A BlankNode is supposed to have a negative storid: {storid}\"\n )\n self.onto = onto\n self.storid = storid\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology","title":" Ontology (Ontology)
","text":"A generic class extending owlready2.Ontology.
Additional attributes:
- iri: IRI of this ontology. Currently only used for serialisation with rdflib. Defaults to None, meaning `base_iri` will be used instead.
- label_annotations: List of label annotations, i.e. annotations that are recognised by the get_by_label() method. Defaults to `[skos:prefLabel, rdf:label, skos:altLabel]`.
- prefix: Prefix for this ontology. Defaults to None.
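A brief illustration of these attributes; the IRI is an assumption and loading it requires network access:

```python
from ontopy import get_ontology

onto = get_ontology("http://emmo.info/emmo").load()
print(onto.label_annotations)   # annotations consulted by get_by_label()
onto.prefix = "emmo"            # prefix used when resolving "emmo:SomeLabel"
print(onto.iri)                 # None unless set explicitly for serialisation
```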
Source code in ontopy/ontology.py
class Ontology(owlready2.Ontology): # pylint: disable=too-many-public-methods\n \"\"\"A generic class extending owlready2.Ontology.\n\n Additional attributes:\n iri: IRI of this ontology. Currently only used for serialisation\n with rdflib. Defaults to None, meaning `base_iri` will be used\n instead.\n label_annotations: List of label annotations, i.e. annotations\n that are recognised by the get_by_label() method. Defaults\n to `[skos:prefLabel, rdf:label, skos:altLabel]`.\n prefix: Prefix for this ontology. Defaults to None.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.iri = None\n self.label_annotations = DEFAULT_LABEL_ANNOTATIONS[:]\n self.prefix = None\n\n # Name of special unlabeled entities, like Thing, Nothing, etc...\n _special_labels = None\n\n # Some properties for customising dir() listing - useful in\n # interactive sessions...\n _dir_preflabel = isinteractive()\n _dir_label = isinteractive()\n _dir_name = False\n _dir_imported = isinteractive()\n dir_preflabel = property(\n fget=lambda self: self._dir_preflabel,\n fset=lambda self, v: setattr(self, \"_dir_preflabel\", bool(v)),\n doc=\"Whether to include entity prefLabel in dir() listing.\",\n )\n dir_label = property(\n fget=lambda self: self._dir_label,\n fset=lambda self, v: setattr(self, \"_dir_label\", bool(v)),\n doc=\"Whether to include entity label in dir() listing.\",\n )\n dir_name = property(\n fget=lambda self: self._dir_name,\n fset=lambda self, v: setattr(self, \"_dir_name\", bool(v)),\n doc=\"Whether to include entity name in dir() listing.\",\n )\n dir_imported = property(\n fget=lambda self: self._dir_imported,\n fset=lambda self, v: setattr(self, \"_dir_imported\", bool(v)),\n doc=\"Whether to include imported ontologies in dir() listing.\",\n )\n\n # Other settings\n _colon_in_label = False\n colon_in_label = property(\n fget=lambda self: self._colon_in_label,\n fset=lambda self, v: setattr(self, \"_colon_in_label\", bool(v)),\n doc=\"Whether to accept colon in name-part of IRI. \"\n \"If true, the name cannot be prefixed.\",\n )\n\n def __dir__(self):\n dirset = set(super().__dir__())\n lst = list(self.get_entities(imported=self._dir_imported))\n if self._dir_preflabel:\n dirset.update(\n str(dir.prefLabel.first())\n for dir in lst\n if hasattr(dir, \"prefLabel\")\n )\n if self._dir_label:\n dirset.update(\n str(dir.label.first()) for dir in lst if hasattr(dir, \"label\")\n )\n if self._dir_name:\n dirset.update(dir.name for dir in lst if hasattr(dir, \"name\"))\n dirset.difference_update({None}) # get rid of possible None\n return sorted(dirset)\n\n def __getitem__(self, name):\n item = super().__getitem__(name)\n if not item:\n item = self.get_by_label(name)\n return item\n\n def __getattr__(self, name):\n attr = super().__getattr__(name)\n if not attr:\n attr = self.get_by_label(name)\n return attr\n\n def __contains__(self, other):\n if self.world[other]:\n return True\n try:\n self.get_by_label(other)\n except NoSuchLabelError:\n return False\n return True\n\n def __objclass__(self):\n # Play nice with inspect...\n pass\n\n def __hash__(self):\n \"\"\"Returns a hash based on base_iri.\n This is done to keep Ontology hashable when defining __eq__.\n \"\"\"\n return hash(self.base_iri)\n\n def __eq__(self, other):\n \"\"\"Checks if this ontology is equal to `other`.\n\n This function compares the result of\n ``set(self.get_unabbreviated_triples(label='_:b'))``,\n i.e. 
blank nodes are not distinguished, but relations to blank\n nodes are included.\n \"\"\"\n return set(self.get_unabbreviated_triples(blank=\"_:b\")) == set(\n other.get_unabbreviated_triples(blank=\"_:b\")\n )\n\n def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n ):\n \"\"\"Returns all matching triples unabbreviated.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n # pylint: disable=invalid-name\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n\n def set_default_label_annotations(self):\n \"\"\"Sets the default label annotations.\"\"\"\n warnings.warn(\n \"Ontology.set_default_label_annotations() is deprecated. \"\n \"Default label annotations are set by Ontology.__init__(). \",\n DeprecationWarning,\n stacklevel=2,\n )\n self.label_annotations = DEFAULT_LABEL_ANNOTATIONS[:]\n\n def get_by_label(\n self,\n label: str,\n label_annotations: str = None,\n prefix: str = None,\n imported: bool = True,\n colon_in_label: bool = None,\n ):\n \"\"\"Returns entity with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'.\n get_by_label('prefix:label') ==\n get_by_label('label', prefix='prefix').\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n imported: Whether to also look for `label` in imported ontologies.\n colon_in_label: Whether to accept colon (:) in a label or name-part\n of IRI. Defaults to the `colon_in_label` property of `self`.\n Setting this true cannot be combined with `prefix`.\n\n If several entities have the same label, only the one which is\n found first is returned.Use get_by_label_all() to get all matches.\n\n Note, if different prefixes are provided in the label and via\n the `prefix` argument a warning will be issued and the\n `prefix` argument will take precedence.\n\n A NoSuchLabelError is raised if `label` cannot be found.\n \"\"\"\n # pylint: disable=too-many-arguments,too-many-branches,invalid-name\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, must be a string: '{label}'\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n if colon_in_label is None:\n colon_in_label = self._colon_in_label\n if colon_in_label:\n if prefix:\n raise ValueError(\n \"`prefix` cannot be combined with `colon_in_label`\"\n )\n else:\n splitlabel = label.split(\":\", 1)\n if len(splitlabel) == 2 and not splitlabel[1].startswith(\"//\"):\n label = splitlabel[1]\n if prefix and prefix != splitlabel[0]:\n warnings.warn(\n f\"Prefix given both as argument ({prefix}) \"\n f\"and in label ({splitlabel[0]}). \"\n \"Prefix given in argument takes precedence. 
\"\n )\n if not prefix:\n prefix = splitlabel[0]\n\n if prefix:\n entityset = self.get_by_label_all(\n label,\n label_annotations=label_annotations,\n prefix=prefix,\n )\n if len(entityset) == 1:\n return entityset.pop()\n if len(entityset) > 1:\n raise AmbiguousLabelError(\n f\"Several entities have the same label '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n raise NoSuchLabelError(\n f\"No label annotations matches for '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n\n # Label is a full IRI\n entity = self.world[label]\n if entity:\n return entity\n\n get_triples = (\n self.world._get_data_triples_spod_spod\n if imported\n else self._get_data_triples_spod_spod\n )\n\n for storid in self._to_storids(label_annotations):\n for s, _, _, _ in get_triples(None, storid, label, None):\n return self.world[self._unabbreviate(s)]\n\n # Special labels\n if self._special_labels and label in self._special_labels:\n return self._special_labels[label]\n\n # Check if label is a name under base_iri\n entity = self.world[self.base_iri + label]\n if entity:\n return entity\n\n # Check label is the name of an entity\n for entity in self.get_entities(imported=imported):\n if label == entity.name:\n return entity\n\n raise NoSuchLabelError(f\"No label annotations matches '{label}'\")\n\n def get_by_label_all(\n self,\n label,\n label_annotations=None,\n prefix=None,\n exact_match=False,\n ) -> \"Set[Optional[owlready2.entity.EntityClass]]\":\n \"\"\"Returns set of entities with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'. Wildcard matching\n using glob pattern is also supported if `exact_match` is set to\n false.\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n exact_match: Do not treat \"*\" and brackets as special characters\n when matching. 
May be useful if your ontology has labels\n containing such labels.\n\n Returns:\n Set of all matching entities or an empty set if no matches\n could be found.\n \"\"\"\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, \" f\"must be a string: {label!r}\"\n )\n if \" \" in label:\n raise ValueError(\n f\"Invalid label definition, {label!r} contains spaces.\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n entities = set()\n\n # Check label annotations\n if exact_match:\n for storid in self._to_storids(label_annotations):\n entities.update(\n self.world._get_by_storid(s)\n for s, _, _ in self.world._get_data_triples_spod_spod(\n None, storid, str(label), None\n )\n )\n else:\n for storid in self._to_storids(label_annotations):\n label_entity = self._unabbreviate(storid)\n key = (\n label_entity.name\n if hasattr(label_entity, \"name\")\n else label_entity\n )\n entities.update(self.world.search(**{key: label}))\n\n if self._special_labels and label in self._special_labels:\n entities.update(self._special_labels[label])\n\n # Check name-part of IRI\n if exact_match:\n entities.update(\n ent for ent in self.get_entities() if ent.name == str(label)\n )\n else:\n matches = fnmatch.filter(\n (ent.name for ent in self.get_entities()), label\n )\n entities.update(\n ent for ent in self.get_entities() if ent.name in matches\n )\n\n if prefix:\n return set(\n ent\n for ent in entities\n if ent.namespace.ontology.prefix == prefix\n )\n return entities\n\n def _to_storids(self, sequence, create_if_missing=False):\n \"\"\"Return a list of storid's corresponding to the elements in the\n sequence `sequence`.\n\n The elements may be either be full IRIs (strings) or Owlready2\n entities with an associated storid.\n\n If `create_if_missing` is true, new Owlready2 entities will be\n created for IRIs that not already are associated with an\n entity. Otherwise such IRIs will be skipped in the returned\n list.\n \"\"\"\n if not sequence:\n return []\n storids = []\n for element in sequence:\n if hasattr(element, \"storid\"):\n storids.append(element.storid)\n else:\n storid = self.world._abbreviate(element, create_if_missing)\n if storid:\n storids.append(storid)\n return storids\n\n def add_label_annotation(self, iri):\n \"\"\"Adds label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.add_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n if iri not in self.label_annotations:\n self.label_annotations.append(iri)\n\n def remove_label_annotation(self, iri):\n \"\"\"Removes label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.remove_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n try:\n self.label_annotations.remove(iri)\n except ValueError:\n pass\n\n def set_common_prefix(\n self,\n iri_base: str = \"http://emmo.info/emmo\",\n prefix: str = \"emmo\",\n visited: \"Optional[Set]\" = None,\n ) -> None:\n \"\"\"Set a common prefix for all imported ontologies\n with the same first part of the base_iri.\n\n Args:\n iri_base: The start of the base_iri to look for. Defaults to\n the emmo base_iri http://emmo.info/emmo\n prefix: the desired prefix. Defaults to emmo.\n visited: Ontologies to skip. 
Only intended for internal use.\n \"\"\"\n if visited is None:\n visited = set()\n if self.base_iri.startswith(iri_base):\n self.prefix = prefix\n for onto in self.imported_ontologies:\n if not onto in visited:\n visited.add(onto)\n onto.set_common_prefix(\n iri_base=iri_base, prefix=prefix, visited=visited\n )\n\n def load( # pylint: disable=too-many-arguments,arguments-renamed\n self,\n only_local=False,\n filename=None,\n format=None, # pylint: disable=redefined-builtin\n reload=None,\n reload_if_newer=False,\n url_from_catalog=None,\n catalog_file=\"catalog-v001.xml\",\n emmo_based=True,\n prefix=None,\n prefix_emmo=None,\n **kwargs,\n ):\n \"\"\"Load the ontology.\n\n Arguments\n ---------\n only_local: bool\n Whether to only read local files. This requires that you\n have appended the path to the ontology to owlready2.onto_path.\n filename: str\n Path to file to load the ontology from. Defaults to `base_iri`\n provided to get_ontology().\n format: str\n Format of `filename`. Default is inferred from `filename`\n extension.\n reload: bool\n Whether to reload the ontology if it is already loaded.\n reload_if_newer: bool\n Whether to reload the ontology if the source has changed since\n last time it was loaded.\n url_from_catalog: bool | None\n Whether to use catalog file to resolve the location of `base_iri`.\n If None, the catalog file is used if it exists in the same\n directory as `filename`.\n catalog_file: str\n Name of Prot\u00e8g\u00e8 catalog file in the same folder as the\n ontology. This option is used together with `only_local` and\n defaults to \"catalog-v001.xml\".\n emmo_based: bool\n Whether this is an EMMO-based ontology or not, default `True`.\n prefix: defaults to self.get_namespace.name if\n prefix_emmo: bool, default None. If emmo_based is True it\n defaults to True and sets the prefix of all imported ontologies\n with base_iri starting with 'http://emmo.info/emmo' to emmo\n kwargs:\n Additional keyword arguments are passed on to\n owlready2.Ontology.load().\n \"\"\"\n # TODO: make sure that `only_local` argument is respected...\n\n if self.loaded:\n return self\n self._load(\n only_local=only_local,\n filename=filename,\n format=format,\n reload=reload,\n reload_if_newer=reload_if_newer,\n url_from_catalog=url_from_catalog,\n catalog_file=catalog_file,\n **kwargs,\n )\n\n # Enable optimised search by get_by_label()\n if self._special_labels is None and emmo_based:\n top = self.world[\"http://www.w3.org/2002/07/owl#topObjectProperty\"]\n self._special_labels = {\n \"Thing\": owlready2.Thing,\n \"Nothing\": owlready2.Nothing,\n \"topObjectProperty\": top,\n \"owl:Thing\": owlready2.Thing,\n \"owl:Nothing\": owlready2.Nothing,\n \"owl:topObjectProperty\": top,\n }\n # set prefix if another prefix is desired\n # if we do this, shouldn't we make the name of all\n # entities of the given ontology to the same?\n if prefix:\n self.prefix = prefix\n else:\n self.prefix = self.name\n\n if emmo_based and prefix_emmo is None:\n prefix_emmo = True\n if prefix_emmo:\n self.set_common_prefix()\n\n return self\n\n def _load( # pylint: disable=too-many-arguments,too-many-locals,too-many-branches,too-many-statements\n self,\n only_local=False,\n filename=None,\n format=None, # pylint: disable=redefined-builtin\n reload=None,\n reload_if_newer=False,\n url_from_catalog=None,\n catalog_file=\"catalog-v001.xml\",\n **kwargs,\n ):\n \"\"\"Help function for load().\"\"\"\n web_protocol = \"http://\", \"https://\", \"ftp://\"\n url = str(filename) if filename else 
self.base_iri.rstrip(\"/#\")\n if url.startswith(web_protocol):\n baseurl = os.path.dirname(url)\n catalogurl = baseurl + \"/\" + catalog_file\n else:\n if url.startswith(\"file://\"):\n url = url[7:]\n url = os.path.normpath(os.path.abspath(url))\n baseurl = os.path.dirname(url)\n catalogurl = os.path.join(baseurl, catalog_file)\n\n def getmtime(path):\n if os.path.exists(path):\n return os.path.getmtime(path)\n return 0.0\n\n # Resolve url from catalog file\n iris = {}\n dirs = set()\n if url_from_catalog or url_from_catalog is None:\n not_reload = not reload and (\n not reload_if_newer\n or getmtime(catalogurl)\n > self.world._cached_catalogs[catalogurl][0]\n )\n # get iris from catalog already in cached catalogs\n if catalogurl in self.world._cached_catalogs and not_reload:\n _, iris, dirs = self.world._cached_catalogs[catalogurl]\n # do not update cached_catalogs if url already in _iri_mappings\n # and reload not forced\n elif url in self.world._iri_mappings and not_reload:\n pass\n # update iris from current catalogurl\n else:\n try:\n iris, dirs = read_catalog(\n uri=catalogurl,\n recursive=False,\n return_paths=True,\n catalog_file=catalog_file,\n )\n except ReadCatalogError:\n if url_from_catalog is not None:\n raise\n self.world._cached_catalogs[catalogurl] = (0.0, {}, set())\n else:\n self.world._cached_catalogs[catalogurl] = (\n getmtime(catalogurl),\n iris,\n dirs,\n )\n self.world._iri_mappings.update(iris)\n resolved_url = self.world._iri_mappings.get(url, url)\n # Append paths from catalog file to onto_path\n for path in sorted(dirs, reverse=True):\n if path not in owlready2.onto_path:\n owlready2.onto_path.append(path)\n\n # Use catalog file to update IRIs of imported ontologies\n # in internal store and try to load again...\n if self.world._iri_mappings:\n for abbrev_iri in self.world._get_obj_triples_sp_o(\n self.storid, owlready2.owl_imports\n ):\n iri = self._unabbreviate(abbrev_iri)\n if iri in self.world._iri_mappings:\n self._del_obj_triple_spo(\n self.storid, owlready2.owl_imports, abbrev_iri\n )\n self._add_obj_triple_spo(\n self.storid,\n owlready2.owl_imports,\n self._abbreviate(self.world._iri_mappings[iri]),\n )\n\n # Load ontology\n try:\n self.loaded = False\n fmt = format if format else guess_format(resolved_url, fmap=FMAP)\n if fmt and fmt not in OWLREADY2_FORMATS:\n # Convert filename to rdfxml before passing it to owlready2\n graph = rdflib.Graph()\n try:\n graph.parse(resolved_url, format=fmt)\n except URLError as err:\n raise EMMOntoPyException(\n \"URL error\", err, resolved_url\n ) from err\n\n with tempfile.NamedTemporaryFile() as handle:\n graph.serialize(destination=handle, format=\"xml\")\n handle.seek(0)\n return super().load(\n only_local=True,\n fileobj=handle,\n reload=reload,\n reload_if_newer=reload_if_newer,\n format=\"rdfxml\",\n **kwargs,\n )\n elif resolved_url.startswith(web_protocol):\n return super().load(\n only_local=only_local,\n reload=reload,\n reload_if_newer=reload_if_newer,\n **kwargs,\n )\n\n else:\n with open(resolved_url, \"rb\") as handle:\n return super().load(\n only_local=only_local,\n fileobj=handle,\n reload=reload,\n reload_if_newer=reload_if_newer,\n **kwargs,\n )\n except owlready2.OwlReadyOntologyParsingError:\n # Owlready2 is not able to parse the ontology - most\n # likely because imported ontologies must be resolved\n # using the catalog file.\n\n # Reraise if we don't want to read from the catalog file\n if not url_from_catalog and url_from_catalog is not None:\n raise\n\n warnings.warn(\n \"Recovering from 
Owlready2 parsing error... might be deprecated\"\n )\n\n # Copy the ontology into a local folder and try again\n with tempfile.TemporaryDirectory() as handle:\n output = os.path.join(handle, os.path.basename(resolved_url))\n convert_imported(\n input_ontology=resolved_url,\n output_ontology=output,\n input_format=fmt,\n output_format=\"xml\",\n url_from_catalog=url_from_catalog,\n catalog_file=catalog_file,\n )\n\n self.loaded = False\n with open(output, \"rb\") as handle:\n try:\n return super().load(\n only_local=True,\n fileobj=handle,\n reload=reload,\n reload_if_newer=reload_if_newer,\n format=\"rdfxml\",\n **kwargs,\n )\n except HTTPError as exc: # Add url to HTTPError message\n raise HTTPError(\n url=exc.url,\n code=exc.code,\n msg=f\"{exc.url}: {exc.msg}\",\n hdrs=exc.hdrs,\n fp=exc.fp,\n ).with_traceback(exc.__traceback__)\n\n except HTTPError as exc: # Add url to HTTPError message\n raise HTTPError(\n url=exc.url,\n code=exc.code,\n msg=f\"{exc.url}: {exc.msg}\",\n hdrs=exc.hdrs,\n fp=exc.fp,\n ).with_traceback(exc.__traceback__)\n\n def save(\n self,\n filename=None,\n format=None,\n dir=\".\",\n mkdir=False,\n overwrite=False,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n append_catalog=False,\n catalog_file=\"catalog-v001.xml\",\n **kwargs,\n ) -> Path:\n \"\"\"Writes the ontology to file.\n\n Parameters\n ----------\n filename: None | str | Path\n Name of file to write to. If None, it defaults to the name\n of the ontology with `format` as file extension.\n format: str\n Output format. The default is to infer it from `filename`.\n dir: str | Path\n If `filename` is a relative path, it is a relative path to `dir`.\n mkdir: bool\n Whether to create output directory if it does not exists.\n owerwrite: bool\n If true and `filename` exists, remove the existing file before\n saving. The default is to append to an existing ontology.\n recursive: bool\n Whether to save imported ontologies recursively. This is\n commonly combined with `filename=None`, `dir` and `mkdir`.\n Note that depending on the structure of the ontology and\n all imports the ontology might end up in a subdirectory.\n If filename is given, the ontology is saved to the given\n directory.\n The path to the final location is returned.\n squash: bool\n If true, rdflib will be used to save the current ontology\n together with all its sub-ontologies into `filename`.\n It makes no sense to combine this with `recursive`.\n write_catalog_file: bool\n Whether to also write a catalog file to disk.\n append_catalog: bool\n Whether to append to an existing catalog file.\n catalog_file: str | Path\n Name of catalog file. If not an absolute path, it is prepended\n to `dir`.\n\n Returns\n --------\n The path to the saved ontology.\n \"\"\"\n # pylint: disable=redefined-builtin,too-many-arguments\n # pylint: disable=too-many-statements,too-many-branches\n # pylint: disable=too-many-locals,arguments-renamed,invalid-name\n\n if not _validate_installed_version(\n package=\"rdflib\", min_version=\"6.0.0\"\n ) and format == FMAP.get(\"ttl\", \"\"):\n from rdflib import ( # pylint: disable=import-outside-toplevel\n __version__ as __rdflib_version__,\n )\n\n warnings.warn(\n IncompatibleVersion(\n \"To correctly convert to Turtle format, rdflib must be \"\n \"version 6.0.0 or greater, however, the detected rdflib \"\n \"version used by your Python interpreter is \"\n f\"{__rdflib_version__!r}. 
For more information see the \"\n \"'Known issues' section of the README.\"\n )\n )\n revmap = {value: key for key, value in FMAP.items()}\n if filename is None:\n if format:\n fmt = revmap.get(format, format)\n file = f\"{self.name}.{fmt}\"\n else:\n raise TypeError(\"`filename` and `format` cannot both be None.\")\n else:\n file = filename\n filepath = os.path.join(\n dir, file if isinstance(file, (str, Path)) else file.name\n )\n returnpath = filepath\n\n dir = Path(filepath).resolve().parent\n\n if mkdir:\n outdir = Path(filepath).parent.resolve()\n if not outdir.exists():\n outdir.mkdir(parents=True)\n\n if not format:\n format = guess_format(file, fmap=FMAP)\n fmt = revmap.get(format, format)\n\n if overwrite and os.path.exists(filepath):\n os.remove(filepath)\n\n if recursive:\n if squash:\n raise ValueError(\n \"`recursive` and `squash` should not both be true\"\n )\n layout = directory_layout(self)\n if filename:\n layout[self] = file.rstrip(f\".{fmt}\")\n # Update path to where the ontology is saved\n # Note that filename should include format\n # when given\n returnpath = Path(dir) / f\"{layout[self]}.{fmt}\"\n for onto, path in layout.items():\n fname = Path(dir) / f\"{path}.{fmt}\"\n onto.save(\n filename=fname,\n format=format,\n dir=dir,\n mkdir=mkdir,\n overwrite=overwrite,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n **kwargs,\n )\n\n if write_catalog_file:\n catalog_files = set()\n irimap = {}\n for onto, path in layout.items():\n irimap[onto.get_version(as_iri=True)] = (\n f\"{dir}/{path}.{fmt}\"\n )\n catalog_files.add(Path(path).parent / catalog_file)\n\n for catfile in catalog_files:\n write_catalog(\n irimap.copy(),\n output=catfile,\n directory=dir,\n append=append_catalog,\n )\n elif squash:\n URIRef, RDF, OWL = rdflib.URIRef, rdflib.RDF, rdflib.OWL\n\n # Make a copy of the owlready2 graph object to not mess with\n # owlready2 internals\n graph = rdflib.Graph()\n for triple in self.world.as_rdflib_graph():\n graph.add(triple)\n\n # Add common namespaces unknown to rdflib\n extra_namespaces = [\n (\"\", self.base_iri),\n (\"swrl\", \"http://www.w3.org/2003/11/swrl#\"),\n (\"bibo\", \"http://purl.org/ontology/bibo/\"),\n ]\n for prefix, iri in extra_namespaces:\n graph.namespace_manager.bind(\n prefix, rdflib.Namespace(iri), override=False\n )\n\n # Remove all ontology-declarations in the graph that are\n # not the current ontology.\n for s, _, _ in graph.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n if str(s).rstrip(\"/#\") != self.base_iri.rstrip(\"/#\"):\n for (\n _,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (s, None, None)\n ):\n graph.remove((s, p, o))\n graph.remove((s, OWL.imports, None))\n\n # Insert correct IRI of the ontology\n if self.iri:\n base_iri = URIRef(self.base_iri)\n for s, p, o in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((URIRef(self.iri), p, o))\n\n graph.serialize(destination=filepath, format=format)\n elif format in OWLREADY2_FORMATS:\n super().save(file=filepath, format=fmt, **kwargs)\n else:\n # The try-finally clause is needed for cleanup and because\n # we have to provide delete=False to NamedTemporaryFile\n # since Windows does not allow to reopen an already open\n # file.\n try:\n with tempfile.NamedTemporaryFile(\n suffix=\".owl\", delete=False\n ) as handle:\n tmpfile = handle.name\n super().save(tmpfile, format=\"ntriples\", **kwargs)\n graph = rdflib.Graph()\n 
graph.parse(tmpfile, format=\"ntriples\")\n graph.namespace_manager.bind(\n \"\", rdflib.Namespace(self.base_iri)\n )\n if self.iri:\n base_iri = rdflib.URIRef(self.base_iri)\n for (\n s,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((rdflib.URIRef(self.iri), p, o))\n graph.serialize(destination=filepath, format=format)\n finally:\n os.remove(tmpfile)\n\n if write_catalog_file and not recursive:\n write_catalog(\n {self.get_version(as_iri=True): filepath},\n output=catalog_file,\n directory=dir,\n append=append_catalog,\n )\n return Path(returnpath)\n\n def copy(self):\n \"\"\"Return a copy of the ontology.\"\"\"\n with tempfile.TemporaryDirectory() as dirname:\n filename = self.save(\n dir=dirname,\n format=\"turtle\",\n recursive=True,\n write_catalog_file=True,\n mkdir=True,\n )\n ontology = get_ontology(filename).load()\n ontology.name = self.name\n return ontology\n\n def get_imported_ontologies(self, recursive=False):\n \"\"\"Return a list with imported ontologies.\n\n If `recursive` is `True`, ontologies imported by imported ontologies\n are also returned.\n \"\"\"\n\n def rec_imported(onto):\n for ontology in onto.imported_ontologies:\n # pylint: disable=possibly-used-before-assignment\n if ontology not in imported:\n imported.add(ontology)\n rec_imported(ontology)\n\n if recursive:\n imported = set()\n rec_imported(self)\n return list(imported)\n\n return self.imported_ontologies\n\n def get_entities( # pylint: disable=too-many-arguments\n self,\n imported=True,\n classes=True,\n individuals=True,\n object_properties=True,\n data_properties=True,\n annotation_properties=True,\n ):\n \"\"\"Return a generator over (optionally) all classes, individuals,\n object_properties, data_properties and annotation_properties.\n\n If `imported` is `True`, entities in imported ontologies will also\n be included.\n \"\"\"\n generator = []\n if classes:\n generator.append(self.classes(imported))\n if individuals:\n generator.append(self.individuals(imported))\n if object_properties:\n generator.append(self.object_properties(imported))\n if data_properties:\n generator.append(self.data_properties(imported))\n if annotation_properties:\n generator.append(self.annotation_properties(imported))\n yield from itertools.chain(*generator)\n\n def classes(self, imported=False):\n \"\"\"Returns an generator over all classes.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"classes\", imported=imported)\n\n def _entities(\n self, entity_type, imported=False\n ): # pylint: disable=too-many-branches\n \"\"\"Returns an generator over all entities of the desired type.\n This is a helper function for `classes()`, `individuals()`,\n `object_properties()`, `data_properties()` and\n `annotation_properties()`.\n\n Arguments:\n entity_type: The type of entity desired given as a string.\n Can be any of `classes`, `individuals`,\n `object_properties`, `data_properties` and\n `annotation_properties`.\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n\n generator = []\n if imported:\n ontologies = self.get_imported_ontologies(recursive=True)\n ontologies.append(self)\n for onto in ontologies:\n if entity_type == \"classes\":\n for cls in list(onto.classes()):\n generator.append(cls)\n elif entity_type == \"individuals\":\n for ind in list(onto.individuals()):\n generator.append(ind)\n elif entity_type == 
\"object_properties\":\n for prop in list(onto.object_properties()):\n generator.append(prop)\n elif entity_type == \"data_properties\":\n for prop in list(onto.data_properties()):\n generator.append(prop)\n elif entity_type == \"annotation_properties\":\n for prop in list(onto.annotation_properties()):\n generator.append(prop)\n else:\n if entity_type == \"classes\":\n generator = super().classes()\n elif entity_type == \"individuals\":\n generator = super().individuals()\n elif entity_type == \"object_properties\":\n generator = super().object_properties()\n elif entity_type == \"data_properties\":\n generator = super().data_properties()\n elif entity_type == \"annotation_properties\":\n generator = super().annotation_properties()\n\n yield from generator\n\n def individuals(self, imported=False):\n \"\"\"Returns an generator over all individuals.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"individuals\", imported=imported)\n\n def object_properties(self, imported=False):\n \"\"\"Returns an generator over all object_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"object_properties\", imported=imported)\n\n def data_properties(self, imported=False):\n \"\"\"Returns an generator over all data_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"data_properties\", imported=imported)\n\n def annotation_properties(self, imported=False):\n \"\"\"Returns an generator over all annotation_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n\n \"\"\"\n return self._entities(\"annotation_properties\", imported=imported)\n\n def get_root_classes(self, imported=False):\n \"\"\"Returns a list or root classes.\"\"\"\n return [\n cls\n for cls in self.classes(imported=imported)\n if not cls.ancestors().difference(set([cls, owlready2.Thing]))\n ]\n\n def get_root_object_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.object_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n\n def get_root_data_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.data_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n\n def get_roots(self, imported=False):\n \"\"\"Returns all class, object_property and data_property roots.\"\"\"\n roots = self.get_root_classes(imported=imported)\n roots.extend(self.get_root_object_properties(imported=imported))\n roots.extend(self.get_root_data_properties(imported=imported))\n return roots\n\n def sync_python_names(self, annotations=(\"prefLabel\", \"label\", \"altLabel\")):\n \"\"\"Update the `python_name` attribute of all properties.\n\n The python_name attribute will be set to the first non-empty\n annotation in the sequence of annotations in `annotations` for\n the property.\n \"\"\"\n\n def update(gen):\n for prop in gen:\n for annotation in annotations:\n if hasattr(prop, annotation) and getattr(prop, annotation):\n prop.python_name = getattr(prop, annotation).first()\n break\n\n update(\n self.get_entities(\n classes=False,\n individuals=False,\n object_properties=False,\n data_properties=False,\n )\n )\n update(\n self.get_entities(\n classes=False, individuals=False, 
annotation_properties=False\n )\n )\n\n def rename_entities(\n self,\n annotations=(\"prefLabel\", \"label\", \"altLabel\"),\n ):\n \"\"\"Set `name` of all entities to the first non-empty annotation in\n `annotations`.\n\n Warning, this method changes all IRIs in the ontology. However,\n it may be useful to make the ontology more readable and to work\n with it together with a triple store.\n \"\"\"\n for entity in self.get_entities():\n for annotation in annotations:\n if hasattr(entity, annotation):\n name = getattr(entity, annotation).first()\n if name:\n entity.name = name\n break\n\n def sync_reasoner(\n self, reasoner=\"HermiT\", include_imported=False, **kwargs\n ):\n \"\"\"Update current ontology by running the given reasoner.\n\n Supported values for `reasoner` are 'HermiT' (default), Pellet\n and 'FaCT++'.\n\n If `include_imported` is true, the reasoner will also reason\n over imported ontologies. Note that this may be **very** slow.\n\n Keyword arguments are passed to the underlying owlready2 function.\n \"\"\"\n # pylint: disable=too-many-branches\n\n removed_equivalent = defaultdict(list)\n removed_subclasses = defaultdict(list)\n\n if reasoner == \"FaCT++\":\n sync = sync_reasoner_factpp\n elif reasoner == \"Pellet\":\n sync = owlready2.sync_reasoner_pellet\n elif reasoner == \"HermiT\":\n sync = owlready2.sync_reasoner_hermit\n\n # Remove custom data propertyes, otherwise HermiT will crash\n datatype_iri = \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n\n for cls in self.classes(imported=include_imported):\n remove_eq = []\n for i, r in enumerate(cls.equivalent_to):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_eq.append(i)\n removed_equivalent[cls].append(r)\n for i in reversed(remove_eq):\n del cls.equivalent_to[i]\n\n remove_subcls = []\n for i, r in enumerate(cls.is_a):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_subcls.append(i)\n removed_subclasses[cls].append(r)\n for i in reversed(remove_subcls):\n del cls.is_a[i]\n\n else:\n raise ValueError(\n f\"Unknown reasoner '{reasoner}'. Supported reasoners \"\n \"are 'Pellet', 'HermiT' and 'FaCT++'.\"\n )\n\n # For some reason we must visit all entities once before running\n # the reasoner...\n list(self.get_entities())\n\n with self:\n if include_imported:\n sync(self.world, **kwargs)\n else:\n sync(self, **kwargs)\n\n # Restore removed custom data properties\n for cls, eqs in removed_equivalent.items():\n cls.extend(eqs)\n for cls, subcls in removed_subclasses.items():\n cls.extend(subcls)\n\n def sync_attributes( # pylint: disable=too-many-branches\n self,\n name_policy=None,\n name_prefix=\"\",\n class_docstring=\"comment\",\n sync_imported=False,\n ):\n \"\"\"This method is intended to be called after you have added new\n classes (typically via Python) to make sure that attributes like\n `label` and `comments` are defined.\n\n If a class, object property, data property or annotation\n property in the current ontology has no label, the name of\n the corresponding Python class will be assigned as label.\n\n If a class, object property, data property or annotation\n property has no comment, it will be assigned the docstring of\n the corresponding Python class.\n\n `name_policy` specify wether and how the names in the ontology\n should be updated. 
Valid values are:\n None not changed\n \"uuid\" `name_prefix` followed by a global unique id (UUID).\n If the name is already valid accoridng to this standard\n it will not be regenerated.\n \"sequential\" `name_prefix` followed a sequantial number.\n EMMO conventions imply ``name_policy=='uuid'``.\n\n If `sync_imported` is true, all imported ontologies are also\n updated.\n\n The `class_docstring` argument specifies the annotation that\n class docstrings are mapped to. Defaults to \"comment\".\n \"\"\"\n for cls in itertools.chain(\n self.classes(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n ):\n if not hasattr(cls, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=unused-variable\n class prefLabel(owlready2.label):\n pass\n\n cls.prefLabel = [locstr(cls.__name__, lang=\"en\")]\n elif not cls.prefLabel:\n cls.prefLabel.append(locstr(cls.__name__, lang=\"en\"))\n if class_docstring and hasattr(cls, \"__doc__\") and cls.__doc__:\n getattr(cls, class_docstring).append(\n locstr(inspect.cleandoc(cls.__doc__), lang=\"en\")\n )\n\n for ind in self.individuals():\n if not hasattr(ind, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=function-redefined\n class prefLabel(owlready2.label):\n iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n\n ind.prefLabel = [locstr(ind.name, lang=\"en\")]\n elif not ind.prefLabel:\n ind.prefLabel.append(locstr(ind.name, lang=\"en\"))\n\n chain = itertools.chain(\n self.classes(),\n self.individuals(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n )\n if name_policy == \"uuid\":\n for obj in chain:\n try:\n # Passing the following means that the name is valid\n # and need not be regenerated.\n if not obj.name.startswith(name_prefix):\n raise ValueError\n uuid.UUID(obj.name.lstrip(name_prefix), version=5)\n except ValueError:\n obj.name = name_prefix + str(\n uuid.uuid5(uuid.NAMESPACE_DNS, obj.name)\n )\n elif name_policy == \"sequential\":\n for obj in chain:\n counter = 0\n while f\"{self.base_iri}{name_prefix}{counter}\" in self:\n counter += 1\n obj.name = f\"{name_prefix}{counter}\"\n elif name_policy is not None:\n raise TypeError(f\"invalid name_policy: {name_policy!r}\")\n\n if sync_imported:\n for onto in self.imported_ontologies:\n onto.sync_attributes()\n\n def get_relations(self):\n \"\"\"Returns a generator for all relations.\"\"\"\n warnings.warn(\n \"Ontology.get_relations() is deprecated. Use \"\n \"onto.object_properties() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self.object_properties()\n\n def get_annotations(self, entity):\n \"\"\"Returns a dict with annotations for `entity`. Entity may be given\n either as a ThingClass object or as a label.\"\"\"\n warnings.warn(\n \"Ontology.get_annotations(entity) is deprecated. 
Use \"\n \"entity.get_annotations() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n res = {\"comment\": getattr(entity, \"comment\", \"\")}\n for annotation in self.annotation_properties():\n res[annotation.label.first()] = [\n obj.strip('\"')\n for _, _, obj in self.get_triples(\n entity.storid, annotation.storid, None\n )\n ]\n return res\n\n def get_branch( # pylint: disable=too-many-arguments\n self,\n root,\n leaves=(),\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n sort=False,\n ):\n \"\"\"Returns a set with all direct and indirect subclasses of `root`.\n Any subclass found in the sequence `leaves` will be included in\n the returned list, but its subclasses will not. The elements\n of `leaves` may be ThingClass objects or labels.\n\n Subclasses of any subclass found in the sequence `leaves` will\n be excluded from the returned list, where the elements of `leaves`\n may be ThingClass objects or labels.\n\n If `include_leaves` is true, the leaves are included in the returned\n list, otherwise they are not.\n\n If `strict_leaves` is true, any descendant of a leaf will be excluded\n in the returned set.\n\n If given, `exclude` may be a sequence of classes, including\n their subclasses, to exclude from the output.\n\n If `sort` is True, a list sorted according to depth and label\n will be returned instead of a set.\n \"\"\"\n\n def _branch(root, leaves):\n if root not in leaves:\n branch = {\n root,\n }\n for cls in root.subclasses():\n # Defining a branch is actually quite tricky. Consider\n # the case:\n #\n # L isA R\n # A isA L\n # A isA R\n #\n # where R is the root, L is a leaf and A is a direct\n # child of both. Logically, since A is a child of the\n # leaf we want to skip A. But a strait forward imple-\n # mentation will see that A is a child of the root and\n # include it. Requireing that the R should be a strict\n # parent of A solves this.\n if root in cls.get_parents(strict=True):\n branch.update(_branch(cls, leaves))\n else:\n branch = (\n {\n root,\n }\n if include_leaves\n else set()\n )\n return branch\n\n if isinstance(root, str):\n root = self.get_by_label(root)\n\n leaves = set(\n self.get_by_label(leaf) if isinstance(leaf, str) else leaf\n for leaf in leaves\n )\n leaves.discard(root)\n\n if exclude:\n exclude = set(\n self.get_by_label(e) if isinstance(e, str) else e\n for e in exclude\n )\n leaves.update(exclude)\n\n branch = _branch(root, leaves)\n\n # Exclude all descendants of any leaf\n if strict_leaves:\n descendants = root.descendants()\n for leaf in leaves:\n if leaf in descendants:\n branch.difference_update(\n leaf.descendants(include_self=False)\n )\n\n if exclude:\n branch.difference_update(exclude)\n\n # Sort according to depth, then by label\n if sort:\n branch = sorted(\n sorted(branch, key=asstring),\n key=lambda x: len(x.mro()),\n )\n\n return branch\n\n def is_individual(self, entity):\n \"\"\"Returns true if entity is an individual.\"\"\"\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return isinstance(entity, owlready2.Thing)\n\n # FIXME - deprecate this method as soon the ThingClass property\n # `defined_class` works correct in Owlready2\n def is_defined(self, entity):\n \"\"\"Returns true if the entity is a defined class.\n\n Deprecated, use the `is_defined` property of the classes\n (ThingClass subclasses) instead.\n \"\"\"\n warnings.warn(\n \"This method is deprecated. 
Use the `is_defined` property of \"\n \"the classes instad.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return hasattr(entity, \"equivalent_to\") and bool(entity.equivalent_to)\n\n def get_version(self, as_iri=False) -> str:\n \"\"\"Returns the version number of the ontology as inferred from the\n owl:versionIRI tag or, if owl:versionIRI is not found, from\n owl:versionINFO.\n\n If `as_iri` is True, the full versionIRI is returned.\n \"\"\"\n version_iri_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionIRI\"\n )\n tokens = self.get_triples(s=self.storid, p=version_iri_storid)\n if (not tokens) and (as_iri is True):\n raise TypeError(\n \"No owl:versionIRI \"\n f\"in Ontology {self.base_iri!r}. \"\n \"Search for owl:versionInfo with as_iri=False\"\n )\n if tokens:\n _, _, obj = tokens[0]\n version_iri = self.world._unabbreviate(obj)\n if as_iri:\n return version_iri\n return infer_version(self.base_iri, version_iri)\n\n version_info_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionInfo\"\n )\n tokens = self.get_triples(s=self.storid, p=version_info_storid)\n if not tokens:\n raise TypeError(\n \"No versionIRI or versionInfo \" f\"in Ontology {self.base_iri!r}\"\n )\n _, _, version_info = tokens[0]\n return version_info.split(\"^^\")[0].strip('\"')\n\n def set_version(self, version=None, version_iri=None):\n \"\"\"Assign version to ontology by asigning owl:versionIRI.\n\n If `version` but not `version_iri` is provided, the version\n IRI will be the combination of `base_iri` and `version`.\n \"\"\"\n _version_iri = \"http://www.w3.org/2002/07/owl#versionIRI\"\n version_iri_storid = self.world._abbreviate(_version_iri)\n if self._has_obj_triple_spo( # pylint: disable=unexpected-keyword-arg\n # For some reason _has_obj_triples_spo exists in both\n # owlready2.namespace.Namespace (with arguments subject/predicate)\n # and in owlready2.triplelite._GraphManager (with arguments s/p)\n # owlready2.Ontology inherits from Namespace directly\n # and pylint checks that.\n # It actually accesses the one in triplelite.\n # subject=self.storid, predicate=version_iri_storid\n s=self.storid,\n p=version_iri_storid,\n ):\n self._del_obj_triple_spo(s=self.storid, p=version_iri_storid)\n\n if not version_iri:\n if not version:\n raise TypeError(\n \"Either `version` or `version_iri` must be provided\"\n )\n head, tail = self.base_iri.rstrip(\"#/\").rsplit(\"/\", 1)\n version_iri = \"/\".join([head, version, tail])\n\n self._add_obj_triple_spo(\n s=self.storid,\n p=self.world._abbreviate(_version_iri),\n o=self.world._abbreviate(version_iri),\n )\n\n def get_graph(self, **kwargs):\n \"\"\"Returns a new graph object. 
See emmo.graph.OntoGraph.\n\n Note that this method requires the Python graphviz package.\n \"\"\"\n # pylint: disable=import-outside-toplevel,cyclic-import\n from ontopy.graph import OntoGraph\n\n return OntoGraph(self, **kwargs)\n\n @staticmethod\n def common_ancestors(cls1, cls2):\n \"\"\"Return a list of common ancestors for `cls1` and `cls2`.\"\"\"\n return set(cls1.ancestors()).intersection(cls2.ancestors())\n\n def number_of_generations(self, descendant, ancestor):\n \"\"\"Return shortest distance from ancestor to descendant\"\"\"\n if ancestor not in descendant.ancestors():\n raise ValueError(\"Descendant is not a descendant of ancestor\")\n return self._number_of_generations(descendant, ancestor, 0)\n\n def _number_of_generations(self, descendant, ancestor, counter):\n \"\"\"Recursive help function to number_of_generations(), return\n distance between a ancestor-descendant pair (counter+1).\"\"\"\n if descendant.name == ancestor.name:\n return counter\n try:\n return min(\n self._number_of_generations(parent, ancestor, counter + 1)\n for parent in descendant.get_parents()\n if ancestor in parent.ancestors()\n )\n except ValueError:\n return counter\n\n def closest_common_ancestors(self, cls1, cls2):\n \"\"\"Returns a list with closest_common_ancestor for cls1 and cls2\"\"\"\n distances = {}\n for ancestor in self.common_ancestors(cls1, cls2):\n distances[ancestor] = self.number_of_generations(\n cls1, ancestor\n ) + self.number_of_generations(cls2, ancestor)\n return [\n ancestor\n for ancestor, distance in distances.items()\n if distance == min(distances.values())\n ]\n\n @staticmethod\n def closest_common_ancestor(*classes):\n \"\"\"Returns closest_common_ancestor for the given classes.\"\"\"\n mros = [cls.mro() for cls in classes]\n track = defaultdict(int)\n while mros:\n for mro in mros:\n cur = mro.pop(0)\n track[cur] += 1\n if track[cur] == len(classes):\n return cur\n if len(mro) == 0:\n mros.remove(mro)\n raise EMMOntoPyException(\n \"A closest common ancestor should always exist !\"\n )\n\n def get_ancestors(\n self,\n classes: \"Union[List, ThingClass]\",\n closest: bool = False,\n generations: int = None,\n strict: bool = True,\n ) -> set:\n \"\"\"Return ancestors of all classes in `classes`.\n Args:\n classes: class(es) for which ancestors should be returned.\n generations: Include this number of generations, default is all.\n closest: If True, return all ancestors up to and including the\n closest common ancestor. Return all if False.\n strict: If True returns only real ancestors, i.e. 
`classes` are\n are not included in the returned set.\n Returns:\n Set of ancestors to `classes`.\n \"\"\"\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n ancestors = set()\n if not classes:\n return ancestors\n\n def addancestors(entity, counter, subject):\n if counter > 0:\n for parent in entity.get_parents(strict=True):\n subject.add(parent)\n addancestors(parent, counter - 1, subject)\n\n if closest:\n if generations is not None:\n raise ValueError(\n \"Only one of `generations` or `closest` may be specified.\"\n )\n\n closest_ancestor = self.closest_common_ancestor(*classes)\n for cls in classes:\n ancestors.update(\n anc\n for anc in cls.ancestors()\n if closest_ancestor in anc.ancestors()\n )\n elif isinstance(generations, int):\n for entity in classes:\n addancestors(entity, generations, ancestors)\n else:\n ancestors.update(*(cls.ancestors() for cls in classes))\n\n if strict:\n return ancestors.difference(classes)\n return ancestors\n\n def get_descendants(\n self,\n classes: \"Union[List, ThingClass]\",\n generations: int = None,\n common: bool = False,\n ) -> set:\n \"\"\"Return descendants/subclasses of all classes in `classes`.\n Args:\n classes: class(es) for which descendants are desired.\n common: whether to only return descendants common to all classes.\n generations: Include this number of generations, default is all.\n Returns:\n A set of descendants for given number of generations.\n If 'common'=True, the common descendants are returned\n within the specified number of generations.\n 'generations' defaults to all.\n \"\"\"\n\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n descendants = {name: [] for name in classes}\n\n def _children_recursively(num, newentity, parent, descendants):\n \"\"\"Helper function to get all children up to generation.\"\"\"\n for child in self.get_children_of(newentity):\n descendants[parent].append(child)\n if num < generations:\n _children_recursively(num + 1, child, parent, descendants)\n\n if generations == 0:\n return set()\n\n if not generations:\n for entity in classes:\n descendants[entity] = entity.descendants()\n # only include proper descendants\n descendants[entity].remove(entity)\n else:\n for entity in classes:\n _children_recursively(1, entity, entity, descendants)\n\n results = descendants.values()\n if common is True:\n return set.intersection(*map(set, results))\n return set(flatten(results))\n\n def get_wu_palmer_measure(self, cls1, cls2):\n \"\"\"Return Wu-Palmer measure for semantic similarity.\n\n Returns Wu-Palmer measure for semantic similarity between\n two concepts.\n Wu, Palmer; ACL 94: Proceedings of the 32nd annual meeting on\n Association for Computational Linguistics, June 1994.\n \"\"\"\n cca = self.closest_common_ancestor(cls1, cls2)\n ccadepth = self.number_of_generations(cca, self.Thing)\n generations1 = self.number_of_generations(cls1, cca)\n generations2 = self.number_of_generations(cls2, cca)\n return 2 * ccadepth / (generations1 + generations2 + 2 * ccadepth)\n\n def new_entity(\n self,\n name: str,\n parent: Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n Iterable,\n ],\n entitytype: Optional[\n Union[\n str,\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]\n ] = \"class\",\n preflabel: Optional[str] = None,\n ) -> Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]:\n \"\"\"Create and return new entity\n\n Args:\n name: name of the entity\n 
parent: parent(s) of the entity\n entitytype: type of the entity,\n default is 'class' (str) 'ThingClass' (owlready2 Python class).\n Other options\n are 'data_property', 'object_property',\n 'annotation_property' (strings) or the\n Python classes ObjectPropertyClass,\n DataPropertyClass and AnnotationProperty classes.\n preflabel: if given, add this as a skos:prefLabel annotation\n to the new entity. If None (default), `name` will\n be added as prefLabel if skos:prefLabel is in the ontology\n and listed in `self.label_annotations`. Set `preflabel` to\n False, to avoid assigning a prefLabel.\n\n Returns:\n the new entity.\n\n Throws exception if name consists of more than one word, if type is not\n one of the allowed types, or if parent is not of the correct type.\n By default, the parent is Thing.\n\n \"\"\"\n # pylint: disable=invalid-name\n if \" \" in name:\n raise LabelDefinitionError(\n f\"Error in label name definition '{name}': \"\n f\"Label consists of more than one word.\"\n )\n parents = tuple(parent) if isinstance(parent, Iterable) else (parent,)\n if entitytype == \"class\":\n parenttype = owlready2.ThingClass\n elif entitytype == \"data_property\":\n parenttype = owlready2.DataPropertyClass\n elif entitytype == \"object_property\":\n parenttype = owlready2.ObjectPropertyClass\n elif entitytype == \"annotation_property\":\n parenttype = owlready2.AnnotationPropertyClass\n elif entitytype in [\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]:\n parenttype = entitytype\n else:\n raise EntityClassDefinitionError(\n f\"Error in entity type definition: \"\n f\"'{entitytype}' is not a valid entity type.\"\n )\n for thing in parents:\n if not isinstance(thing, parenttype):\n raise EntityClassDefinitionError(\n f\"Error in parent definition: \"\n f\"'{thing}' is not an {parenttype}.\"\n )\n\n with self:\n entity = types.new_class(name, parents)\n\n preflabel_iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n if preflabel:\n if not self.world[preflabel_iri]:\n pref_label = self.new_annotation_property(\n \"prefLabel\",\n parent=[owlready2.AnnotationProperty],\n )\n pref_label.iri = preflabel_iri\n entity.prefLabel = english(preflabel)\n elif (\n preflabel is None\n and preflabel_iri in self.label_annotations\n and self.world[preflabel_iri]\n ):\n entity.prefLabel = english(name)\n\n return entity\n\n # Method that creates new ThingClass using new_entity\n def new_class(\n self, name: str, parent: Union[ThingClass, Iterable]\n ) -> ThingClass:\n \"\"\"Create and return new class.\n\n Args:\n name: name of the class\n parent: parent(s) of the class\n\n Returns:\n the new class.\n \"\"\"\n return self.new_entity(name, parent, \"class\")\n\n # Method that creates new ObjectPropertyClass using new_entity\n def new_object_property(\n self, name: str, parent: Union[ObjectPropertyClass, Iterable]\n ) -> ObjectPropertyClass:\n \"\"\"Create and return new object property.\n\n Args:\n name: name of the object property\n parent: parent(s) of the object property\n\n Returns:\n the new object property.\n \"\"\"\n return self.new_entity(name, parent, \"object_property\")\n\n # Method that creates new DataPropertyClass using new_entity\n def new_data_property(\n self, name: str, parent: Union[DataPropertyClass, Iterable]\n ) -> DataPropertyClass:\n \"\"\"Create and return new data property.\n\n Args:\n name: name of the data property\n parent: parent(s) of the data property\n\n Returns:\n the new data property.\n \"\"\"\n return self.new_entity(name, 
parent, \"data_property\")\n\n # Method that creates new AnnotationPropertyClass using new_entity\n def new_annotation_property(\n self, name: str, parent: Union[AnnotationPropertyClass, Iterable]\n ) -> AnnotationPropertyClass:\n \"\"\"Create and return new annotation property.\n\n Args:\n name: name of the annotation property\n parent: parent(s) of the annotation property\n\n Returns:\n the new annotation property.\n \"\"\"\n return self.new_entity(name, parent, \"annotation_property\")\n\n def difference(self, other: owlready2.Ontology) -> set:\n \"\"\"Return a set of triples that are in this, but not in the\n `other` ontology.\"\"\"\n # pylint: disable=invalid-name\n s1 = set(self.get_unabbreviated_triples(blank=\"_:b\"))\n s2 = set(other.get_unabbreviated_triples(blank=\"_:b\"))\n return s1.difference(s2)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.colon_in_label","title":"colon_in_label
property
writable
","text":"Whether to accept colon in name-part of IRI. If true, the name cannot be prefixed.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_imported","title":"dir_imported
property
writable
","text":"Whether to include imported ontologies in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_label","title":"dir_label
property
writable
","text":"Whether to include entity label in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_name","title":"dir_name
property
writable
","text":"Whether to include entity name in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.dir_preflabel","title":"dir_preflabel
property
writable
","text":"Whether to include entity prefLabel in dir() listing.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.add_label_annotation","title":"add_label_annotation(self, iri)
","text":"Adds label annotation used by get_by_label().
Source code in ontopy/ontology.py
def add_label_annotation(self, iri):\n \"\"\"Adds label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.add_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n if iri not in self.label_annotations:\n self.label_annotations.append(iri)\n
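Since add_label_annotation() is deprecated, the recommended pattern is to modify the label_annotations attribute directly; a sketch assuming `onto` is a loaded ontology, with dcterms:title used purely as an example annotation IRI:

# Example: also let get_by_label() match dcterms:title annotations.
title_iri = "http://purl.org/dc/terms/title"
if title_iri not in onto.label_annotations:
    onto.label_annotations.append(title_iri)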
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.annotation_properties","title":"annotation_properties(self, imported=False)
","text":"Returns an generator over all annotation_properties.
Parameters:
imported: if `True`, entities in imported ontologies are also returned. Default: False.
Source code in ontopy/ontology.py
def annotation_properties(self, imported=False):\n \"\"\"Returns an generator over all annotation_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n\n \"\"\"\n return self._entities(\"annotation_properties\", imported=imported)\n
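A sketch listing annotation properties, including those from imported ontologies; the file name is a placeholder:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # placeholder file name

for prop in onto.annotation_properties(imported=True):
    # Fall back to the name-part of the IRI if no prefLabel is set.
    if hasattr(prop, "prefLabel") and prop.prefLabel:
        print(prop.prefLabel.first())
    else:
        print(prop.name)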
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.classes","title":"classes(self, imported=False)
","text":"Returns an generator over all classes.
Parameters:
imported: if `True`, entities in imported ontologies are also returned. Default: False.
Source code in ontopy/ontology.py
def classes(self, imported=False):\n \"\"\"Returns an generator over all classes.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"classes\", imported=imported)\n
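A sketch counting classes with and without imported ontologies (placeholder file name):

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()  # placeholder file name

local_classes = list(onto.classes(imported=False))
all_classes = list(onto.classes(imported=True))
print(f"{len(local_classes)} local classes, {len(all_classes)} including imports")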
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.closest_common_ancestor","title":"closest_common_ancestor(*classes)
staticmethod
","text":"Returns closest_common_ancestor for the given classes.
Source code in ontopy/ontology.py
@staticmethod\ndef closest_common_ancestor(*classes):\n \"\"\"Returns closest_common_ancestor for the given classes.\"\"\"\n mros = [cls.mro() for cls in classes]\n track = defaultdict(int)\n while mros:\n for mro in mros:\n cur = mro.pop(0)\n track[cur] += 1\n if track[cur] == len(classes):\n return cur\n if len(mro) == 0:\n mros.remove(mro)\n raise EMMOntoPyException(\n \"A closest common ancestor should always exist !\"\n )\n
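A usage sketch; the labels 'Electron' and 'Proton' are placeholders for two classes in the loaded ontology:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()   # placeholder file name
electron = onto.get_by_label("Electron")   # placeholder label
proton = onto.get_by_label("Proton")       # placeholder label

# Static method, so it may be called on the class or on an instance.
print(onto.closest_common_ancestor(electron, proton))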
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.closest_common_ancestors","title":"closest_common_ancestors(self, cls1, cls2)
","text":"Returns a list with closest_common_ancestor for cls1 and cls2
Source code in ontopy/ontology.py
def closest_common_ancestors(self, cls1, cls2):\n \"\"\"Returns a list with closest_common_ancestor for cls1 and cls2\"\"\"\n distances = {}\n for ancestor in self.common_ancestors(cls1, cls2):\n distances[ancestor] = self.number_of_generations(\n cls1, ancestor\n ) + self.number_of_generations(cls2, ancestor)\n return [\n ancestor\n for ancestor, distance in distances.items()\n if distance == min(distances.values())\n ]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.common_ancestors","title":"common_ancestors(cls1, cls2)
staticmethod
","text":"Return a list of common ancestors for cls1
and cls2
.
ontopy/ontology.py
@staticmethod\ndef common_ancestors(cls1, cls2):\n \"\"\"Return a list of common ancestors for `cls1` and `cls2`.\"\"\"\n return set(cls1.ancestors()).intersection(cls2.ancestors())\n
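A sketch contrasting all common ancestors with the closest ones, assuming `onto` is loaded as in the previous sketch and using the same placeholder labels:

electron = onto.get_by_label("Electron")   # placeholder label
proton = onto.get_by_label("Proton")       # placeholder label

all_common = onto.common_ancestors(electron, proton)
closest = onto.closest_common_ancestors(electron, proton)

# The closest common ancestors are always a subset of all common ancestors.
assert set(closest) <= set(all_common)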
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.copy","title":"copy(self)
","text":"Return a copy of the ontology.
Source code in ontopy/ontology.py
def copy(self):\n \"\"\"Return a copy of the ontology.\"\"\"\n with tempfile.TemporaryDirectory() as dirname:\n filename = self.save(\n dir=dirname,\n format=\"turtle\",\n recursive=True,\n write_catalog_file=True,\n mkdir=True,\n )\n ontology = get_ontology(filename).load()\n ontology.name = self.name\n return ontology\n
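A sketch that works on a copy so the original ontology stays untouched; file names and the 'Atom' label are placeholders:

from ontopy import get_ontology

onto = get_ontology("myonto.ttl").load()   # placeholder file name

onto_copy = onto.copy()
onto_copy.new_entity("MyTestClass", parent=onto_copy.get_by_label("Atom"))
onto_copy.save("myonto-copy.ttl", format="turtle")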
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.data_properties","title":"data_properties(self, imported=False)
","text":"Returns an generator over all data_properties.
Parameters:
imported: if `True`, entities in imported ontologies are also returned. Default: False.
Source code in ontopy/ontology.py
def data_properties(self, imported=False):\n \"\"\"Returns an generator over all data_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"data_properties\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.difference","title":"difference(self, other)
","text":"Return a set of triples that are in this, but not in the other
ontology.
ontopy/ontology.py
def difference(self, other: owlready2.Ontology) -> set:\n \"\"\"Return a set of triples that are in this, but not in the\n `other` ontology.\"\"\"\n # pylint: disable=invalid-name\n s1 = set(self.get_unabbreviated_triples(blank=\"_:b\"))\n s2 = set(other.get_unabbreviated_triples(blank=\"_:b\"))\n return s1.difference(s2)\n
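A sketch comparing two versions of an ontology; both file names are placeholders:

from ontopy import get_ontology

old = get_ontology("myonto-v1.ttl").load()   # placeholder file names
new = get_ontology("myonto-v2.ttl").load()

added = new.difference(old)      # triples only present in the new version
removed = old.difference(new)    # triples only present in the old version
print(f"{len(added)} added triples, {len(removed)} removed triples")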
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_ancestors","title":"get_ancestors(self, classes, closest=False, generations=None, strict=True)
","text":"Return ancestors of all classes in classes
.
Parameters:
classes (Union[List, ThingClass]): class(es) for which ancestors should be returned. Required.
generations (int): Include this number of generations, default is all. Default: None.
closest (bool): If True, return all ancestors up to and including the closest common ancestor. Return all if False. Default: False.
strict (bool): If True, returns only real ancestors, i.e. `classes` are not included in the returned set. Default: True.
Returns:
set: Set of ancestors to `classes`.
ontopy/ontology.py
def get_ancestors(\n self,\n classes: \"Union[List, ThingClass]\",\n closest: bool = False,\n generations: int = None,\n strict: bool = True,\n) -> set:\n \"\"\"Return ancestors of all classes in `classes`.\n Args:\n classes: class(es) for which ancestors should be returned.\n generations: Include this number of generations, default is all.\n closest: If True, return all ancestors up to and including the\n closest common ancestor. Return all if False.\n strict: If True returns only real ancestors, i.e. `classes` are\n are not included in the returned set.\n Returns:\n Set of ancestors to `classes`.\n \"\"\"\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n ancestors = set()\n if not classes:\n return ancestors\n\n def addancestors(entity, counter, subject):\n if counter > 0:\n for parent in entity.get_parents(strict=True):\n subject.add(parent)\n addancestors(parent, counter - 1, subject)\n\n if closest:\n if generations is not None:\n raise ValueError(\n \"Only one of `generations` or `closest` may be specified.\"\n )\n\n closest_ancestor = self.closest_common_ancestor(*classes)\n for cls in classes:\n ancestors.update(\n anc\n for anc in cls.ancestors()\n if closest_ancestor in anc.ancestors()\n )\n elif isinstance(generations, int):\n for entity in classes:\n addancestors(entity, generations, ancestors)\n else:\n ancestors.update(*(cls.ancestors() for cls in classes))\n\n if strict:\n return ancestors.difference(classes)\n return ancestors\n
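A usage sketch, assuming onto is a loaded ontology and Atom is an existing label (illustrative):

atom = onto.get_by_label("Atom")
all_ancestors = onto.get_ancestors(atom)                  # all superclasses, excluding Atom itself
near_ancestors = onto.get_ancestors(atom, generations=2)  # only the two nearest generations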
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_annotations","title":"get_annotations(self, entity)
","text":"Returns a dict with annotations for entity
. Entity may be given either as a ThingClass object or as a label.
ontopy/ontology.py
def get_annotations(self, entity):\n \"\"\"Returns a dict with annotations for `entity`. Entity may be given\n either as a ThingClass object or as a label.\"\"\"\n warnings.warn(\n \"Ontology.get_annotations(entity) is deprecated. Use \"\n \"entity.get_annotations() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n res = {\"comment\": getattr(entity, \"comment\", \"\")}\n for annotation in self.annotation_properties():\n res[annotation.label.first()] = [\n obj.strip('\"')\n for _, _, obj in self.get_triples(\n entity.storid, annotation.storid, None\n )\n ]\n return res\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_branch","title":"get_branch(self, root, leaves=(), include_leaves=True, strict_leaves=False, exclude=None, sort=False)
","text":"Returns a set with all direct and indirect subclasses of root
. Any subclass found in the sequence leaves
will be included in the returned list, but its subclasses will not. The elements of leaves
may be ThingClass objects or labels.
Subclasses of any subclass found in the sequence leaves
will be excluded from the returned list, where the elements of leaves
may be ThingClass objects or labels.
If include_leaves
is true, the leaves are included in the returned list, otherwise they are not.
If strict_leaves
is true, any descendant of a leaf will be excluded in the returned set.
If given, exclude
may be a sequence of classes, including their subclasses, to exclude from the output.
If sort
is True, a list sorted according to depth and label will be returned instead of a set.
ontopy/ontology.py
def get_branch( # pylint: disable=too-many-arguments\n self,\n root,\n leaves=(),\n include_leaves=True,\n strict_leaves=False,\n exclude=None,\n sort=False,\n):\n \"\"\"Returns a set with all direct and indirect subclasses of `root`.\n Any subclass found in the sequence `leaves` will be included in\n the returned list, but its subclasses will not. The elements\n of `leaves` may be ThingClass objects or labels.\n\n Subclasses of any subclass found in the sequence `leaves` will\n be excluded from the returned list, where the elements of `leaves`\n may be ThingClass objects or labels.\n\n If `include_leaves` is true, the leaves are included in the returned\n list, otherwise they are not.\n\n If `strict_leaves` is true, any descendant of a leaf will be excluded\n in the returned set.\n\n If given, `exclude` may be a sequence of classes, including\n their subclasses, to exclude from the output.\n\n If `sort` is True, a list sorted according to depth and label\n will be returned instead of a set.\n \"\"\"\n\n def _branch(root, leaves):\n if root not in leaves:\n branch = {\n root,\n }\n for cls in root.subclasses():\n # Defining a branch is actually quite tricky. Consider\n # the case:\n #\n # L isA R\n # A isA L\n # A isA R\n #\n # where R is the root, L is a leaf and A is a direct\n # child of both. Logically, since A is a child of the\n # leaf we want to skip A. But a strait forward imple-\n # mentation will see that A is a child of the root and\n # include it. Requireing that the R should be a strict\n # parent of A solves this.\n if root in cls.get_parents(strict=True):\n branch.update(_branch(cls, leaves))\n else:\n branch = (\n {\n root,\n }\n if include_leaves\n else set()\n )\n return branch\n\n if isinstance(root, str):\n root = self.get_by_label(root)\n\n leaves = set(\n self.get_by_label(leaf) if isinstance(leaf, str) else leaf\n for leaf in leaves\n )\n leaves.discard(root)\n\n if exclude:\n exclude = set(\n self.get_by_label(e) if isinstance(e, str) else e\n for e in exclude\n )\n leaves.update(exclude)\n\n branch = _branch(root, leaves)\n\n # Exclude all descendants of any leaf\n if strict_leaves:\n descendants = root.descendants()\n for leaf in leaves:\n if leaf in descendants:\n branch.difference_update(\n leaf.descendants(include_self=False)\n )\n\n if exclude:\n branch.difference_update(exclude)\n\n # Sort according to depth, then by label\n if sort:\n branch = sorted(\n sorted(branch, key=asstring),\n key=lambda x: len(x.mro()),\n )\n\n return branch\n
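A usage sketch, assuming onto is a loaded ontology with the (illustrative) labels Material and Molecule:

# All subclasses of Material, but do not descend below Molecule
branch = onto.get_branch(
    onto.Material,        # root; may also be given as a label string
    leaves=["Molecule"],  # subclasses of these leaves are not expanded
    sort=True,            # return a list sorted by depth and label
)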
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_by_label","title":"get_by_label(self, label, label_annotations=None, prefix=None, imported=True, colon_in_label=None)
","text":"Returns entity with label annotation label
.
Parameters:
label (str): label to search for. May be written as 'label' or 'prefix:label'. get_by_label('prefix:label') == get_by_label('label', prefix='prefix'). Required.
label_annotations (str): a sequence of label annotation names to look up. Defaults to the label_annotations property. Default: None.
prefix (str): if provided, it should be the last component of the base iri of an ontology (with trailing slash (/) or hash (#) stripped off). The search for a matching label will be limited to this namespace. Default: None.
imported (bool): Whether to also look for label in imported ontologies. Default: True.
colon_in_label (bool): Whether to accept colon (:) in a label or name-part of IRI. Defaults to the colon_in_label property of self. Setting this true cannot be combined with prefix. Default: None.
If several entities have the same label, only the one which is found first is returned. Use get_by_label_all() to get all matches.
Note, if different prefixes are provided in the label and via the prefix
argument a warning will be issued and the prefix
argument will take precedence.
A NoSuchLabelError is raised if label
cannot be found.
ontopy/ontology.py
def get_by_label(\n self,\n label: str,\n label_annotations: str = None,\n prefix: str = None,\n imported: bool = True,\n colon_in_label: bool = None,\n):\n \"\"\"Returns entity with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'.\n get_by_label('prefix:label') ==\n get_by_label('label', prefix='prefix').\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n imported: Whether to also look for `label` in imported ontologies.\n colon_in_label: Whether to accept colon (:) in a label or name-part\n of IRI. Defaults to the `colon_in_label` property of `self`.\n Setting this true cannot be combined with `prefix`.\n\n If several entities have the same label, only the one which is\n found first is returned.Use get_by_label_all() to get all matches.\n\n Note, if different prefixes are provided in the label and via\n the `prefix` argument a warning will be issued and the\n `prefix` argument will take precedence.\n\n A NoSuchLabelError is raised if `label` cannot be found.\n \"\"\"\n # pylint: disable=too-many-arguments,too-many-branches,invalid-name\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, must be a string: '{label}'\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n if colon_in_label is None:\n colon_in_label = self._colon_in_label\n if colon_in_label:\n if prefix:\n raise ValueError(\n \"`prefix` cannot be combined with `colon_in_label`\"\n )\n else:\n splitlabel = label.split(\":\", 1)\n if len(splitlabel) == 2 and not splitlabel[1].startswith(\"//\"):\n label = splitlabel[1]\n if prefix and prefix != splitlabel[0]:\n warnings.warn(\n f\"Prefix given both as argument ({prefix}) \"\n f\"and in label ({splitlabel[0]}). \"\n \"Prefix given in argument takes precedence. \"\n )\n if not prefix:\n prefix = splitlabel[0]\n\n if prefix:\n entityset = self.get_by_label_all(\n label,\n label_annotations=label_annotations,\n prefix=prefix,\n )\n if len(entityset) == 1:\n return entityset.pop()\n if len(entityset) > 1:\n raise AmbiguousLabelError(\n f\"Several entities have the same label '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n raise NoSuchLabelError(\n f\"No label annotations matches for '{label}' \"\n f\"with prefix '{prefix}'.\"\n )\n\n # Label is a full IRI\n entity = self.world[label]\n if entity:\n return entity\n\n get_triples = (\n self.world._get_data_triples_spod_spod\n if imported\n else self._get_data_triples_spod_spod\n )\n\n for storid in self._to_storids(label_annotations):\n for s, _, _, _ in get_triples(None, storid, label, None):\n return self.world[self._unabbreviate(s)]\n\n # Special labels\n if self._special_labels and label in self._special_labels:\n return self._special_labels[label]\n\n # Check if label is a name under base_iri\n entity = self.world[self.base_iri + label]\n if entity:\n return entity\n\n # Check label is the name of an entity\n for entity in self.get_entities(imported=imported):\n if label == entity.name:\n return entity\n\n raise NoSuchLabelError(f\"No label annotations matches '{label}'\")\n
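A usage sketch; the label Atom and the prefix emmo are illustrative and assume an EMMO-based ontology loaded as onto:

atom = onto.get_by_label("Atom")                 # look up by label annotation (e.g. skos:prefLabel)
same = onto.get_by_label("Atom", prefix="emmo")  # restrict the search to the 'emmo' namespace
also = onto.get_by_label("emmo:Atom")            # equivalent prefix:label form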
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_by_label_all","title":"get_by_label_all(self, label, label_annotations=None, prefix=None, exact_match=False)
","text":"Returns set of entities with label annotation label
.
Parameters:
label: label to search for. May be written as 'label' or 'prefix:label'. Wildcard matching using glob pattern is also supported if exact_match is set to false.
label_annotations: a sequence of label annotation names to look up. Defaults to the label_annotations property. Default: None.
prefix: if provided, it should be the last component of the base iri of an ontology (with trailing slash (/) or hash (#) stripped off). The search for a matching label will be limited to this namespace. Default: None.
exact_match: Do not treat \"*\" and brackets as special characters when matching. May be useful if your ontology has labels containing such characters. Default: False.
Returns:
Set[Optional[owlready2.entity.EntityClass]]: Set of all matching entities or an empty set if no matches could be found.
Source code in ontopy/ontology.py
def get_by_label_all(\n self,\n label,\n label_annotations=None,\n prefix=None,\n exact_match=False,\n) -> \"Set[Optional[owlready2.entity.EntityClass]]\":\n \"\"\"Returns set of entities with label annotation `label`.\n\n Arguments:\n label: label so search for.\n May be written as 'label' or 'prefix:label'. Wildcard matching\n using glob pattern is also supported if `exact_match` is set to\n false.\n label_annotations: a sequence of label annotation names to look up.\n Defaults to the `label_annotations` property.\n prefix: if provided, it should be the last component of\n the base iri of an ontology (with trailing slash (/) or hash\n (#) stripped off). The search for a matching label will be\n limited to this namespace.\n exact_match: Do not treat \"*\" and brackets as special characters\n when matching. May be useful if your ontology has labels\n containing such labels.\n\n Returns:\n Set of all matching entities or an empty set if no matches\n could be found.\n \"\"\"\n if not isinstance(label, str):\n raise TypeError(\n f\"Invalid label definition, \" f\"must be a string: {label!r}\"\n )\n if \" \" in label:\n raise ValueError(\n f\"Invalid label definition, {label!r} contains spaces.\"\n )\n\n if label_annotations is None:\n label_annotations = self.label_annotations\n\n entities = set()\n\n # Check label annotations\n if exact_match:\n for storid in self._to_storids(label_annotations):\n entities.update(\n self.world._get_by_storid(s)\n for s, _, _ in self.world._get_data_triples_spod_spod(\n None, storid, str(label), None\n )\n )\n else:\n for storid in self._to_storids(label_annotations):\n label_entity = self._unabbreviate(storid)\n key = (\n label_entity.name\n if hasattr(label_entity, \"name\")\n else label_entity\n )\n entities.update(self.world.search(**{key: label}))\n\n if self._special_labels and label in self._special_labels:\n entities.update(self._special_labels[label])\n\n # Check name-part of IRI\n if exact_match:\n entities.update(\n ent for ent in self.get_entities() if ent.name == str(label)\n )\n else:\n matches = fnmatch.filter(\n (ent.name for ent in self.get_entities()), label\n )\n entities.update(\n ent for ent in self.get_entities() if ent.name in matches\n )\n\n if prefix:\n return set(\n ent\n for ent in entities\n if ent.namespace.ontology.prefix == prefix\n )\n return entities\n
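A usage sketch (onto and the labels are assumed and illustrative):

matches = onto.get_by_label_all("*Material*")                # glob pattern matching
exact = onto.get_by_label_all("Material", exact_match=True)  # '*' and brackets taken literally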
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_descendants","title":"get_descendants(self, classes, generations=None, common=False)
","text":"Return descendants/subclasses of all classes in classes
.
Parameters:
classes (Union[List, ThingClass]): class(es) for which descendants are desired. Required.
common (bool): whether to only return descendants common to all classes. Default: False.
generations (int): Include this number of generations, default is all. Default: None.
Returns:
set: A set of descendants for given number of generations. If 'common'=True, the common descendants are returned within the specified number of generations. 'generations' defaults to all.
Source code in ontopy/ontology.py
def get_descendants(\n self,\n classes: \"Union[List, ThingClass]\",\n generations: int = None,\n common: bool = False,\n) -> set:\n \"\"\"Return descendants/subclasses of all classes in `classes`.\n Args:\n classes: class(es) for which descendants are desired.\n common: whether to only return descendants common to all classes.\n generations: Include this number of generations, default is all.\n Returns:\n A set of descendants for given number of generations.\n If 'common'=True, the common descendants are returned\n within the specified number of generations.\n 'generations' defaults to all.\n \"\"\"\n\n if not isinstance(classes, Iterable):\n classes = [classes]\n\n descendants = {name: [] for name in classes}\n\n def _children_recursively(num, newentity, parent, descendants):\n \"\"\"Helper function to get all children up to generation.\"\"\"\n for child in self.get_children_of(newentity):\n descendants[parent].append(child)\n if num < generations:\n _children_recursively(num + 1, child, parent, descendants)\n\n if generations == 0:\n return set()\n\n if not generations:\n for entity in classes:\n descendants[entity] = entity.descendants()\n # only include proper descendants\n descendants[entity].remove(entity)\n else:\n for entity in classes:\n _children_recursively(1, entity, entity, descendants)\n\n results = descendants.values()\n if common is True:\n return set.intersection(*map(set, results))\n return set(flatten(results))\n
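A usage sketch (onto and the labels are assumed and illustrative):

children = onto.get_descendants([onto.Atom, onto.Molecule], generations=1)  # direct children only
shared = onto.get_descendants([onto.Atom, onto.Molecule], common=True)      # descendants common to both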
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_entities","title":"get_entities(self, imported=True, classes=True, individuals=True, object_properties=True, data_properties=True, annotation_properties=True)
","text":"Return a generator over (optionally) all classes, individuals, object_properties, data_properties and annotation_properties.
If imported
is True
, entities in imported ontologies will also be included.
ontopy/ontology.py
def get_entities( # pylint: disable=too-many-arguments\n self,\n imported=True,\n classes=True,\n individuals=True,\n object_properties=True,\n data_properties=True,\n annotation_properties=True,\n):\n \"\"\"Return a generator over (optionally) all classes, individuals,\n object_properties, data_properties and annotation_properties.\n\n If `imported` is `True`, entities in imported ontologies will also\n be included.\n \"\"\"\n generator = []\n if classes:\n generator.append(self.classes(imported))\n if individuals:\n generator.append(self.individuals(imported))\n if object_properties:\n generator.append(self.object_properties(imported))\n if data_properties:\n generator.append(self.data_properties(imported))\n if annotation_properties:\n generator.append(self.annotation_properties(imported))\n yield from itertools.chain(*generator)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_graph","title":"get_graph(self, **kwargs)
","text":"Returns a new graph object. See emmo.graph.OntoGraph.
Note that this method requires the Python graphviz package.
Source code in ontopy/ontology.py
def get_graph(self, **kwargs):\n \"\"\"Returns a new graph object. See emmo.graph.OntoGraph.\n\n Note that this method requires the Python graphviz package.\n \"\"\"\n # pylint: disable=import-outside-toplevel,cyclic-import\n from ontopy.graph import OntoGraph\n\n return OntoGraph(self, **kwargs)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_imported_ontologies","title":"get_imported_ontologies(self, recursive=False)
","text":"Return a list with imported ontologies.
If recursive
is True
, ontologies imported by imported ontologies are also returned.
ontopy/ontology.py
def get_imported_ontologies(self, recursive=False):\n \"\"\"Return a list with imported ontologies.\n\n If `recursive` is `True`, ontologies imported by imported ontologies\n are also returned.\n \"\"\"\n\n def rec_imported(onto):\n for ontology in onto.imported_ontologies:\n # pylint: disable=possibly-used-before-assignment\n if ontology not in imported:\n imported.add(ontology)\n rec_imported(ontology)\n\n if recursive:\n imported = set()\n rec_imported(self)\n return list(imported)\n\n return self.imported_ontologies\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_relations","title":"get_relations(self)
","text":"Returns a generator for all relations.
Source code in ontopy/ontology.py
def get_relations(self):\n \"\"\"Returns a generator for all relations.\"\"\"\n warnings.warn(\n \"Ontology.get_relations() is deprecated. Use \"\n \"onto.object_properties() instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self.object_properties()\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_root_classes","title":"get_root_classes(self, imported=False)
","text":"Returns a list or root classes.
Source code inontopy/ontology.py
def get_root_classes(self, imported=False):\n \"\"\"Returns a list of root classes.\"\"\"\n return [\n cls\n for cls in self.classes(imported=imported)\n if not cls.ancestors().difference(set([cls, owlready2.Thing]))\n ]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_root_data_properties","title":"get_root_data_properties(self, imported=False)
","text":"Returns a list of root object properties.
Source code inontopy/ontology.py
def get_root_data_properties(self, imported=False):\n \"\"\"Returns a list of root data properties.\"\"\"\n props = set(self.data_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_root_object_properties","title":"get_root_object_properties(self, imported=False)
","text":"Returns a list of root object properties.
Source code in ontopy/ontology.py
def get_root_object_properties(self, imported=False):\n \"\"\"Returns a list of root object properties.\"\"\"\n props = set(self.object_properties(imported=imported))\n return [p for p in props if not props.intersection(p.is_a)]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_roots","title":"get_roots(self, imported=False)
","text":"Returns all class, object_property and data_property roots.
Source code in ontopy/ontology.py
def get_roots(self, imported=False):\n \"\"\"Returns all class, object_property and data_property roots.\"\"\"\n roots = self.get_root_classes(imported=imported)\n roots.extend(self.get_root_object_properties(imported=imported))\n roots.extend(self.get_root_data_properties(imported=imported))\n return roots\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_unabbreviated_triples","title":"get_unabbreviated_triples(self, subject=None, predicate=None, obj=None, blank=None)
","text":"Returns all matching triples unabbreviated.
If blank
is given, it will be used to represent blank nodes.
ontopy/ontology.py
def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n):\n \"\"\"Returns all matching triples unabbreviated.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n # pylint: disable=invalid-name\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_version","title":"get_version(self, as_iri=False)
","text":"Returns the version number of the ontology as inferred from the owl:versionIRI tag or, if owl:versionIRI is not found, from owl:versionINFO.
If as_iri
is True, the full versionIRI is returned.
ontopy/ontology.py
def get_version(self, as_iri=False) -> str:\n \"\"\"Returns the version number of the ontology as inferred from the\n owl:versionIRI tag or, if owl:versionIRI is not found, from\n owl:versionINFO.\n\n If `as_iri` is True, the full versionIRI is returned.\n \"\"\"\n version_iri_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionIRI\"\n )\n tokens = self.get_triples(s=self.storid, p=version_iri_storid)\n if (not tokens) and (as_iri is True):\n raise TypeError(\n \"No owl:versionIRI \"\n f\"in Ontology {self.base_iri!r}. \"\n \"Search for owl:versionInfo with as_iri=False\"\n )\n if tokens:\n _, _, obj = tokens[0]\n version_iri = self.world._unabbreviate(obj)\n if as_iri:\n return version_iri\n return infer_version(self.base_iri, version_iri)\n\n version_info_storid = self.world._abbreviate(\n \"http://www.w3.org/2002/07/owl#versionInfo\"\n )\n tokens = self.get_triples(s=self.storid, p=version_info_storid)\n if not tokens:\n raise TypeError(\n \"No versionIRI or versionInfo \" f\"in Ontology {self.base_iri!r}\"\n )\n _, _, version_info = tokens[0]\n return version_info.split(\"^^\")[0].strip('\"')\n
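A usage sketch, assuming onto declares owl:versionIRI or owl:versionInfo:

version = onto.get_version()                  # plain version number string
version_iri = onto.get_version(as_iri=True)   # the full owl:versionIRI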
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.get_wu_palmer_measure","title":"get_wu_palmer_measure(self, cls1, cls2)
","text":"Return Wu-Palmer measure for semantic similarity.
Returns Wu-Palmer measure for semantic similarity between two concepts. Wu, Palmer; ACL 94: Proceedings of the 32nd annual meeting on Association for Computational Linguistics, June 1994.
Source code in ontopy/ontology.py
def get_wu_palmer_measure(self, cls1, cls2):\n \"\"\"Return Wu-Palmer measure for semantic similarity.\n\n Returns Wu-Palmer measure for semantic similarity between\n two concepts.\n Wu, Palmer; ACL 94: Proceedings of the 32nd annual meeting on\n Association for Computational Linguistics, June 1994.\n \"\"\"\n cca = self.closest_common_ancestor(cls1, cls2)\n ccadepth = self.number_of_generations(cca, self.Thing)\n generations1 = self.number_of_generations(cls1, cca)\n generations2 = self.number_of_generations(cls2, cca)\n return 2 * ccadepth / (generations1 + generations2 + 2 * ccadepth)\n
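As the source shows, the measure is 2*N / (N1 + N2 + 2*N), where N is the depth of the closest common ancestor and N1, N2 are the number of generations from each class up to it. A usage sketch (onto and the labels are assumed):

similarity = onto.get_wu_palmer_measure(onto.Atom, onto.Molecule)
print(similarity)  # value in (0, 1]; larger means more similar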
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.individuals","title":"individuals(self, imported=False)
","text":"Returns an generator over all individuals.
Parameters:
Name Type Description Defaultimported
if True
, entities in imported ontologies are also returned.
False
Source code in ontopy/ontology.py
def individuals(self, imported=False):\n \"\"\"Returns an generator over all individuals.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"individuals\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.is_defined","title":"is_defined(self, entity)
","text":"Returns true if the entity is a defined class.
Deprecated, use the is_defined
property of the classes (ThingClass subclasses) instead.
ontopy/ontology.py
def is_defined(self, entity):\n \"\"\"Returns true if the entity is a defined class.\n\n Deprecated, use the `is_defined` property of the classes\n (ThingClass subclasses) instead.\n \"\"\"\n warnings.warn(\n \"This method is deprecated. Use the `is_defined` property of \"\n \"the classes instad.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return hasattr(entity, \"equivalent_to\") and bool(entity.equivalent_to)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.is_individual","title":"is_individual(self, entity)
","text":"Returns true if entity is an individual.
Source code in ontopy/ontology.py
def is_individual(self, entity):\n \"\"\"Returns true if entity is an individual.\"\"\"\n if isinstance(entity, str):\n entity = self.get_by_label(entity)\n return isinstance(entity, owlready2.Thing)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.load","title":"load(self, only_local=False, filename=None, format=None, reload=None, reload_if_newer=False, url_from_catalog=None, catalog_file='catalog-v001.xml', emmo_based=True, prefix=None, prefix_emmo=None, **kwargs)
","text":"Load the ontology.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.load--arguments","title":"Arguments","text":"bool
Whether to only read local files. This requires that you have appended the path to the ontology to owlready2.onto_path.
str
Path to file to load the ontology from. Defaults to base_iri
provided to get_ontology().
str
Format of filename
. Default is inferred from filename
extension.
bool
Whether to reload the ontology if it is already loaded.
bool
Whether to reload the ontology if the source has changed since last time it was loaded.
bool | None
Whether to use catalog file to resolve the location of base_iri
. If None, the catalog file is used if it exists in the same directory as filename
.
str
Name of Prot\u00e8g\u00e8 catalog file in the same folder as the ontology. This option is used together with only_local
and defaults to \"catalog-v001.xml\".
bool
Whether this is an EMMO-based ontology or not, default True
.
prefix: defaults to self.get_namespace.name if
bool, default None. If emmo_based is True it
defaults to True and sets the prefix of all imported ontologies with base_iri starting with 'http://emmo.info/emmo' to emmo
Kwargs
Additional keyword arguments are passed on to owlready2.Ontology.load().
Source code inontopy/ontology.py
def load( # pylint: disable=too-many-arguments,arguments-renamed\n self,\n only_local=False,\n filename=None,\n format=None, # pylint: disable=redefined-builtin\n reload=None,\n reload_if_newer=False,\n url_from_catalog=None,\n catalog_file=\"catalog-v001.xml\",\n emmo_based=True,\n prefix=None,\n prefix_emmo=None,\n **kwargs,\n):\n \"\"\"Load the ontology.\n\n Arguments\n ---------\n only_local: bool\n Whether to only read local files. This requires that you\n have appended the path to the ontology to owlready2.onto_path.\n filename: str\n Path to file to load the ontology from. Defaults to `base_iri`\n provided to get_ontology().\n format: str\n Format of `filename`. Default is inferred from `filename`\n extension.\n reload: bool\n Whether to reload the ontology if it is already loaded.\n reload_if_newer: bool\n Whether to reload the ontology if the source has changed since\n last time it was loaded.\n url_from_catalog: bool | None\n Whether to use catalog file to resolve the location of `base_iri`.\n If None, the catalog file is used if it exists in the same\n directory as `filename`.\n catalog_file: str\n Name of Prot\u00e8g\u00e8 catalog file in the same folder as the\n ontology. This option is used together with `only_local` and\n defaults to \"catalog-v001.xml\".\n emmo_based: bool\n Whether this is an EMMO-based ontology or not, default `True`.\n prefix: defaults to self.get_namespace.name if\n prefix_emmo: bool, default None. If emmo_based is True it\n defaults to True and sets the prefix of all imported ontologies\n with base_iri starting with 'http://emmo.info/emmo' to emmo\n kwargs:\n Additional keyword arguments are passed on to\n owlready2.Ontology.load().\n \"\"\"\n # TODO: make sure that `only_local` argument is respected...\n\n if self.loaded:\n return self\n self._load(\n only_local=only_local,\n filename=filename,\n format=format,\n reload=reload,\n reload_if_newer=reload_if_newer,\n url_from_catalog=url_from_catalog,\n catalog_file=catalog_file,\n **kwargs,\n )\n\n # Enable optimised search by get_by_label()\n if self._special_labels is None and emmo_based:\n top = self.world[\"http://www.w3.org/2002/07/owl#topObjectProperty\"]\n self._special_labels = {\n \"Thing\": owlready2.Thing,\n \"Nothing\": owlready2.Nothing,\n \"topObjectProperty\": top,\n \"owl:Thing\": owlready2.Thing,\n \"owl:Nothing\": owlready2.Nothing,\n \"owl:topObjectProperty\": top,\n }\n # set prefix if another prefix is desired\n # if we do this, shouldn't we make the name of all\n # entities of the given ontology to the same?\n if prefix:\n self.prefix = prefix\n else:\n self.prefix = self.name\n\n if emmo_based and prefix_emmo is None:\n prefix_emmo = True\n if prefix_emmo:\n self.set_common_prefix()\n\n return self\n
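A minimal loading sketch (the IRI is illustrative; a local file path can be used instead):

from ontopy import get_ontology

onto = get_ontology("http://emmo.info/emmo").load()
atom = onto.Atom  # entities are accessible by label after loading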
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_annotation_property","title":"new_annotation_property(self, name, parent)
","text":"Create and return new annotation property.
Parameters:
name (str): name of the annotation property. Required.
parent (Union[owlready2.annotation.AnnotationPropertyClass, collections.abc.Iterable]): parent(s) of the annotation property. Required.
Returns:
AnnotationPropertyClass: the new annotation property.
Source code in ontopy/ontology.py
def new_annotation_property(\n self, name: str, parent: Union[AnnotationPropertyClass, Iterable]\n) -> AnnotationPropertyClass:\n \"\"\"Create and return new annotation property.\n\n Args:\n name: name of the annotation property\n parent: parent(s) of the annotation property\n\n Returns:\n the new annotation property.\n \"\"\"\n return self.new_entity(name, parent, \"annotation_property\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_class","title":"new_class(self, name, parent)
","text":"Create and return new class.
Parameters:
name (str): name of the class. Required.
parent (Union[owlready2.entity.ThingClass, collections.abc.Iterable]): parent(s) of the class. Required.
Returns:
ThingClass: the new class.
Source code in ontopy/ontology.py
def new_class(\n self, name: str, parent: Union[ThingClass, Iterable]\n) -> ThingClass:\n \"\"\"Create and return new class.\n\n Args:\n name: name of the class\n parent: parent(s) of the class\n\n Returns:\n the new class.\n \"\"\"\n return self.new_entity(name, parent, \"class\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_data_property","title":"new_data_property(self, name, parent)
","text":"Create and return new data property.
Parameters:
name (str): name of the data property. Required.
parent (Union[owlready2.prop.DataPropertyClass, collections.abc.Iterable]): parent(s) of the data property. Required.
Returns:
DataPropertyClass: the new data property.
Source code in ontopy/ontology.py
def new_data_property(\n self, name: str, parent: Union[DataPropertyClass, Iterable]\n) -> DataPropertyClass:\n \"\"\"Create and return new data property.\n\n Args:\n name: name of the data property\n parent: parent(s) of the data property\n\n Returns:\n the new data property.\n \"\"\"\n return self.new_entity(name, parent, \"data_property\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_entity","title":"new_entity(self, name, parent, entitytype='class', preflabel=None)
","text":"Create and return new entity
Parameters:
name (str): name of the entity. Required.
parent (Union[owlready2.entity.ThingClass, owlready2.prop.ObjectPropertyClass, owlready2.prop.DataPropertyClass, owlready2.annotation.AnnotationPropertyClass, collections.abc.Iterable]): parent(s) of the entity. Required.
entitytype (Union[str, owlready2.entity.ThingClass, owlready2.prop.ObjectPropertyClass, owlready2.prop.DataPropertyClass, owlready2.annotation.AnnotationPropertyClass]): type of the entity; default is 'class' (str) or ThingClass (the owlready2 Python class). Other options are 'data_property', 'object_property', 'annotation_property' (strings) or the Python classes ObjectPropertyClass, DataPropertyClass and AnnotationPropertyClass. Default: 'class'.
preflabel (Optional[str]): if given, add this as a skos:prefLabel annotation to the new entity. If None (default), name will be added as prefLabel if skos:prefLabel is in the ontology and listed in self.label_annotations. Set preflabel to False to avoid assigning a prefLabel. Default: None.
Returns:
Union[owlready2.entity.ThingClass, owlready2.prop.ObjectPropertyClass, owlready2.prop.DataPropertyClass, owlready2.annotation.AnnotationPropertyClass]: the new entity.
Throws an exception if name consists of more than one word, if type is not one of the allowed types, or if parent is not of the correct type. By default, the parent is Thing.
Source code in ontopy/ontology.py
def new_entity(\n self,\n name: str,\n parent: Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n Iterable,\n ],\n entitytype: Optional[\n Union[\n str,\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]\n ] = \"class\",\n preflabel: Optional[str] = None,\n) -> Union[\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n]:\n \"\"\"Create and return new entity\n\n Args:\n name: name of the entity\n parent: parent(s) of the entity\n entitytype: type of the entity,\n default is 'class' (str) 'ThingClass' (owlready2 Python class).\n Other options\n are 'data_property', 'object_property',\n 'annotation_property' (strings) or the\n Python classes ObjectPropertyClass,\n DataPropertyClass and AnnotationProperty classes.\n preflabel: if given, add this as a skos:prefLabel annotation\n to the new entity. If None (default), `name` will\n be added as prefLabel if skos:prefLabel is in the ontology\n and listed in `self.label_annotations`. Set `preflabel` to\n False, to avoid assigning a prefLabel.\n\n Returns:\n the new entity.\n\n Throws exception if name consists of more than one word, if type is not\n one of the allowed types, or if parent is not of the correct type.\n By default, the parent is Thing.\n\n \"\"\"\n # pylint: disable=invalid-name\n if \" \" in name:\n raise LabelDefinitionError(\n f\"Error in label name definition '{name}': \"\n f\"Label consists of more than one word.\"\n )\n parents = tuple(parent) if isinstance(parent, Iterable) else (parent,)\n if entitytype == \"class\":\n parenttype = owlready2.ThingClass\n elif entitytype == \"data_property\":\n parenttype = owlready2.DataPropertyClass\n elif entitytype == \"object_property\":\n parenttype = owlready2.ObjectPropertyClass\n elif entitytype == \"annotation_property\":\n parenttype = owlready2.AnnotationPropertyClass\n elif entitytype in [\n ThingClass,\n ObjectPropertyClass,\n DataPropertyClass,\n AnnotationPropertyClass,\n ]:\n parenttype = entitytype\n else:\n raise EntityClassDefinitionError(\n f\"Error in entity type definition: \"\n f\"'{entitytype}' is not a valid entity type.\"\n )\n for thing in parents:\n if not isinstance(thing, parenttype):\n raise EntityClassDefinitionError(\n f\"Error in parent definition: \"\n f\"'{thing}' is not an {parenttype}.\"\n )\n\n with self:\n entity = types.new_class(name, parents)\n\n preflabel_iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n if preflabel:\n if not self.world[preflabel_iri]:\n pref_label = self.new_annotation_property(\n \"prefLabel\",\n parent=[owlready2.AnnotationProperty],\n )\n pref_label.iri = preflabel_iri\n entity.prefLabel = english(preflabel)\n elif (\n preflabel is None\n and preflabel_iri in self.label_annotations\n and self.world[preflabel_iri]\n ):\n entity.prefLabel = english(name)\n\n return entity\n
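A sketch of adding new entities; onto is an assumed loaded ontology and the parent labels Material and hasPart are illustrative:

new_cls = onto.new_entity("MyMaterial", onto.Material)                       # new class
new_rel = onto.new_entity("hasMyRelation", onto.hasPart, "object_property")  # new object property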
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.new_object_property","title":"new_object_property(self, name, parent)
","text":"Create and return new object property.
Parameters:
name (str): name of the object property. Required.
parent (Union[owlready2.prop.ObjectPropertyClass, collections.abc.Iterable]): parent(s) of the object property. Required.
Returns:
ObjectPropertyClass: the new object property.
Source code in ontopy/ontology.py
def new_object_property(\n self, name: str, parent: Union[ObjectPropertyClass, Iterable]\n) -> ObjectPropertyClass:\n \"\"\"Create and return new object property.\n\n Args:\n name: name of the object property\n parent: parent(s) of the object property\n\n Returns:\n the new object property.\n \"\"\"\n return self.new_entity(name, parent, \"object_property\")\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.number_of_generations","title":"number_of_generations(self, descendant, ancestor)
","text":"Return shortest distance from ancestor to descendant
Source code in ontopy/ontology.py
def number_of_generations(self, descendant, ancestor):\n \"\"\"Return shortest distance from ancestor to descendant\"\"\"\n if ancestor not in descendant.ancestors():\n raise ValueError(\"Descendant is not a descendant of ancestor\")\n return self._number_of_generations(descendant, ancestor, 0)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.object_properties","title":"object_properties(self, imported=False)
","text":"Returns an generator over all object_properties.
Parameters:
Name Type Description Defaultimported
if True
, entities in imported ontologies are also returned.
False
Source code in ontopy/ontology.py
def object_properties(self, imported=False):\n \"\"\"Returns an generator over all object_properties.\n\n Arguments:\n imported: if `True`, entities in imported ontologies\n are also returned.\n \"\"\"\n return self._entities(\"object_properties\", imported=imported)\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.remove_label_annotation","title":"remove_label_annotation(self, iri)
","text":"Removes label annotation used by get_by_label().
Source code in ontopy/ontology.py
def remove_label_annotation(self, iri):\n \"\"\"Removes label annotation used by get_by_label().\"\"\"\n warnings.warn(\n \"Ontology.remove_label_annotations() is deprecated. \"\n \"Direct modify the `label_annotations` attribute instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n if hasattr(iri, \"iri\"):\n iri = iri.iri\n try:\n self.label_annotations.remove(iri)\n except ValueError:\n pass\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.rename_entities","title":"rename_entities(self, annotations=('prefLabel', 'label', 'altLabel'))
","text":"Set name
of all entities to the first non-empty annotation in annotations
.
Warning, this method changes all IRIs in the ontology. However, it may be useful to make the ontology more readable and to work with it together with a triple store.
Source code in ontopy/ontology.py
def rename_entities(\n self,\n annotations=(\"prefLabel\", \"label\", \"altLabel\"),\n):\n \"\"\"Set `name` of all entities to the first non-empty annotation in\n `annotations`.\n\n Warning, this method changes all IRIs in the ontology. However,\n it may be useful to make the ontology more readable and to work\n with it together with a triple store.\n \"\"\"\n for entity in self.get_entities():\n for annotation in annotations:\n if hasattr(entity, annotation):\n name = getattr(entity, annotation).first()\n if name:\n entity.name = name\n break\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.save","title":"save(self, filename=None, format=None, dir='.', mkdir=False, overwrite=False, recursive=False, squash=False, write_catalog_file=False, append_catalog=False, catalog_file='catalog-v001.xml', **kwargs)
","text":"Writes the ontology to file.
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.save--parameters","title":"Parameters","text":"None | str | Path
Name of file to write to. If None, it defaults to the name of the ontology with format
as file extension.
str
Output format. The default is to infer it from filename
.
str | Path
If filename
is a relative path, it is a relative path to dir
.
bool
Whether to create output directory if it does not exists.
bool
If true and filename
exists, remove the existing file before saving. The default is to append to an existing ontology.
bool
Whether to save imported ontologies recursively. This is commonly combined with filename=None
, dir
and mkdir
. Note that depending on the structure of the ontology and all imports the ontology might end up in a subdirectory. If filename is given, the ontology is saved to the given directory. The path to the final location is returned.
bool
If true, rdflib will be used to save the current ontology together with all its sub-ontologies into filename
. It makes no sense to combine this with recursive
.
bool
Whether to also write a catalog file to disk.
bool
Whether to append to an existing catalog file.
str | Path
Name of catalog file. If not an absolute path, it is prepended to dir
.
The path to the saved ontology.\n
Source code in ontopy/ontology.py
def save(\n self,\n filename=None,\n format=None,\n dir=\".\",\n mkdir=False,\n overwrite=False,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n append_catalog=False,\n catalog_file=\"catalog-v001.xml\",\n **kwargs,\n) -> Path:\n \"\"\"Writes the ontology to file.\n\n Parameters\n ----------\n filename: None | str | Path\n Name of file to write to. If None, it defaults to the name\n of the ontology with `format` as file extension.\n format: str\n Output format. The default is to infer it from `filename`.\n dir: str | Path\n If `filename` is a relative path, it is a relative path to `dir`.\n mkdir: bool\n Whether to create output directory if it does not exists.\n owerwrite: bool\n If true and `filename` exists, remove the existing file before\n saving. The default is to append to an existing ontology.\n recursive: bool\n Whether to save imported ontologies recursively. This is\n commonly combined with `filename=None`, `dir` and `mkdir`.\n Note that depending on the structure of the ontology and\n all imports the ontology might end up in a subdirectory.\n If filename is given, the ontology is saved to the given\n directory.\n The path to the final location is returned.\n squash: bool\n If true, rdflib will be used to save the current ontology\n together with all its sub-ontologies into `filename`.\n It makes no sense to combine this with `recursive`.\n write_catalog_file: bool\n Whether to also write a catalog file to disk.\n append_catalog: bool\n Whether to append to an existing catalog file.\n catalog_file: str | Path\n Name of catalog file. If not an absolute path, it is prepended\n to `dir`.\n\n Returns\n --------\n The path to the saved ontology.\n \"\"\"\n # pylint: disable=redefined-builtin,too-many-arguments\n # pylint: disable=too-many-statements,too-many-branches\n # pylint: disable=too-many-locals,arguments-renamed,invalid-name\n\n if not _validate_installed_version(\n package=\"rdflib\", min_version=\"6.0.0\"\n ) and format == FMAP.get(\"ttl\", \"\"):\n from rdflib import ( # pylint: disable=import-outside-toplevel\n __version__ as __rdflib_version__,\n )\n\n warnings.warn(\n IncompatibleVersion(\n \"To correctly convert to Turtle format, rdflib must be \"\n \"version 6.0.0 or greater, however, the detected rdflib \"\n \"version used by your Python interpreter is \"\n f\"{__rdflib_version__!r}. 
For more information see the \"\n \"'Known issues' section of the README.\"\n )\n )\n revmap = {value: key for key, value in FMAP.items()}\n if filename is None:\n if format:\n fmt = revmap.get(format, format)\n file = f\"{self.name}.{fmt}\"\n else:\n raise TypeError(\"`filename` and `format` cannot both be None.\")\n else:\n file = filename\n filepath = os.path.join(\n dir, file if isinstance(file, (str, Path)) else file.name\n )\n returnpath = filepath\n\n dir = Path(filepath).resolve().parent\n\n if mkdir:\n outdir = Path(filepath).parent.resolve()\n if not outdir.exists():\n outdir.mkdir(parents=True)\n\n if not format:\n format = guess_format(file, fmap=FMAP)\n fmt = revmap.get(format, format)\n\n if overwrite and os.path.exists(filepath):\n os.remove(filepath)\n\n if recursive:\n if squash:\n raise ValueError(\n \"`recursive` and `squash` should not both be true\"\n )\n layout = directory_layout(self)\n if filename:\n layout[self] = file.rstrip(f\".{fmt}\")\n # Update path to where the ontology is saved\n # Note that filename should include format\n # when given\n returnpath = Path(dir) / f\"{layout[self]}.{fmt}\"\n for onto, path in layout.items():\n fname = Path(dir) / f\"{path}.{fmt}\"\n onto.save(\n filename=fname,\n format=format,\n dir=dir,\n mkdir=mkdir,\n overwrite=overwrite,\n recursive=False,\n squash=False,\n write_catalog_file=False,\n **kwargs,\n )\n\n if write_catalog_file:\n catalog_files = set()\n irimap = {}\n for onto, path in layout.items():\n irimap[onto.get_version(as_iri=True)] = (\n f\"{dir}/{path}.{fmt}\"\n )\n catalog_files.add(Path(path).parent / catalog_file)\n\n for catfile in catalog_files:\n write_catalog(\n irimap.copy(),\n output=catfile,\n directory=dir,\n append=append_catalog,\n )\n elif squash:\n URIRef, RDF, OWL = rdflib.URIRef, rdflib.RDF, rdflib.OWL\n\n # Make a copy of the owlready2 graph object to not mess with\n # owlready2 internals\n graph = rdflib.Graph()\n for triple in self.world.as_rdflib_graph():\n graph.add(triple)\n\n # Add common namespaces unknown to rdflib\n extra_namespaces = [\n (\"\", self.base_iri),\n (\"swrl\", \"http://www.w3.org/2003/11/swrl#\"),\n (\"bibo\", \"http://purl.org/ontology/bibo/\"),\n ]\n for prefix, iri in extra_namespaces:\n graph.namespace_manager.bind(\n prefix, rdflib.Namespace(iri), override=False\n )\n\n # Remove all ontology-declarations in the graph that are\n # not the current ontology.\n for s, _, _ in graph.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n if str(s).rstrip(\"/#\") != self.base_iri.rstrip(\"/#\"):\n for (\n _,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (s, None, None)\n ):\n graph.remove((s, p, o))\n graph.remove((s, OWL.imports, None))\n\n # Insert correct IRI of the ontology\n if self.iri:\n base_iri = URIRef(self.base_iri)\n for s, p, o in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((URIRef(self.iri), p, o))\n\n graph.serialize(destination=filepath, format=format)\n elif format in OWLREADY2_FORMATS:\n super().save(file=filepath, format=fmt, **kwargs)\n else:\n # The try-finally clause is needed for cleanup and because\n # we have to provide delete=False to NamedTemporaryFile\n # since Windows does not allow to reopen an already open\n # file.\n try:\n with tempfile.NamedTemporaryFile(\n suffix=\".owl\", delete=False\n ) as handle:\n tmpfile = handle.name\n super().save(tmpfile, format=\"ntriples\", **kwargs)\n graph = rdflib.Graph()\n 
graph.parse(tmpfile, format=\"ntriples\")\n graph.namespace_manager.bind(\n \"\", rdflib.Namespace(self.base_iri)\n )\n if self.iri:\n base_iri = rdflib.URIRef(self.base_iri)\n for (\n s,\n p,\n o,\n ) in graph.triples( # pylint: disable=not-an-iterable\n (base_iri, None, None)\n ):\n graph.remove((s, p, o))\n graph.add((rdflib.URIRef(self.iri), p, o))\n graph.serialize(destination=filepath, format=format)\n finally:\n os.remove(tmpfile)\n\n if write_catalog_file and not recursive:\n write_catalog(\n {self.get_version(as_iri=True): filepath},\n output=catalog_file,\n directory=dir,\n append=append_catalog,\n )\n return Path(returnpath)\n
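A sketch of common option combinations (paths are illustrative):

# Save the ontology and all its imports as Turtle files under 'output/'
onto.save(format="turtle", dir="output", recursive=True, mkdir=True, write_catalog_file=True)

# Squash the ontology together with its imports into a single file
onto.save("myonto-squashed.ttl", squash=True, overwrite=True)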
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.set_common_prefix","title":"set_common_prefix(self, iri_base='http://emmo.info/emmo', prefix='emmo', visited=None)
","text":"Set a common prefix for all imported ontologies with the same first part of the base_iri.
Parameters:
iri_base (str): The start of the base_iri to look for. Defaults to the emmo base_iri http://emmo.info/emmo. Default: 'http://emmo.info/emmo'.
prefix (str): the desired prefix. Defaults to emmo. Default: 'emmo'.
visited (Optional[Set]): Ontologies to skip. Only intended for internal use. Default: None.
Source code in ontopy/ontology.py
def set_common_prefix(\n self,\n iri_base: str = \"http://emmo.info/emmo\",\n prefix: str = \"emmo\",\n visited: \"Optional[Set]\" = None,\n) -> None:\n \"\"\"Set a common prefix for all imported ontologies\n with the same first part of the base_iri.\n\n Args:\n iri_base: The start of the base_iri to look for. Defaults to\n the emmo base_iri http://emmo.info/emmo\n prefix: the desired prefix. Defaults to emmo.\n visited: Ontologies to skip. Only intended for internal use.\n \"\"\"\n if visited is None:\n visited = set()\n if self.base_iri.startswith(iri_base):\n self.prefix = prefix\n for onto in self.imported_ontologies:\n if not onto in visited:\n visited.add(onto)\n onto.set_common_prefix(\n iri_base=iri_base, prefix=prefix, visited=visited\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.set_default_label_annotations","title":"set_default_label_annotations(self)
","text":"Sets the default label annotations.
Source code in ontopy/ontology.py
def set_default_label_annotations(self):\n \"\"\"Sets the default label annotations.\"\"\"\n warnings.warn(\n \"Ontology.set_default_label_annotations() is deprecated. \"\n \"Default label annotations are set by Ontology.__init__(). \",\n DeprecationWarning,\n stacklevel=2,\n )\n self.label_annotations = DEFAULT_LABEL_ANNOTATIONS[:]\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.set_version","title":"set_version(self, version=None, version_iri=None)
","text":"Assign version to ontology by asigning owl:versionIRI.
If version
but not version_iri
is provided, the version IRI will be the combination of base_iri
and version
.
ontopy/ontology.py
def set_version(self, version=None, version_iri=None):\n \"\"\"Assign version to ontology by asigning owl:versionIRI.\n\n If `version` but not `version_iri` is provided, the version\n IRI will be the combination of `base_iri` and `version`.\n \"\"\"\n _version_iri = \"http://www.w3.org/2002/07/owl#versionIRI\"\n version_iri_storid = self.world._abbreviate(_version_iri)\n if self._has_obj_triple_spo( # pylint: disable=unexpected-keyword-arg\n # For some reason _has_obj_triples_spo exists in both\n # owlready2.namespace.Namespace (with arguments subject/predicate)\n # and in owlready2.triplelite._GraphManager (with arguments s/p)\n # owlready2.Ontology inherits from Namespace directly\n # and pylint checks that.\n # It actually accesses the one in triplelite.\n # subject=self.storid, predicate=version_iri_storid\n s=self.storid,\n p=version_iri_storid,\n ):\n self._del_obj_triple_spo(s=self.storid, p=version_iri_storid)\n\n if not version_iri:\n if not version:\n raise TypeError(\n \"Either `version` or `version_iri` must be provided\"\n )\n head, tail = self.base_iri.rstrip(\"#/\").rsplit(\"/\", 1)\n version_iri = \"/\".join([head, version, tail])\n\n self._add_obj_triple_spo(\n s=self.storid,\n p=self.world._abbreviate(_version_iri),\n o=self.world._abbreviate(version_iri),\n )\n
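A usage sketch (the version number and IRI are illustrative):

onto.set_version(version="1.0.1")  # version IRI is derived from base_iri and version
onto.set_version(version_iri="http://emmo.info/emmo/1.0.1/myonto")  # or set the full version IRI explicitly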
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.sync_attributes","title":"sync_attributes(self, name_policy=None, name_prefix='', class_docstring='comment', sync_imported=False)
","text":"This method is intended to be called after you have added new classes (typically via Python) to make sure that attributes like label
and comments
are defined.
If a class, object property, data property or annotation property in the current ontology has no label, the name of the corresponding Python class will be assigned as label.
If a class, object property, data property or annotation property has no comment, it will be assigned the docstring of the corresponding Python class.
name_policy
specifies whether and how the names in the ontology should be updated. Valid values are: None not changed \"uuid\" name_prefix
followed by a globally unique id (UUID). If the name is already valid according to this standard it will not be regenerated. \"sequential\" name_prefix
followed by a sequential number. EMMO conventions imply name_policy=='uuid'
.
If sync_imported
is true, all imported ontologies are also updated.
The class_docstring
argument specifies the annotation that class docstrings are mapped to. Defaults to \"comment\".
ontopy/ontology.py
def sync_attributes( # pylint: disable=too-many-branches\n self,\n name_policy=None,\n name_prefix=\"\",\n class_docstring=\"comment\",\n sync_imported=False,\n):\n \"\"\"This method is intended to be called after you have added new\n classes (typically via Python) to make sure that attributes like\n `label` and `comments` are defined.\n\n If a class, object property, data property or annotation\n property in the current ontology has no label, the name of\n the corresponding Python class will be assigned as label.\n\n If a class, object property, data property or annotation\n property has no comment, it will be assigned the docstring of\n the corresponding Python class.\n\n `name_policy` specify wether and how the names in the ontology\n should be updated. Valid values are:\n None not changed\n \"uuid\" `name_prefix` followed by a global unique id (UUID).\n If the name is already valid accoridng to this standard\n it will not be regenerated.\n \"sequential\" `name_prefix` followed a sequantial number.\n EMMO conventions imply ``name_policy=='uuid'``.\n\n If `sync_imported` is true, all imported ontologies are also\n updated.\n\n The `class_docstring` argument specifies the annotation that\n class docstrings are mapped to. Defaults to \"comment\".\n \"\"\"\n for cls in itertools.chain(\n self.classes(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n ):\n if not hasattr(cls, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=unused-variable\n class prefLabel(owlready2.label):\n pass\n\n cls.prefLabel = [locstr(cls.__name__, lang=\"en\")]\n elif not cls.prefLabel:\n cls.prefLabel.append(locstr(cls.__name__, lang=\"en\"))\n if class_docstring and hasattr(cls, \"__doc__\") and cls.__doc__:\n getattr(cls, class_docstring).append(\n locstr(inspect.cleandoc(cls.__doc__), lang=\"en\")\n )\n\n for ind in self.individuals():\n if not hasattr(ind, \"prefLabel\"):\n # no prefLabel - create new annotation property..\n with self:\n # pylint: disable=invalid-name,missing-class-docstring\n # pylint: disable=function-redefined\n class prefLabel(owlready2.label):\n iri = \"http://www.w3.org/2004/02/skos/core#prefLabel\"\n\n ind.prefLabel = [locstr(ind.name, lang=\"en\")]\n elif not ind.prefLabel:\n ind.prefLabel.append(locstr(ind.name, lang=\"en\"))\n\n chain = itertools.chain(\n self.classes(),\n self.individuals(),\n self.object_properties(),\n self.data_properties(),\n self.annotation_properties(),\n )\n if name_policy == \"uuid\":\n for obj in chain:\n try:\n # Passing the following means that the name is valid\n # and need not be regenerated.\n if not obj.name.startswith(name_prefix):\n raise ValueError\n uuid.UUID(obj.name.lstrip(name_prefix), version=5)\n except ValueError:\n obj.name = name_prefix + str(\n uuid.uuid5(uuid.NAMESPACE_DNS, obj.name)\n )\n elif name_policy == \"sequential\":\n for obj in chain:\n counter = 0\n while f\"{self.base_iri}{name_prefix}{counter}\" in self:\n counter += 1\n obj.name = f\"{name_prefix}{counter}\"\n elif name_policy is not None:\n raise TypeError(f\"invalid name_policy: {name_policy!r}\")\n\n if sync_imported:\n for onto in self.imported_ontologies:\n onto.sync_attributes()\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.sync_python_names","title":"sync_python_names(self, annotations=('prefLabel', 'label', 'altLabel'))
","text":"Update the python_name
attribute of all properties.
The python_name attribute will be set to the first non-empty annotation in the sequence of annotations in annotations
for the property.
ontopy/ontology.py
def sync_python_names(self, annotations=(\"prefLabel\", \"label\", \"altLabel\")):\n \"\"\"Update the `python_name` attribute of all properties.\n\n The python_name attribute will be set to the first non-empty\n annotation in the sequence of annotations in `annotations` for\n the property.\n \"\"\"\n\n def update(gen):\n for prop in gen:\n for annotation in annotations:\n if hasattr(prop, annotation) and getattr(prop, annotation):\n prop.python_name = getattr(prop, annotation).first()\n break\n\n update(\n self.get_entities(\n classes=False,\n individuals=False,\n object_properties=False,\n data_properties=False,\n )\n )\n update(\n self.get_entities(\n classes=False, individuals=False, annotation_properties=False\n )\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.Ontology.sync_reasoner","title":"sync_reasoner(self, reasoner='HermiT', include_imported=False, **kwargs)
","text":"Update current ontology by running the given reasoner.
Supported values for reasoner
are 'HermiT' (default), 'Pellet' and 'FaCT++'.
If include_imported
is true, the reasoner will also reason over imported ontologies. Note that this may be very slow.
Keyword arguments are passed to the underlying owlready2 function.
Source code in ontopy/ontology.py
def sync_reasoner(\n self, reasoner=\"HermiT\", include_imported=False, **kwargs\n):\n \"\"\"Update current ontology by running the given reasoner.\n\n Supported values for `reasoner` are 'HermiT' (default), Pellet\n and 'FaCT++'.\n\n If `include_imported` is true, the reasoner will also reason\n over imported ontologies. Note that this may be **very** slow.\n\n Keyword arguments are passed to the underlying owlready2 function.\n \"\"\"\n # pylint: disable=too-many-branches\n\n removed_equivalent = defaultdict(list)\n removed_subclasses = defaultdict(list)\n\n if reasoner == \"FaCT++\":\n sync = sync_reasoner_factpp\n elif reasoner == \"Pellet\":\n sync = owlready2.sync_reasoner_pellet\n elif reasoner == \"HermiT\":\n sync = owlready2.sync_reasoner_hermit\n\n # Remove custom data propertyes, otherwise HermiT will crash\n datatype_iri = \"http://www.w3.org/2000/01/rdf-schema#Datatype\"\n\n for cls in self.classes(imported=include_imported):\n remove_eq = []\n for i, r in enumerate(cls.equivalent_to):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_eq.append(i)\n removed_equivalent[cls].append(r)\n for i in reversed(remove_eq):\n del cls.equivalent_to[i]\n\n remove_subcls = []\n for i, r in enumerate(cls.is_a):\n if isinstance(r, owlready2.Restriction):\n if (\n hasattr(r.value.__class__, \"iri\")\n and r.value.__class__.iri == datatype_iri\n ):\n remove_subcls.append(i)\n removed_subclasses[cls].append(r)\n for i in reversed(remove_subcls):\n del cls.is_a[i]\n\n else:\n raise ValueError(\n f\"Unknown reasoner '{reasoner}'. Supported reasoners \"\n \"are 'Pellet', 'HermiT' and 'FaCT++'.\"\n )\n\n # For some reason we must visit all entities once before running\n # the reasoner...\n list(self.get_entities())\n\n with self:\n if include_imported:\n sync(self.world, **kwargs)\n else:\n sync(self, **kwargs)\n\n # Restore removed custom data properties\n for cls, eqs in removed_equivalent.items():\n cls.extend(eqs)\n for cls, subcls in removed_subclasses.items():\n cls.extend(subcls)\n
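For illustration, a minimal sketch of running the FaCT++ reasoner on an ontology object onto (assumes the FaCT++ wrapper and a Java runtime are available):

# Reason over the current ontology only; pass include_imported=True to
# also reason over imported ontologies (may be very slow).
onto.sync_reasoner(reasoner="FaCT++")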
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.World","title":" World (World)
","text":"A subclass of owlready2.World.
Source code in ontopy/ontology.py
class World(owlready2.World):\n \"\"\"A subclass of owlready2.World.\"\"\"\n\n def __init__(self, *args, **kwargs):\n # Caches stored in the world\n self._cached_catalogs = {} # maps url to (mtime, iris, dirs)\n self._iri_mappings = {} # all iri mappings loaded so far\n super().__init__(*args, **kwargs)\n\n def get_ontology(\n self,\n base_iri: str = \"emmo-inferred\",\n OntologyClass: \"owlready2.Ontology\" = None,\n label_annotations: \"Sequence\" = None,\n ) -> \"Ontology\":\n # pylint: disable=too-many-branches\n \"\"\"Returns a new Ontology from `base_iri`.\n\n Arguments:\n base_iri: The base IRI of the ontology. May be one of:\n - valid URL (possible excluding final .owl or .ttl)\n - file name (possible excluding final .owl or .ttl)\n - \"emmo\": load latest version of asserted EMMO\n - \"emmo-inferred\": load latest version of inferred EMMO\n (default)\n - \"emmo-development\": load latest inferred development\n version of EMMO. Until first stable release\n emmo-inferred and emmo-development will be the same.\n OntologyClass: If given and `base_iri` doesn't correspond\n to an existing ontology, a new ontology is created of\n this Ontology subclass. Defaults to `ontopy.Ontology`.\n label_annotations: Sequence of label IRIs used for accessing\n entities in the ontology given that they are in the ontology.\n Label IRIs not in the ontology will need to be added to\n ontologies in order to be accessible.\n Defaults to DEFAULT_LABEL_ANNOTATIONS if set to None.\n \"\"\"\n base_iri = base_iri.as_uri() if isinstance(base_iri, Path) else base_iri\n\n if base_iri == \"emmo\":\n base_iri = (\n \"http://emmo-repo.github.io/versions/1.0.0-beta4/emmo.ttl\"\n )\n elif base_iri == \"emmo-inferred\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta4/\"\n \"emmo-inferred.ttl\"\n )\n elif base_iri == \"emmo-development\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta5/\"\n \"emmo-inferred.ttl\"\n )\n\n if base_iri in self.ontologies:\n onto = self.ontologies[base_iri]\n elif base_iri + \"#\" in self.ontologies:\n onto = self.ontologies[base_iri + \"#\"]\n elif base_iri + \"/\" in self.ontologies:\n onto = self.ontologies[base_iri + \"/\"]\n else:\n if os.path.exists(base_iri):\n iri = os.path.abspath(base_iri)\n elif os.path.exists(base_iri + \".ttl\"):\n iri = os.path.abspath(base_iri + \".ttl\")\n elif os.path.exists(base_iri + \".owl\"):\n iri = os.path.abspath(base_iri + \".owl\")\n else:\n iri = base_iri\n\n if iri[-1] not in \"/#\":\n iri += \"#\"\n\n if OntologyClass is None:\n OntologyClass = Ontology\n\n onto = OntologyClass(self, iri)\n\n if label_annotations:\n onto.label_annotations = list(label_annotations)\n\n return onto\n\n def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n ):\n # pylint: disable=invalid-name\n \"\"\"Returns all triples unabbreviated.\n\n If any of the `subject`, `predicate` or `obj` arguments are given,\n only matching triples will be returned.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.World.get_ontology","title":"get_ontology(self, base_iri='emmo-inferred', OntologyClass=None, label_annotations=None)
","text":"Returns a new Ontology from base_iri
.
Parameters:
base_iri (str): The base IRI of the ontology. May be one of:
- valid URL (possibly excluding final .owl or .ttl)
- file name (possibly excluding final .owl or .ttl)
- \"emmo\": load latest version of asserted EMMO
- \"emmo-inferred\": load latest version of inferred EMMO (default)
- \"emmo-development\": load latest inferred development version of EMMO. Until the first stable release, emmo-inferred and emmo-development will be the same.
Default: 'emmo-inferred'
OntologyClass (owlready2.Ontology): If given and base_iri doesn't correspond to an existing ontology, a new ontology is created of this Ontology subclass. Defaults to ontopy.Ontology. Default: None
label_annotations (Sequence): Sequence of label IRIs used for accessing entities in the ontology given that they are in the ontology. Label IRIs not in the ontology will need to be added to ontologies in order to be accessible. Defaults to DEFAULT_LABEL_ANNOTATIONS if set to None. Default: None
Source code in ontopy/ontology.py
def get_ontology(\n self,\n base_iri: str = \"emmo-inferred\",\n OntologyClass: \"owlready2.Ontology\" = None,\n label_annotations: \"Sequence\" = None,\n) -> \"Ontology\":\n # pylint: disable=too-many-branches\n \"\"\"Returns a new Ontology from `base_iri`.\n\n Arguments:\n base_iri: The base IRI of the ontology. May be one of:\n - valid URL (possible excluding final .owl or .ttl)\n - file name (possible excluding final .owl or .ttl)\n - \"emmo\": load latest version of asserted EMMO\n - \"emmo-inferred\": load latest version of inferred EMMO\n (default)\n - \"emmo-development\": load latest inferred development\n version of EMMO. Until first stable release\n emmo-inferred and emmo-development will be the same.\n OntologyClass: If given and `base_iri` doesn't correspond\n to an existing ontology, a new ontology is created of\n this Ontology subclass. Defaults to `ontopy.Ontology`.\n label_annotations: Sequence of label IRIs used for accessing\n entities in the ontology given that they are in the ontology.\n Label IRIs not in the ontology will need to be added to\n ontologies in order to be accessible.\n Defaults to DEFAULT_LABEL_ANNOTATIONS if set to None.\n \"\"\"\n base_iri = base_iri.as_uri() if isinstance(base_iri, Path) else base_iri\n\n if base_iri == \"emmo\":\n base_iri = (\n \"http://emmo-repo.github.io/versions/1.0.0-beta4/emmo.ttl\"\n )\n elif base_iri == \"emmo-inferred\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta4/\"\n \"emmo-inferred.ttl\"\n )\n elif base_iri == \"emmo-development\":\n base_iri = (\n \"https://emmo-repo.github.io/versions/1.0.0-beta5/\"\n \"emmo-inferred.ttl\"\n )\n\n if base_iri in self.ontologies:\n onto = self.ontologies[base_iri]\n elif base_iri + \"#\" in self.ontologies:\n onto = self.ontologies[base_iri + \"#\"]\n elif base_iri + \"/\" in self.ontologies:\n onto = self.ontologies[base_iri + \"/\"]\n else:\n if os.path.exists(base_iri):\n iri = os.path.abspath(base_iri)\n elif os.path.exists(base_iri + \".ttl\"):\n iri = os.path.abspath(base_iri + \".ttl\")\n elif os.path.exists(base_iri + \".owl\"):\n iri = os.path.abspath(base_iri + \".owl\")\n else:\n iri = base_iri\n\n if iri[-1] not in \"/#\":\n iri += \"#\"\n\n if OntologyClass is None:\n OntologyClass = Ontology\n\n onto = OntologyClass(self, iri)\n\n if label_annotations:\n onto.label_annotations = list(label_annotations)\n\n return onto\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.World.get_unabbreviated_triples","title":"get_unabbreviated_triples(self, subject=None, predicate=None, obj=None, blank=None)
","text":"Returns all triples unabbreviated.
If any of the subject
, predicate
or obj
arguments are given, only matching triples will be returned.
If blank
is given, it will be used to represent blank nodes.
ontopy/ontology.py
def get_unabbreviated_triples(\n self, subject=None, predicate=None, obj=None, blank=None\n):\n # pylint: disable=invalid-name\n \"\"\"Returns all triples unabbreviated.\n\n If any of the `subject`, `predicate` or `obj` arguments are given,\n only matching triples will be returned.\n\n If `blank` is given, it will be used to represent blank nodes.\n \"\"\"\n return _get_unabbreviated_triples(\n self, subject=subject, predicate=predicate, obj=obj, blank=blank\n )\n
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.flatten","title":"flatten(items)
","text":"Yield items from any nested iterable.
Source code in ontopy/ontology.py
def flatten(items):\n \"\"\"Yield items from any nested iterable.\"\"\"\n for item in items:\n if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):\n yield from flatten(item)\n else:\n yield item\n
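A small example of the generator in use; note that strings and bytes are treated as atomic items (assuming flatten is imported from ontopy.ontology):

from ontopy.ontology import flatten

list(flatten([1, [2, [3, 4]], "ab"]))  # -> [1, 2, 3, 4, 'ab']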
"},{"location":"api_reference/ontopy/ontology/#ontopy.ontology.get_ontology","title":"get_ontology(*args, **kwargs)
","text":"Returns a new Ontology from base_iri
.
This is a convenience function for calling World.get_ontology().
Source code in ontopy/ontology.py
def get_ontology(*args, **kwargs):\n \"\"\"Returns a new Ontology from `base_iri`.\n\n This is a convenient function for calling World.get_ontology().\"\"\"\n return World().get_ontology(*args, **kwargs)\n
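A typical usage sketch of this convenience function (network access is needed for the predefined EMMO IRIs; accessing emmo.Atom assumes 'Atom' is a prefLabel in the loaded ontology):

from ontopy import get_ontology

# Load the latest inferred EMMO and look up an entity by its prefLabel.
emmo = get_ontology("emmo-inferred").load()
print(emmo.Atom.prefLabel)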
"},{"location":"api_reference/ontopy/patch/","title":"patch","text":"This module injects some additional methods into owlready2 classes.
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.disjoint_with","title":"disjoint_with(self, reduce=False)
","text":"Returns a generator with all classes that are disjoint with self
.
If reduce
is True
, all classes that are a descendant of another class will be excluded.
ontopy/patch.py
def disjoint_with(self, reduce=False):\n \"\"\"Returns a generator with all classes that are disjoint with `self`.\n\n If `reduce` is `True`, all classes that are a descendant of another class\n will be excluded.\n \"\"\"\n if reduce:\n disjoint_set = set(self.disjoint_with())\n for entity in disjoint_set.copy():\n disjoint_set.difference_update(\n entity.descendants(include_self=False)\n )\n yield from disjoint_set\n else:\n for disjoint in self.disjoints():\n for entity in disjoint.entities:\n if entity is not self:\n yield entity\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_annotations","title":"get_annotations(self, all=False, imported=True)
","text":"Returns a dict with non-empty annotations.
If all
is True
, also annotations with no value are included.
If imported
is True
, also include annotations defined in imported ontologies.
ontopy/patch.py
def get_annotations(\n self, all=False, imported=True\n): # pylint: disable=redefined-builtin\n \"\"\"Returns a dict with non-empty annotations.\n\n If `all` is `True`, also annotations with no value are included.\n\n If `imported` is `True`, also include annotations defined in imported\n ontologies.\n \"\"\"\n onto = self.namespace.ontology\n\n def extend(key, values):\n \"\"\"Extend annotations with a sequence of values.\"\"\"\n if key in annotations:\n annotations[key].extend(values)\n else:\n annotations[key] = values\n\n annotations = {\n str(get_preferred_label(a)): a._get_values_for_class(self)\n for a in onto.annotation_properties(imported=imported)\n }\n extend(\"comment\", self.comment)\n extend(\"label\", self.label)\n if all:\n return annotations\n return {key: value for key, value in annotations.items() if value}\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_indirect_is_a","title":"get_indirect_is_a(self, skip_classes=True)
","text":"Returns the set of all isSubclassOf relations of self and its ancestors.
If skip_classes
is True
, indirect classes are not included in the returned set.
ontopy/patch.py
def get_indirect_is_a(self, skip_classes=True):\n \"\"\"Returns the set of all isSubclassOf relations of self and its ancestors.\n\n If `skip_classes` is `True`, indirect classes are not included in the\n returned set.\n \"\"\"\n subclass_relations = set()\n for entity in reversed(self.mro()):\n for attr in \"is_a\", \"equivalent_to\":\n if hasattr(entity, attr):\n lst = getattr(entity, attr)\n if skip_classes:\n subclass_relations.update(\n r\n for r in lst\n if not isinstance(r, owlready2.ThingClass)\n )\n else:\n subclass_relations.update(lst)\n\n subclass_relations.update(self.is_a)\n return subclass_relations\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_parents","title":"get_parents(self, strict=False)
","text":"Returns a list of all parents.
If strict
is True
, parents that are parents of other parents are excluded.
ontopy/patch.py
def get_parents(self, strict=False):\n \"\"\"Returns a list of all parents.\n\n If `strict` is `True`, parents that are parents of other parents are\n excluded.\n \"\"\"\n if strict:\n parents = self.get_parents()\n for entity in parents.copy():\n parents.difference_update(entity.ancestors(include_self=False))\n return parents\n if isinstance(self, ThingClass):\n return {cls for cls in self.is_a if isinstance(cls, ThingClass)}\n if isinstance(self, owlready2.ObjectPropertyClass):\n return {\n cls\n for cls in self.is_a\n if isinstance(cls, owlready2.ObjectPropertyClass)\n }\n raise EMMOntoPyException(\n \"self has no parents - this should not be possible!\"\n )\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_preferred_label","title":"get_preferred_label(self)
","text":"Returns the preferred label as a string (not list).
The following heuristics are used:
- if prefLabel annotation property exists, returns the first prefLabel
- if label annotation property exists, returns the first label
- otherwise return the name
Source code in ontopy/patch.py
def get_preferred_label(self):\n \"\"\"Returns the preferred label as a string (not list).\n\n The following heuristics is used:\n - if prefLabel annotation property exists, returns the first prefLabel\n - if label annotation property exists, returns the first label\n - otherwise return the name\n \"\"\"\n if hasattr(self, \"prefLabel\") and self.prefLabel:\n return self.prefLabel[0]\n if hasattr(self, \"label\") and self.label:\n return self.label.first()\n return self.name\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.get_typename","title":"get_typename(self)
","text":"Get restriction type label/name.
Source code in ontopy/patch.py
def get_typename(self):\n \"\"\"Get restriction type label/name.\"\"\"\n return owlready2.class_construct._restriction_type_2_label[self.type]\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.has","title":"has(self, name)
","text":"Returns true if name
ontopy/patch.py
def has(self, name):\n \"\"\"Returns true if `name`\"\"\"\n return name in set(self.keys())\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.items","title":"items(self)
","text":"Return a generator over annotation property (name, value_list) pairs associates with this ontology.
Source code in ontopy/patch.py
def items(self):\n \"\"\"Return a generator over annotation property (name, value_list)\n pairs associates with this ontology.\"\"\"\n namespace = self.namespace\n for annotation in namespace.annotation_properties():\n if namespace._has_data_triple_spod(\n s=namespace.storid, p=annotation.storid\n ):\n yield annotation, getattr(self, annotation.name)\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.keys","title":"keys(self)
","text":"Return a generator over annotation property names associated with this ontology.
Source code in ontopy/patch.py
def keys(self):\n \"\"\"Return a generator over annotation property names associated\n with this ontology.\"\"\"\n namespace = self.namespace\n for annotation in namespace.annotation_properties():\n if namespace._has_data_triple_spod(\n s=namespace.storid, p=annotation.storid\n ):\n yield annotation\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.namespace_init","title":"namespace_init(self, world_or_ontology, base_iri, name=None)
","text":"init function for the Namespace
class.
ontopy/patch.py
def namespace_init(self, world_or_ontology, base_iri, name=None):\n \"\"\"__init__ function for the `Namespace` class.\"\"\"\n orig_namespace_init(self, world_or_ontology, base_iri, name)\n if self.name.endswith(\".ttl\"):\n self.name = self.name[:-4]\n
"},{"location":"api_reference/ontopy/patch/#ontopy.patch.render_func","title":"render_func(entity)
","text":"Improve default rendering of entities.
Source code in ontopy/patch.py
def render_func(entity):\n \"\"\"Improve default rendering of entities.\"\"\"\n if hasattr(entity, \"prefLabel\") and entity.prefLabel:\n name = entity.prefLabel[0]\n elif hasattr(entity, \"label\") and entity.label:\n name = entity.label[0]\n elif hasattr(entity, \"altLabel\") and entity.altLabel:\n name = entity.altLabel[0]\n else:\n name = entity.name\n return f\"{entity.namespace.name}.{name}\"\n
"},{"location":"api_reference/ontopy/testutils/","title":"testutils","text":"Module primarly intended to be imported by tests.
It defines some directories and some utility functions that can be used with and without conftest.
"},{"location":"api_reference/ontopy/testutils/#ontopy.testutils.get_tool_module","title":"get_tool_module(name)
","text":"Imports and returns the module for the EMMOntoPy tool corresponding to name
.
ontopy/testutils.py
def get_tool_module(name):\n \"\"\"Imports and returns the module for the EMMOntoPy tool\n corresponding to `name`.\"\"\"\n if str(toolsdir) not in sys.path:\n sys.path.append(str(toolsdir))\n\n # For Python 3.4+\n spec = spec_from_loader(name, SourceFileLoader(name, str(toolsdir / name)))\n module = module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n
"},{"location":"api_reference/ontopy/utils/","title":"utils","text":"Some generic utility functions.
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.AmbiguousLabelError","title":" AmbiguousLabelError (LookupError, AttributeError, EMMOntoPyException)
","text":"Error raised when a label is ambiguous.
Source code in ontopy/utils.py
class AmbiguousLabelError(LookupError, AttributeError, EMMOntoPyException):\n \"\"\"Error raised when a label is ambiguous.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.EMMOntoPyException","title":" EMMOntoPyException (Exception)
","text":"A BaseException class for EMMOntoPy
Source code in ontopy/utils.py
class EMMOntoPyException(Exception):\n \"\"\"A BaseException class for EMMOntoPy\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.EMMOntoPyWarning","title":" EMMOntoPyWarning (Warning)
","text":"A BaseWarning class for EMMOntoPy
Source code in ontopy/utils.py
class EMMOntoPyWarning(Warning):\n \"\"\"A BaseWarning class for EMMOntoPy\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.EntityClassDefinitionError","title":" EntityClassDefinitionError (EMMOntoPyException)
","text":"Error in ThingClass definition.
Source code in ontopy/utils.py
class EntityClassDefinitionError(EMMOntoPyException):\n \"\"\"Error in ThingClass definition.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.IncompatibleVersion","title":" IncompatibleVersion (EMMOntoPyWarning)
","text":"An installed dependency version may be incompatible with a functionality of this package - or rather an outcome of a functionality. This is not critical, hence this is only a warning.
Source code in ontopy/utils.py
class IncompatibleVersion(EMMOntoPyWarning):\n \"\"\"An installed dependency version may be incompatible with a functionality\n of this package - or rather an outcome of a functionality.\n This is not critical, hence this is only a warning.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.IndividualWarning","title":" IndividualWarning (EMMOntoPyWarning)
","text":"A warning related to an individual, e.g. punning.
Source code in ontopy/utils.py
class IndividualWarning(EMMOntoPyWarning):\n \"\"\"A warning related to an individual, e.g. punning.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.LabelDefinitionError","title":" LabelDefinitionError (EMMOntoPyException)
","text":"Error in label definition.
Source code in ontopy/utils.py
class LabelDefinitionError(EMMOntoPyException):\n \"\"\"Error in label definition.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.NoSuchLabelError","title":" NoSuchLabelError (LookupError, AttributeError, EMMOntoPyException)
","text":"Error raised when a label cannot be found.
Source code in ontopy/utils.py
class NoSuchLabelError(LookupError, AttributeError, EMMOntoPyException):\n \"\"\"Error raised when a label cannot be found.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.ReadCatalogError","title":" ReadCatalogError (OSError)
","text":"Error reading catalog file.
Source code in ontopy/utils.py
class ReadCatalogError(IOError):\n \"\"\"Error reading catalog file.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.UnknownVersion","title":" UnknownVersion (EMMOntoPyException)
","text":"Cannot retrieve version from a package.
Source code in ontopy/utils.py
class UnknownVersion(EMMOntoPyException):\n \"\"\"Cannot retrieve version from a package.\"\"\"\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.annotate_source","title":"annotate_source(onto, imported=True)
","text":"Annotate all entities with the base IRI of the ontology using rdfs:isDefinedBy
annotations.
If imported
is true, all entities in imported sub-ontologies will also be annotated.
This is contextual information that is otherwise lost when the ontology is squashed and/or inferred.
Source code in ontopy/utils.py
def annotate_source(onto, imported=True):\n \"\"\"Annotate all entities with the base IRI of the ontology using\n `rdfs:isDefinedBy` annotations.\n\n If `imported` is true, all entities in imported sub-ontologies will\n also be annotated.\n\n This is contextual information that is otherwise lost when the ontology\n is squashed and/or inferred.\n \"\"\"\n source = onto._abbreviate(\n \"http://www.w3.org/2000/01/rdf-schema#isDefinedBy\"\n )\n for entity in onto.get_entities(imported=imported):\n triple = (\n entity.storid,\n source,\n onto._abbreviate(entity.namespace.ontology.base_iri),\n )\n if not onto._has_obj_triple_spo(*triple):\n onto._add_obj_triple_spo(*triple)\n
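A short sketch of how this might be used before saving a squashed ontology (file name and format handling are illustrative; Turtle output relies on EMMOntoPy's Turtle support):

# Record the defining ontology of every entity, then save to a single file.
annotate_source(onto, imported=True)
onto.save("onto-squashed.ttl", format="turtle")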
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.asstring","title":"asstring(expr, link='{label}', recursion_depth=0, exclude_object=False, ontology=None)
","text":"Returns a string representation of expr
.
Parameters:
expr: The entity, restriction or a logical expression of these to represent. (required)
link: A template for links. May contain the following variables:
- {iri}: The full IRI of the concept.
- {name}: Name-part of IRI.
- {ref}: \"#{name}\" if the base iri of the ontology has the same root as {iri}, otherwise \"{iri}\".
- {label}: The label of the concept.
- {lowerlabel}: The label of the concept in lower case and with spaces replaced with hyphens.
Default: '{label}'
recursion_depth: Recursion depth. Only intended for internal use. Default: 0
exclude_object: If true, the object will be excluded in restrictions. Default: False
ontology: Ontology object. Default: None
Returns:
str: String representation of expr.
ontopy/utils.py
def asstring(\n expr,\n link=\"{label}\",\n recursion_depth=0,\n exclude_object=False,\n ontology=None,\n) -> str:\n \"\"\"Returns a string representation of `expr`.\n\n Arguments:\n expr: The entity, restriction or a logical expression or these\n to represent.\n link: A template for links. May contain the following variables:\n - {iri}: The full IRI of the concept.\n - {name}: Name-part of IRI.\n - {ref}: \"#{name}\" if the base iri of hte ontology has the same\n root as {iri}, otherwise \"{iri}\".\n - {label}: The label of the concept.\n - {lowerlabel}: The label of the concept in lower case and with\n spaces replaced with hyphens.\n recursion_depth: Recursion depth. Only intended for internal use.\n exclude_object: If true, the object will be excluded in restrictions.\n ontology: Ontology object.\n\n Returns:\n String representation of `expr`.\n \"\"\"\n # pylint: disable=too-many-return-statements,too-many-branches,too-many-statements\n if ontology is None:\n ontology = expr.ontology\n\n def fmt(entity):\n \"\"\"Returns the formatted label of an entity.\"\"\"\n if isinstance(entity, str):\n if ontology and ontology.world[entity]:\n iri = ontology.world[entity].iri\n elif (\n ontology\n and re.match(\"^[a-zA-Z0-9_+-]+$\", entity)\n and entity in ontology\n ):\n iri = ontology[entity].iri\n else:\n # This may not be a valid IRI, but the best we can do\n iri = entity\n label = entity\n else:\n iri = entity.iri\n label = get_label(entity)\n name = getiriname(iri)\n start = iri.split(\"#\", 1)[0] if \"#\" in iri else iri.rsplit(\"/\", 1)[0]\n ref = f\"#{name}\" if ontology.base_iri.startswith(start) else iri\n return link.format(\n entity=entity,\n name=name,\n ref=ref,\n iri=iri,\n label=label,\n lowerlabel=label.lower().replace(\" \", \"-\"),\n )\n\n if isinstance(expr, str):\n # return link.format(name=expr)\n return fmt(expr)\n if isinstance(expr, owlready2.Restriction):\n rlabel = owlready2.class_construct._restriction_type_2_label[expr.type]\n\n if isinstance(\n expr.property,\n (owlready2.ObjectPropertyClass, owlready2.DataPropertyClass),\n ):\n res = fmt(expr.property)\n elif isinstance(expr.property, owlready2.Inverse):\n string = asstring(\n expr.property.property,\n link,\n recursion_depth + 1,\n ontology=ontology,\n )\n res = f\"Inverse({string})\"\n else:\n print(\n f\"*** WARNING: unknown restriction property: {expr.property!r}\"\n )\n res = fmt(expr.property)\n\n if not rlabel:\n pass\n elif expr.type in (owlready2.MIN, owlready2.MAX, owlready2.EXACTLY):\n res += f\" {rlabel} {expr.cardinality}\"\n elif expr.type in (\n owlready2.SOME,\n owlready2.ONLY,\n owlready2.VALUE,\n owlready2.HAS_SELF,\n ):\n res += f\" {rlabel}\"\n else:\n print(\"*** WARNING: unknown relation\", expr, rlabel)\n res += f\" {rlabel}\"\n\n if not exclude_object:\n string = asstring(\n expr.value, link, recursion_depth + 1, ontology=ontology\n )\n res += (\n f\" {string!r}\" if isinstance(expr.value, str) else f\" {string}\"\n )\n return res\n if isinstance(expr, owlready2.Or):\n res = \" or \".join(\n [\n asstring(c, link, recursion_depth + 1, ontology=ontology)\n for c in expr.Classes\n ]\n )\n return res if recursion_depth == 0 else f\"({res})\"\n if isinstance(expr, owlready2.And):\n res = \" and \".join(\n [\n asstring(c, link, recursion_depth + 1, ontology=ontology)\n for c in expr.Classes\n ]\n )\n return res if recursion_depth == 0 else f\"({res})\"\n if isinstance(expr, owlready2.Not):\n string = asstring(\n expr.Class, link, recursion_depth + 1, ontology=ontology\n )\n return f\"not 
{string}\"\n if isinstance(expr, owlready2.ThingClass):\n return fmt(expr)\n if isinstance(expr, owlready2.PropertyClass):\n return fmt(expr)\n if isinstance(expr, owlready2.Thing): # instance (individual)\n return fmt(expr)\n if isinstance(expr, owlready2.class_construct.Inverse):\n return f\"inverse({fmt(expr.property)})\"\n if isinstance(expr, owlready2.disjoint.AllDisjoint):\n return fmt(expr)\n\n if isinstance(expr, (bool, int, float)):\n return repr(expr)\n # Check for subclasses\n if inspect.isclass(expr):\n if issubclass(expr, (bool, int, float, str)):\n return fmt(expr.__class__.__name__)\n if issubclass(expr, datetime.date):\n return \"date\"\n if issubclass(expr, datetime.time):\n return \"datetime\"\n if issubclass(expr, datetime.datetime):\n return \"datetime\"\n\n raise RuntimeError(f\"Unknown expression: {expr!r} (type: {type(expr)!r})\")\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.camelsplit","title":"camelsplit(string)
","text":"Splits CamelCase string before upper case letters (except if there is a sequence of upper case letters).
Source code in ontopy/utils.py
def camelsplit(string):\n \"\"\"Splits CamelCase string before upper case letters (except\n if there is a sequence of upper case letters).\"\"\"\n if len(string) < 2:\n return string\n result = []\n prev_lower = False\n prev_isspace = True\n char = string[0]\n for next_char in string[1:]:\n if (not prev_isspace and char.isupper() and next_char.islower()) or (\n prev_lower and char.isupper()\n ):\n result.append(\" \")\n result.append(char)\n prev_lower = char.islower()\n prev_isspace = char.isspace()\n char = next_char\n result.append(char)\n return \"\".join(result)\n
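For example (assuming camelsplit is imported from ontopy.utils):

from ontopy.utils import camelsplit

camelsplit("CamelCaseString")  # -> 'Camel Case String'
camelsplit("EMMOPython")       # -> 'EMMO Python'  (upper-case runs are kept together)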
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.convert_imported","title":"convert_imported(input_ontology, output_ontology, input_format=None, output_format='xml', url_from_catalog=None, catalog_file='catalog-v001.xml')
","text":"Convert imported ontologies.
Store the output in a directory structure matching the source files. This requires catalog file(s) to be present.
Warning
To convert to Turtle (.ttl
) format, you must have installed rdflib>=6.0.0
. See Known issues for more information.
Parameters:
input_ontology (Union[Path, str]): input ontology file name. (required)
output_ontology (Union[Path, str]): output ontology file path. The directory part of output will be the root of the generated directory structure. (required)
input_format (Optional[str]): input format. The default is to infer from input_ontology. Default: None
output_format (str): output format. The default is to infer from output_ontology. Default: 'xml'
url_from_catalog (Optional[bool]): Whether to read urls from catalog file. If False, the catalog file will be used if it exists. Default: None
catalog_file (str): name of catalog file, that maps ontology IRIs to local file names. Default: 'catalog-v001.xml'
Source code in ontopy/utils.py
def convert_imported( # pylint: disable=too-many-arguments,too-many-locals\n input_ontology: \"Union[Path, str]\",\n output_ontology: \"Union[Path, str]\",\n input_format: \"Optional[str]\" = None,\n output_format: str = \"xml\",\n url_from_catalog: \"Optional[bool]\" = None,\n catalog_file: str = \"catalog-v001.xml\",\n):\n \"\"\"Convert imported ontologies.\n\n Store the output in a directory structure matching the source\n files. This require catalog file(s) to be present.\n\n Warning:\n To convert to Turtle (`.ttl`) format, you must have installed\n `rdflib>=6.0.0`. See [Known issues](../../../#known-issues) for\n more information.\n\n Args:\n input_ontology: input ontology file name\n output_ontology: output ontology file path. The directory part of\n `output` will be the root of the generated directory structure\n input_format: input format. The default is to infer from\n `input_ontology`\n output_format: output format. The default is to infer from\n `output_ontology`\n url_from_catalog: Whether to read urls form catalog file.\n If False, the catalog file will be used if it exists.\n catalog_file: name of catalog file, that maps ontology IRIs to\n local file names\n \"\"\"\n inroot = os.path.dirname(os.path.abspath(input_ontology))\n outroot = os.path.dirname(os.path.abspath(output_ontology))\n outext = os.path.splitext(output_ontology)[1]\n\n if url_from_catalog is None:\n url_from_catalog = os.path.exists(os.path.join(inroot, catalog_file))\n\n if url_from_catalog:\n iris, dirs = read_catalog(\n inroot, catalog_file=catalog_file, recursive=True, return_paths=True\n )\n\n # Create output dirs and copy catalog files\n for indir in dirs:\n outdir = os.path.normpath(\n os.path.join(outroot, os.path.relpath(indir, inroot))\n )\n if not os.path.exists(outdir):\n os.makedirs(outdir)\n with open(\n os.path.join(indir, catalog_file), mode=\"rt\", encoding=\"utf8\"\n ) as handle:\n content = handle.read()\n for path in iris.values():\n newpath = os.path.splitext(path)[0] + outext\n content = content.replace(\n os.path.basename(path), os.path.basename(newpath)\n )\n with open(\n os.path.join(outdir, catalog_file), mode=\"wt\", encoding=\"utf8\"\n ) as handle:\n handle.write(content)\n else:\n iris = {}\n\n outpaths = set()\n\n def recur(graph, outext):\n for imported in graph.objects(\n predicate=URIRef(\"http://www.w3.org/2002/07/owl#imports\")\n ):\n inpath = iris.get(str(imported), str(imported))\n if inpath.startswith((\"http://\", \"https://\", \"ftp://\")):\n outpath = os.path.join(outroot, inpath.split(\"/\")[-1])\n else:\n outpath = os.path.join(outroot, os.path.relpath(inpath, inroot))\n outpath = os.path.splitext(os.path.normpath(outpath))[0] + outext\n if outpath not in outpaths:\n outpaths.add(outpath)\n fmt = (\n input_format\n if input_format\n else guess_format(inpath, fmap=FMAP)\n )\n new_graph = Graph()\n new_graph.parse(iris.get(inpath, inpath), format=fmt)\n new_graph.serialize(destination=outpath, format=output_format)\n recur(new_graph, outext)\n\n # Write output files\n fmt = (\n input_format\n if input_format\n else guess_format(input_ontology, fmap=FMAP)\n )\n\n if not _validate_installed_version(\n package=\"rdflib\", min_version=\"6.0.0\"\n ) and (output_format == FMAP.get(\"ttl\", \"\") or outext == \"ttl\"):\n from rdflib import ( # pylint: disable=import-outside-toplevel\n __version__ as __rdflib_version__,\n )\n\n warnings.warn(\n IncompatibleVersion(\n \"To correctly convert to Turtle format, rdflib must be \"\n \"version 6.0.0 or greater, however, the detected 
rdflib \"\n \"version used by your Python interpreter is \"\n f\"{__rdflib_version__!r}. For more information see the \"\n \"'Known issues' section of the README.\"\n )\n )\n\n graph = Graph()\n try:\n graph.parse(input_ontology, format=fmt)\n except PluginException as exc: # Add input_ontology to exception msg\n raise PluginException(\n f'Cannot load \"{input_ontology}\": {exc.msg}'\n ).with_traceback(exc.__traceback__)\n graph.serialize(destination=output_ontology, format=output_format)\n recur(graph, outext)\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.copy_annotation","title":"copy_annotation(onto, src, dst)
","text":"In all classes and properties in onto
, copy annotation src
to dst
.
Parameters:
onto: Ontology to work on. (required)
src: Name of source annotation. (required)
dst: Name or IRI of destination annotation. Use IRI if the destination annotation is not already in the ontology. (required)
Source code in ontopy/utils.py
def copy_annotation(onto, src, dst):\n \"\"\"In all classes and properties in `onto`, copy annotation `src` to `dst`.\n\n Arguments:\n onto: Ontology to work on.\n src: Name of source annotation.\n dst: Name or IRI of destination annotation. Use IRI if the\n destination annotation is not already in the ontology.\n \"\"\"\n if onto.world[src]:\n src = onto.world[src]\n else:\n src = onto[src]\n\n if onto.world[dst]:\n dst = onto.world[dst]\n elif dst in onto:\n dst = onto[dst]\n else:\n if \"://\" not in dst:\n raise ValueError(\n \"new destination annotation property must be provided as \"\n \"a full IRI\"\n )\n name = min(dst.rsplit(\"#\")[-1], dst.rsplit(\"/\")[-1], key=len)\n iri = dst\n dst = onto.new_annotation_property(name, owlready2.AnnotationProperty)\n dst.iri = iri\n\n for e in onto.get_entities():\n new = getattr(e, src.name).first()\n if new and new not in getattr(e, dst.name):\n getattr(e, dst.name).append(new)\n
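For illustration, copying skos:altLabel values into a new custom annotation property (the destination IRI below is purely hypothetical):

from ontopy.utils import copy_annotation

# The destination is given as a full IRI since it does not yet exist in the ontology.
copy_annotation(onto, "altLabel", "http://example.com/vocab#legacyLabel")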
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.directory_layout","title":"directory_layout(onto)
","text":"Analyse IRIs of imported ontologies and suggested a directory layout for saving recursively.
Parameters:
onto: Ontology to analyse. (required)
Returns:
layout: A dict mapping ontology objects to relative path names derived from the ontology IRIs. No file name extensions are added.
Examples:
Assume that our ontology onto
has IRI ex:onto
. If it directly or indirectly imports ontologies with IRIs ex:A/ontoA
, ex:B/ontoB
and ex:A/C/ontoC
, this function will return the following dict:
{\n onto: \"onto\",\n ontoA: \"A/ontoA\",\n ontoB: \"B/ontoB\",\n ontoC: \"A/C/ontoC\",\n}\n
where ontoA
, ontoB
and ontoC
are imported Ontology objects.
ontopy/utils.py
def directory_layout(onto):\n \"\"\"Analyse IRIs of imported ontologies and suggested a directory\n layout for saving recursively.\n\n Arguments:\n onto: Ontology to analyse.\n\n Returns:\n layout: A dict mapping ontology objects to relative path names\n derived from the ontology IRIs. No file name extension are\n added.\n\n Example:\n Assume that our ontology `onto` has IRI `ex:onto`. If it directly\n or indirectly imports ontologies with IRIs `ex:A/ontoA`, `ex:B/ontoB`\n and `ex:A/C/ontoC`, this function will return the following dict:\n\n {\n onto: \"onto\",\n ontoA: \"A/ontoA\",\n ontoB: \"B/ontoB\",\n ontoC: \"A/C/ontoC\",\n }\n\n where `ontoA`, `ontoB` and `ontoC` are imported Ontology objects.\n \"\"\"\n all_imported = [\n imported.base_iri for imported in onto.indirectly_imported_ontologies()\n ]\n # get protocol and domain of all imported ontologies\n namespace_roots = set()\n for iri in all_imported:\n protocol, domain, *_ = urllib.parse.urlsplit(iri)\n namespace_roots.add(\"://\".join([protocol, domain]))\n\n def recur(o):\n baseiri = o.base_iri.rstrip(\"/#\")\n\n # Some heuristics here to reproduce the EMMO layout.\n # It might not apply to all ontologies, so maybe it should be\n # made optional? Alternatively, change EMMO ontology IRIs to\n # match the directory layout.\n emmolayout = (\n any(\n oo.base_iri.startswith(baseiri + \"/\")\n for oo in o.imported_ontologies\n )\n or o.base_iri == \"http://emmo.info/emmo/mereocausality#\"\n )\n\n layout[o] = (\n baseiri + \"/\" + os.path.basename(baseiri) if emmolayout else baseiri\n )\n for imported in o.imported_ontologies:\n if imported not in layout:\n recur(imported)\n\n layout = {}\n recur(onto)\n # Strip off initial common prefix from all paths\n if len(namespace_roots) == 1:\n prefix = os.path.commonprefix(list(layout.values()))\n for o, path in layout.items():\n layout[o] = path[len(prefix) :].lstrip(\"/\")\n else:\n for o, path in layout.items():\n for namespace_root in namespace_roots:\n if path.startswith(namespace_root):\n layout[o] = (\n urllib.parse.urlsplit(namespace_root)[1]\n + path[len(namespace_root) :]\n )\n\n return layout\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.english","title":"english(string)
","text":"Returns string
as an English localised string.
ontopy/utils.py
def english(string):\n \"\"\"Returns `string` as an English location string.\"\"\"\n return owlready2.locstr(string, lang=\"en\")\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.get_format","title":"get_format(outfile, default, fmt=None)
","text":"Infer format from outfile and format.
Source code in ontopy/utils.py
def get_format(outfile: str, default: str, fmt: str = None):\n \"\"\"Infer format from outfile and format.\"\"\"\n if fmt is None:\n fmt = os.path.splitext(outfile)[1]\n if not fmt:\n fmt = default\n return fmt.lstrip(\".\")\n
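Two small examples of the inference logic:

from ontopy.utils import get_format

get_format("onto.ttl", default="xml")  # -> 'ttl'  (taken from the file extension)
get_format("onto", default="xml")      # -> 'xml'  (no extension, falls back to the default)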
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.get_label","title":"get_label(entity)
","text":"Returns the label of an entity.
Source code in ontopy/utils.py
def get_label(entity):\n \"\"\"Returns the label of an entity.\"\"\"\n # pylint: disable=too-many-return-statements\n if hasattr(entity, \"namespace\"):\n onto = entity.namespace.ontology\n if onto.label_annotations:\n for la in onto.label_annotations:\n try:\n label = entity[la]\n if label:\n return get_preferred_language(label)\n except (NoSuchLabelError, AttributeError, TypeError):\n continue\n if hasattr(entity, \"prefLabel\") and entity.prefLabel:\n return get_preferred_language(entity.prefLabel)\n if hasattr(entity, \"label\") and entity.label:\n return get_preferred_language(entity.label)\n if hasattr(entity, \"__name__\"):\n return entity.__name__\n if hasattr(entity, \"name\"):\n return str(entity.name)\n if isinstance(entity, str):\n return entity\n return repr(entity)\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.get_preferred_language","title":"get_preferred_language(langstrings, lang=None)
","text":"Given a list of localised strings, return the one in language lang
. If lang
is not given, use ontopy.utils.PREFERRED_LANGUAGE
. If no match is found, return the first one with no language tag, or fall back to the first string.
The preferred language is stored as a module variable. You can change it with:
import ontopy.utils
ontopy.utils.PREFERRED_LANGUAGE = \"en\"
Source code in ontopy/utils.py
def get_preferred_language(langstrings: list, lang=None) -> str:\n \"\"\"Given a list of localised strings, return the one in language\n `lang`. If `lang` is not given, use\n `ontopy.utils.PREFERRED_LANGUAGE`. If no one match is found,\n return the first one with no language tag or fallback to the first\n string.\n\n The preferred language is stored as a module variable. You can\n change it with:\n\n >>> import ontopy.utils\n >>> ontopy.utils.PREFERRED_LANGUAGE = \"en\"\n\n \"\"\"\n if lang is None:\n lang = PREFERRED_LANGUAGE\n for langstr in langstrings:\n if hasattr(langstr, \"lang\") and langstr.lang == lang:\n return str(langstr)\n for langstr in langstrings:\n if not hasattr(langstr, \"lang\"):\n return langstr\n return str(langstrings[0])\n
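A small example using owlready2 localised strings (the labels below are illustrative):

from owlready2 import locstr
from ontopy.utils import get_preferred_language

labels = [locstr("Atom", lang="en"), locstr("Atome", lang="fr")]
get_preferred_language(labels, lang="fr")  # -> 'Atome'
get_preferred_language(labels)             # uses ontopy.utils.PREFERRED_LANGUAGE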
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.getiriname","title":"getiriname(iri)
","text":"Return name part of an IRI.
The name part is what follows after the last slash or hash.
Source code in ontopy/utils.py
def getiriname(iri):\n \"\"\"Return name part of an IRI.\n\n The name part is what follows after the last slash or hash.\n \"\"\"\n res = urllib.parse.urlparse(iri)\n return res.fragment if res.fragment else res.path.rsplit(\"/\", 1)[-1]\n
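For example (the IRIs are illustrative):

from ontopy.utils import getiriname

getiriname("http://emmo.info/emmo#Atom")    # -> 'Atom'  (fragment part)
getiriname("http://example.com/onto/Atom")  # -> 'Atom'  (last path segment)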
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.infer_version","title":"infer_version(iri, version_iri)
","text":"Infer version from IRI and versionIRI.
Source code in ontopy/utils.py
def infer_version(iri, version_iri):\n \"\"\"Infer version from IRI and versionIRI.\"\"\"\n if str(version_iri[: len(iri)]) == str(iri):\n version = version_iri[len(iri) :].lstrip(\"/\")\n else:\n j = 0\n version_parts = []\n for i, char in enumerate(iri):\n while i + j < len(version_iri) and char != version_iri[i + j]:\n version_parts.append(version_iri[i + j])\n j += 1\n version = \"\".join(version_parts).lstrip(\"/\").rstrip(\"/#\")\n\n if \"/\" in version:\n raise ValueError(\n f\"version IRI {version_iri!r} is not consistent with base IRI \"\n f\"{iri!r}\"\n )\n return version\n
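For example, when the version IRI simply extends the base IRI:

from ontopy.utils import infer_version

infer_version("http://emmo.info/emmo", "http://emmo.info/emmo/1.0.0-beta4")  # -> '1.0.0-beta4'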
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.isinteractive","title":"isinteractive()
","text":"Returns true if we are running from an interactive interpreater, false otherwise.
Source code in ontopy/utils.py
def isinteractive():\n \"\"\"Returns true if we are running from an interactive interpreater,\n false otherwise.\"\"\"\n return bool(\n hasattr(__builtins__, \"__IPYTHON__\")\n or sys.flags.interactive\n or hasattr(sys, \"ps1\")\n )\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.normalise_url","title":"normalise_url(url)
","text":"Returns url
in a normalised form.
ontopy/utils.py
def normalise_url(url):\n \"\"\"Returns `url` in a normalised form.\"\"\"\n splitted = urllib.parse.urlsplit(url)\n components = list(splitted)\n components[2] = os.path.normpath(splitted.path)\n return urllib.parse.urlunsplit(components)\n
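For example, redundant path segments are collapsed (the path is normalised with os.path.normpath, so the exact result is platform dependent):

from ontopy.utils import normalise_url

normalise_url("http://example.com/onto/./sub/../emmo.ttl")
# -> 'http://example.com/onto/emmo.ttl' on POSIX systems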
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.read_catalog","title":"read_catalog(uri, catalog_file='catalog-v001.xml', baseuri=None, recursive=False, relative_to=None, return_paths=False, visited_iris=None, visited_paths=None)
","text":"Reads a Prot\u00e8g\u00e8 catalog file and returns as a dict.
The returned dict maps the ontology IRI (name) to its actual location (URI). The location can be either an absolute file path or a HTTP, HTTPS or FTP web location.
uri
is a string locating the catalog file. It may be a http or https web location or a file path.
The catalog_file
argument specifies the catalog file name and is used if path
is used when recursive
is true or when path
is a directory.
If baseuri
is not None, it will be used as the base URI for the mapped locations. Otherwise it defaults to uri
with its final component omitted.
If recursive
is true, catalog files in sub-folders are also read.
If relative_to
is given, the paths in the returned dict will be relative to this path.
If return_paths
is true, a set of directory paths to source files is returned in addition to the default dict.
The visited_uris
and visited_paths
arguments are only intended for internal use to avoid infinite recursions.
A ReadCatalogError is raised if the catalog file cannot be found.
Source code in ontopy/utils.py
def read_catalog( # pylint: disable=too-many-locals,too-many-statements,too-many-arguments\n uri,\n catalog_file=\"catalog-v001.xml\",\n baseuri=None,\n recursive=False,\n relative_to=None,\n return_paths=False,\n visited_iris=None,\n visited_paths=None,\n):\n \"\"\"Reads a Prot\u00e8g\u00e8 catalog file and returns as a dict.\n\n The returned dict maps the ontology IRI (name) to its actual\n location (URI). The location can be either an absolute file path\n or a HTTP, HTTPS or FTP web location.\n\n `uri` is a string locating the catalog file. It may be a http or\n https web location or a file path.\n\n The `catalog_file` argument spesifies the catalog file name and is\n used if `path` is used when `recursive` is true or when `path` is a\n directory.\n\n If `baseuri` is not None, it will be used as the base URI for the\n mapped locations. Otherwise it defaults to `uri` with its final\n component omitted.\n\n If `recursive` is true, catalog files in sub-folders are also read.\n\n if `relative_to` is given, the paths in the returned dict will be\n relative to this path.\n\n If `return_paths` is true, a set of directory paths to source\n files is returned in addition to the default dict.\n\n The `visited_uris` and `visited_paths` arguments are only intended for\n internal use to avoid infinite recursions.\n\n A ReadCatalogError is raised if the catalog file cannot be found.\n \"\"\"\n # pylint: disable=too-many-branches\n\n # Protocols supported by urllib.request\n web_protocols = \"http://\", \"https://\", \"ftp://\"\n uri = str(uri) # in case uri is a pathlib.Path object\n iris = visited_iris if visited_iris else {}\n dirs = visited_paths if visited_paths else set()\n if uri in iris:\n return (iris, dirs) if return_paths else iris\n\n if uri.startswith(web_protocols):\n # Call read_catalog() recursively to ensure that the temporary\n # file is properly cleaned up\n with tempfile.TemporaryDirectory() as tmpdir:\n destfile = os.path.join(tmpdir, catalog_file)\n uris = { # maps uri to base\n uri: (baseuri if baseuri else os.path.dirname(uri)),\n f'{uri.rstrip(\"/\")}/{catalog_file}': (\n baseuri if baseuri else uri.rstrip(\"/\")\n ),\n f\"{os.path.dirname(uri)}/{catalog_file}\": (\n os.path.dirname(uri)\n ),\n }\n for url, base in uris.items():\n try:\n # The URL can only contain the schemes from `web_protocols`.\n _, msg = urllib.request.urlretrieve(url, destfile) # nosec\n except urllib.request.URLError:\n continue\n else:\n if \"Content-Length\" not in msg:\n continue\n\n return read_catalog(\n destfile,\n catalog_file=catalog_file,\n baseuri=baseuri if baseuri else base,\n recursive=recursive,\n return_paths=return_paths,\n visited_iris=iris,\n visited_paths=dirs,\n )\n raise ReadCatalogError(\n \"Cannot download catalog from URLs: \" + \", \".join(uris)\n )\n elif uri.startswith(\"file://\"):\n path = uri[7:]\n else:\n path = uri\n\n if os.path.isdir(path):\n dirname = os.path.abspath(path)\n filepath = os.path.join(dirname, catalog_file)\n else:\n catalog_file = os.path.basename(path)\n filepath = os.path.abspath(path)\n dirname = os.path.dirname(filepath)\n\n def gettag(entity):\n return entity.tag.rsplit(\"}\", 1)[-1]\n\n def load_catalog(filepath):\n if not os.path.exists(filepath):\n raise ReadCatalogError(\"No such catalog file: \" + filepath)\n dirname = os.path.normpath(os.path.dirname(filepath))\n dirs.add(baseuri if baseuri else dirname)\n xml = ET.parse(filepath)\n root = xml.getroot()\n if gettag(root) != \"catalog\":\n raise ReadCatalogError(\n f\"expected root tag of catalog 
file {filepath!r} to be \"\n '\"catalog\"'\n )\n for child in root:\n if gettag(child) == \"uri\":\n load_uri(child, dirname)\n elif gettag(child) == \"group\":\n for uri in child:\n load_uri(uri, dirname)\n\n def load_uri(uri, dirname):\n if gettag(uri) != \"uri\":\n raise ValueError(f\"{gettag(uri)!r} should be 'uri'.\")\n uri_as_str = uri.attrib[\"uri\"]\n if uri_as_str.startswith(web_protocols):\n url = uri_as_str\n else:\n uri_as_str = os.path.normpath(uri_as_str)\n if baseuri and baseuri.startswith(web_protocols):\n url = f\"{baseuri}/{uri_as_str}\"\n else:\n url = os.path.join(baseuri if baseuri else dirname, uri_as_str)\n\n iris.setdefault(uri.attrib[\"name\"], url)\n if recursive:\n directory = os.path.dirname(url)\n if directory not in dirs:\n catalog = os.path.join(directory, catalog_file)\n if catalog.startswith(web_protocols):\n iris_, dirs_ = read_catalog(\n catalog,\n catalog_file=catalog_file,\n baseuri=None,\n recursive=recursive,\n return_paths=True,\n visited_iris=iris,\n visited_paths=dirs,\n )\n iris.update(iris_)\n dirs.update(dirs_)\n else:\n load_catalog(catalog)\n\n load_catalog(filepath)\n\n if relative_to:\n for iri, path in iris.items():\n iris[iri] = os.path.relpath(path, relative_to)\n\n if return_paths:\n return iris, dirs\n return iris\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.rename_iris","title":"rename_iris(onto, annotation='prefLabel')
","text":"For IRIs with the given annotation, change the name of the entity to the value of the annotation. Also add an skos:exactMatch
annotation referring to the old IRI.
ontopy/utils.py
def rename_iris(onto, annotation=\"prefLabel\"):\n \"\"\"For IRIs with the given annotation, change the name of the entity\n to the value of the annotation. Also add an `skos:exactMatch`\n annotation referring to the old IRI.\n \"\"\"\n exactMatch = onto._abbreviate( # pylint:disable=invalid-name\n \"http://www.w3.org/2004/02/skos/core#exactMatch\"\n )\n for entity in onto.get_entities():\n if hasattr(entity, annotation) and getattr(entity, annotation):\n onto._add_data_triple_spod(\n entity.storid, exactMatch, entity.iri, \"\"\n )\n entity.name = getattr(entity, annotation).first()\n
"},{"location":"api_reference/ontopy/utils/#ontopy.utils.write_catalog","title":"write_catalog(irimap, output='catalog-v001.xml', directory='.', relative_paths=True, append=False)
","text":"Write catalog file do disk.
Parameters:
irimap (dict): dict mapping ontology IRIs (name) to actual locations (URIs). It has the same format as the dict returned by read_catalog(). (required)
output (Union[str, Path]): name of catalog file. Default: 'catalog-v001.xml'
directory (Union[str, Path]): directory path to the catalog file. Only used if output is a relative path. Default: '.'
relative_paths (bool): whether to write file paths inside the catalog as relative paths (instead of absolute paths). Default: True
append (bool): whether to append to a possibly existing catalog file. If false, an existing file will be overwritten. Default: False
Source code in ontopy/utils.py
def write_catalog(\n irimap: dict,\n output: \"Union[str, Path]\" = \"catalog-v001.xml\",\n directory: \"Union[str, Path]\" = \".\",\n relative_paths: bool = True,\n append: bool = False,\n): # pylint: disable=redefined-builtin\n \"\"\"Write catalog file do disk.\n\n Args:\n irimap: dict mapping ontology IRIs (name) to actual locations\n (URIs). It has the same format as the dict returned by\n read_catalog().\n output: name of catalog file.\n directory: directory path to the catalog file. Only used if `output`\n is a relative path.\n relative_paths: whether to write file paths inside the catalog as\n relative paths (instead of absolute paths).\n append: whether to append to a possible existing catalog file.\n If false, an existing file will be overwritten.\n \"\"\"\n filename = Path(directory) / output\n\n if relative_paths:\n irimap = irimap.copy() # don't modify provided irimap\n for iri, path in irimap.items():\n if os.path.isabs(path):\n irimap[iri] = os.path.relpath(path, filename.parent)\n\n if filename.exists() and append:\n iris = read_catalog(filename)\n iris.update(irimap)\n irimap = iris\n\n res = [\n '<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>',\n '<catalog prefer=\"public\" '\n 'xmlns=\"urn:oasis:names:tc:entity:xmlns:xml:catalog\">',\n ' <group id=\"Folder Repository, directory=, recursive=true, '\n 'Auto-Update=false, version=2\" prefer=\"public\" xml:base=\"\">',\n ]\n for iri, path in irimap.items():\n res.append(f' <uri name=\"{iri}\" uri=\"{path}\"/>')\n res.append(\" </group>\")\n res.append(\"</catalog>\")\n with open(filename, \"wt\") as handle:\n handle.write(\"\\n\".join(res) + \"\\n\")\n
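A minimal sketch mapping a single ontology IRI to a local file (IRI and paths are illustrative):

from ontopy.utils import write_catalog

write_catalog(
    {"http://example.com/myonto#": "myonto.ttl"},
    output="catalog-v001.xml",
    directory=".",
)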
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/","title":"factppgraph","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph--ontopyfactpluspluswrapperfactppgraph","title":"ontopy.factpluspluswrapper.factppgraph
","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph","title":" FaCTPPGraph
","text":"Class for running the FaCT++ reasoner (using OwlApiInterface) and postprocessing the resulting inferred ontology.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph--parameters","title":"Parameters","text":"graph : owlapi.Graph instance The graph to be inferred.
Source code in ontopy/factpluspluswrapper/factppgraph.py
class FaCTPPGraph:\n \"\"\"Class for running the FaCT++ reasoner (using OwlApiInterface) and\n postprocessing the resulting inferred ontology.\n\n Parameters\n ----------\n graph : owlapi.Graph instance\n The graph to be inferred.\n \"\"\"\n\n def __init__(self, graph):\n self.graph = graph\n self._inferred = None\n self._namespaces = None\n self._base_iri = None\n\n @property\n def inferred(self):\n \"\"\"The current inferred graph.\"\"\"\n if self._inferred is None:\n self._inferred = self.raw_inferred_graph()\n return self._inferred\n\n @property\n def base_iri(self):\n \"\"\"Base iri of inferred ontology.\"\"\"\n if self._base_iri is None:\n self._base_iri = URIRef(self.asserted_base_iri() + \"-inferred\")\n return self._base_iri\n\n @base_iri.setter\n def base_iri(self, value):\n \"\"\"Assign inferred base iri.\"\"\"\n self._base_iri = URIRef(value)\n\n @property\n def namespaces(self):\n \"\"\"Namespaces defined in the original graph.\"\"\"\n if self._namespaces is None:\n self._namespaces = dict(self.graph.namespaces()).copy()\n self._namespaces[\"\"] = self.base_iri\n return self._namespaces\n\n def asserted_base_iri(self):\n \"\"\"Returns the base iri or the original graph.\"\"\"\n return URIRef(dict(self.graph.namespaces()).get(\"\", \"\").rstrip(\"#/\"))\n\n def raw_inferred_graph(self):\n \"\"\"Returns the raw non-postprocessed inferred ontology as a rdflib\n graph.\"\"\"\n return OwlApiInterface().reason(self.graph)\n\n def inferred_graph(self):\n \"\"\"Returns the postprocessed inferred graph.\"\"\"\n self.add_base_annotations()\n self.set_namespace()\n self.clean_base()\n self.remove_nothing_is_nothing()\n self.clean_ancestors()\n return self.inferred\n\n def add_base_annotations(self):\n \"\"\"Copy base annotations from original graph to the inferred graph.\"\"\"\n base = self.base_iri\n inferred = self.inferred\n for _, predicate, obj in self.graph.triples(\n (self.asserted_base_iri(), None, None)\n ):\n if predicate == OWL.versionIRI:\n version = obj.rsplit(\"/\", 1)[-1]\n obj = URIRef(f\"{base}/{version}\")\n inferred.add((base, predicate, obj))\n\n def set_namespace(self):\n \"\"\"Override namespace of inferred graph with the namespace of the\n original graph.\n \"\"\"\n inferred = self.inferred\n for key, value in self.namespaces.items():\n inferred.namespace_manager.bind(\n key, value, override=True, replace=True\n )\n\n def clean_base(self):\n \"\"\"Remove all relations `s? 
a owl:Ontology` where `s?` is not\n `base_iri`.\n \"\"\"\n inferred = self.inferred\n for (\n subject,\n predicate,\n obj,\n ) in inferred.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n inferred.remove((subject, predicate, obj))\n inferred.add((self.base_iri, RDF.type, OWL.Ontology))\n\n def remove_nothing_is_nothing(self):\n \"\"\"Remove superfluid relation in inferred graph:\n\n owl:Nothing rdfs:subClassOf owl:Nothing\n \"\"\"\n triple = OWL.Nothing, RDFS.subClassOf, OWL.Nothing\n inferred = self.inferred\n if triple in inferred:\n inferred.remove(triple)\n\n def clean_ancestors(self):\n \"\"\"Remove redundant rdfs:subClassOf relations in inferred graph.\"\"\"\n inferred = self.inferred\n for ( # pylint: disable=too-many-nested-blocks\n subject\n ) in inferred.subjects(RDF.type, OWL.Class):\n if isinstance(subject, URIRef):\n parents = set(\n parent\n for parent in inferred.objects(subject, RDFS.subClassOf)\n if isinstance(parent, URIRef)\n )\n if len(parents) > 1:\n for parent in parents:\n ancestors = set(\n inferred.transitive_objects(parent, RDFS.subClassOf)\n )\n for entity in parents:\n if entity != parent and entity in ancestors:\n triple = subject, RDFS.subClassOf, entity\n if triple in inferred:\n inferred.remove(triple)\n
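A minimal usage sketch, assuming Java and the bundled OWLAPI wrapper are available (the file names are placeholders):

```python
# Hypothetical example: run FaCT++ on an asserted ontology and save the
# postprocessed inferred ontology.
import rdflib

from ontopy.factpluspluswrapper.factppgraph import FaCTPPGraph

asserted = rdflib.Graph()
asserted.parse("my-ontology.ttl", format="turtle")

# inferred_graph() runs the reasoner and applies the postprocessing steps
# documented below (base annotations, namespaces, redundant subclass removal).
inferred = FaCTPPGraph(asserted).inferred_graph()
inferred.serialize("my-ontology-inferred.ttl", format="turtle")
```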
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.base_iri","title":"base_iri
property
writable
","text":"Base iri of inferred ontology.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.inferred","title":"inferred
property
readonly
","text":"The current inferred graph.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.namespaces","title":"namespaces
property
readonly
","text":"Namespaces defined in the original graph.
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.add_base_annotations","title":"add_base_annotations(self)
","text":"Copy base annotations from original graph to the inferred graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def add_base_annotations(self):\n \"\"\"Copy base annotations from original graph to the inferred graph.\"\"\"\n base = self.base_iri\n inferred = self.inferred\n for _, predicate, obj in self.graph.triples(\n (self.asserted_base_iri(), None, None)\n ):\n if predicate == OWL.versionIRI:\n version = obj.rsplit(\"/\", 1)[-1]\n obj = URIRef(f\"{base}/{version}\")\n inferred.add((base, predicate, obj))\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.asserted_base_iri","title":"asserted_base_iri(self)
","text":"Returns the base iri or the original graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def asserted_base_iri(self):\n \"\"\"Returns the base iri or the original graph.\"\"\"\n return URIRef(dict(self.graph.namespaces()).get(\"\", \"\").rstrip(\"#/\"))\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.clean_ancestors","title":"clean_ancestors(self)
","text":"Remove redundant rdfs:subClassOf relations in inferred graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def clean_ancestors(self):\n \"\"\"Remove redundant rdfs:subClassOf relations in inferred graph.\"\"\"\n inferred = self.inferred\n for ( # pylint: disable=too-many-nested-blocks\n subject\n ) in inferred.subjects(RDF.type, OWL.Class):\n if isinstance(subject, URIRef):\n parents = set(\n parent\n for parent in inferred.objects(subject, RDFS.subClassOf)\n if isinstance(parent, URIRef)\n )\n if len(parents) > 1:\n for parent in parents:\n ancestors = set(\n inferred.transitive_objects(parent, RDFS.subClassOf)\n )\n for entity in parents:\n if entity != parent and entity in ancestors:\n triple = subject, RDFS.subClassOf, entity\n if triple in inferred:\n inferred.remove(triple)\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.clean_base","title":"clean_base(self)
","text":"Remove all relations s? a owl:Ontology
where s?
is not base_iri
.
ontopy/factpluspluswrapper/factppgraph.py
def clean_base(self):\n \"\"\"Remove all relations `s? a owl:Ontology` where `s?` is not\n `base_iri`.\n \"\"\"\n inferred = self.inferred\n for (\n subject,\n predicate,\n obj,\n ) in inferred.triples( # pylint: disable=not-an-iterable\n (None, RDF.type, OWL.Ontology)\n ):\n inferred.remove((subject, predicate, obj))\n inferred.add((self.base_iri, RDF.type, OWL.Ontology))\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.inferred_graph","title":"inferred_graph(self)
","text":"Returns the postprocessed inferred graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def inferred_graph(self):\n \"\"\"Returns the postprocessed inferred graph.\"\"\"\n self.add_base_annotations()\n self.set_namespace()\n self.clean_base()\n self.remove_nothing_is_nothing()\n self.clean_ancestors()\n return self.inferred\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.raw_inferred_graph","title":"raw_inferred_graph(self)
","text":"Returns the raw non-postprocessed inferred ontology as a rdflib graph.
Source code inontopy/factpluspluswrapper/factppgraph.py
def raw_inferred_graph(self):\n \"\"\"Returns the raw non-postprocessed inferred ontology as a rdflib\n graph.\"\"\"\n return OwlApiInterface().reason(self.graph)\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.remove_nothing_is_nothing","title":"remove_nothing_is_nothing(self)
","text":"Remove superfluid relation in inferred graph:
owl:Nothing rdfs:subClassOf owl:Nothing
Source code in ontopy/factpluspluswrapper/factppgraph.py
def remove_nothing_is_nothing(self):\n \"\"\"Remove superfluid relation in inferred graph:\n\n owl:Nothing rdfs:subClassOf owl:Nothing\n \"\"\"\n triple = OWL.Nothing, RDFS.subClassOf, OWL.Nothing\n inferred = self.inferred\n if triple in inferred:\n inferred.remove(triple)\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FaCTPPGraph.set_namespace","title":"set_namespace(self)
","text":"Override namespace of inferred graph with the namespace of the original graph.
Source code in ontopy/factpluspluswrapper/factppgraph.py
def set_namespace(self):\n \"\"\"Override namespace of inferred graph with the namespace of the\n original graph.\n \"\"\"\n inferred = self.inferred\n for key, value in self.namespaces.items():\n inferred.namespace_manager.bind(\n key, value, override=True, replace=True\n )\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/factppgraph/#ontopy.factpluspluswrapper.factppgraph.FactPPError","title":" FactPPError
","text":"Postprocessing error after reasoning with FaCT++.
Source code in ontopy/factpluspluswrapper/factppgraph.py
class FactPPError:\n \"\"\"Postprocessing error after reasoning with FaCT++.\"\"\"\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/","title":"owlapi_interface","text":"Python interface to the FaCT++ Reasoner.
This module is copied from the SimPhoNy project.
Original author: Matthias Urban
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface","title":" OwlApiInterface
","text":"Interface to the FaCT++ reasoner via OWLAPI.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
class OwlApiInterface:\n \"\"\"Interface to the FaCT++ reasoner via OWLAPI.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the interface.\"\"\"\n\n def reason(self, graph):\n \"\"\"Generate the inferred axioms for a given Graph.\n\n Args:\n graph (Graph): An rdflib graph to execute the reasoner on.\n\n \"\"\"\n with tempfile.NamedTemporaryFile(\"wt\") as tmpdir:\n graph.serialize(tmpdir.name, format=\"xml\")\n return self._run(tmpdir.name, command=\"--run-reasoner\")\n\n def reason_files(self, *owl_files):\n \"\"\"Merge the given owl and generate the inferred axioms.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--run-reasoner\")\n\n def merge_files(self, *owl_files):\n \"\"\"Merge the given owl files and its import closure.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--merge-only\")\n\n @staticmethod\n def _run(\n *owl_files, command, output_file=None, return_graph=True\n ) -> rdflib.Graph:\n \"\"\"Run the FaCT++ reasoner using a java command.\n\n Args:\n *owl_files (str): Path to the owl files to load.\n command (str): Either --run-reasoner or --merge-only\n output_file (str, optional): Where the output should be stored.\n Defaults to None.\n return_graph (bool, optional): Whether the result should be parsed\n and returned. Defaults to True.\n\n Returns:\n The reasoned result.\n\n \"\"\"\n java_base = os.path.abspath(\n os.path.join(os.path.dirname(__file__), \"java\")\n )\n cmd = (\n [\n \"java\",\n \"-cp\",\n java_base + \"/lib/jars/*\",\n \"-Djava.library.path=\" + java_base + \"/lib/so\",\n \"org.simphony.OntologyLoader\",\n ]\n + [command]\n + list(owl_files)\n )\n logger.info(\"Running Reasoner\")\n logger.debug(\"Command %s\", cmd)\n subprocess.run(cmd, check=True) # nosec\n\n graph = None\n if return_graph:\n graph = rdflib.Graph()\n graph.parse(RESULT_FILE)\n if output_file:\n os.rename(RESULT_FILE, output_file)\n else:\n os.remove(RESULT_FILE)\n return graph\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.__init__","title":"__init__(self)
special
","text":"Initialize the interface.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def __init__(self):\n \"\"\"Initialize the interface.\"\"\"\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.merge_files","title":"merge_files(self, *owl_files)
","text":"Merge the given owl files and its import closure.
Parameters:
*owl_files (os.path): The owl files to merge. Default: ().
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def merge_files(self, *owl_files):\n \"\"\"Merge the given owl files and its import closure.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--merge-only\")\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.reason","title":"reason(self, graph)
","text":"Generate the inferred axioms for a given Graph.
Parameters:
graph (Graph): An rdflib graph to execute the reasoner on. Required.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def reason(self, graph):\n \"\"\"Generate the inferred axioms for a given Graph.\n\n Args:\n graph (Graph): An rdflib graph to execute the reasoner on.\n\n \"\"\"\n with tempfile.NamedTemporaryFile(\"wt\") as tmpdir:\n graph.serialize(tmpdir.name, format=\"xml\")\n return self._run(tmpdir.name, command=\"--run-reasoner\")\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.OwlApiInterface.reason_files","title":"reason_files(self, *owl_files)
","text":"Merge the given owl and generate the inferred axioms.
Parameters:
*owl_files (os.path): The owl files to merge. Default: ().
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def reason_files(self, *owl_files):\n \"\"\"Merge the given owl and generate the inferred axioms.\n\n Args:\n *owl_files (os.path): The owl files two merge.\n\n \"\"\"\n return self._run(*owl_files, command=\"--run-reasoner\")\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/owlapi_interface/#ontopy.factpluspluswrapper.owlapi_interface.reason_from_terminal","title":"reason_from_terminal()
","text":"Run the reasoner from terminal.
Source code in ontopy/factpluspluswrapper/owlapi_interface.py
def reason_from_terminal():\n \"\"\"Run the reasoner from terminal.\"\"\"\n parser = argparse.ArgumentParser(\n description=\"Run the FaCT++ reasoner on the given OWL file. \"\n \"Catalog files are used to load the import closure. \"\n \"Then the reasoner is executed and the inferred triples are merged \"\n \"with the asserted ones. If multiple OWL files are given, they are \"\n \"merged beforehand\"\n )\n parser.add_argument(\n \"owl_file\", nargs=\"+\", help=\"OWL file(s) to run the reasoner on.\"\n )\n parser.add_argument(\"output_file\", help=\"Path to store inferred axioms to.\")\n\n args = parser.parse_args()\n OwlApiInterface()._run( # pylint: disable=protected-access\n *args.owl_file,\n command=\"--run-reasoner\",\n return_graph=False,\n output_file=args.output_file,\n )\n
"},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/","title":"sync_factpp","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/#ontopy.factpluspluswrapper.sync_factpp--ontopyfactpluspluswrappersyncfatpp","title":"ontopy.factpluspluswrapper.syncfatpp
","text":""},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/#ontopy.factpluspluswrapper.sync_factpp.sync_reasoner_factpp","title":"sync_reasoner_factpp(ontology_or_world=None, infer_property_values=False, debug=1)
","text":"Run FaCT++ reasoner and load the inferred relations back into the owlready2 triplestore.
"},{"location":"api_reference/ontopy/factpluspluswrapper/sync_factpp/#ontopy.factpluspluswrapper.sync_factpp.sync_reasoner_factpp--parameters","title":"Parameters","text":"ontology_or_world : None | Ontology instance | World instance | list Identifies the world to run the reasoner over. infer_property_values : bool Whether to also infer property values. debug : bool Whether to print debug info to standard output.
Source code in ontopy/factpluspluswrapper/sync_factpp.py
def sync_reasoner_factpp(\n ontology_or_world=None, infer_property_values=False, debug=1\n):\n \"\"\"Run FaCT++ reasoner and load the inferred relations back into\n the owlready2 triplestore.\n\n Parameters\n ----------\n ontology_or_world : None | Ontology instance | World instance | list\n Identifies the world to run the reasoner over.\n infer_property_values : bool\n Whether to also infer property values.\n debug : bool\n Whether to print debug info to standard output.\n \"\"\"\n # pylint: disable=too-many-locals,too-many-branches,too-many-statements\n if isinstance(ontology_or_world, World):\n world = ontology_or_world\n elif isinstance(ontology_or_world, Ontology):\n world = ontology_or_world.world\n elif isinstance(ontology_or_world, Sequence):\n world = ontology_or_world[0].world\n else:\n world = owlready2.default_world\n\n if isinstance(ontology_or_world, Ontology):\n ontology = ontology_or_world\n elif CURRENT_NAMESPACES.get():\n ontology = CURRENT_NAMESPACES.get()[-1].ontology\n else:\n ontology = world.get_ontology(_INFERRENCES_ONTOLOGY)\n\n locked = world.graph.has_write_lock()\n if locked:\n world.graph.release_write_lock() # Not needed during reasoning\n\n try:\n if debug:\n print(\"*** Prepare graph\")\n # Exclude owl:imports because they are not needed and can\n # cause trouble when loading the inferred ontology\n graph1 = rdflib.Graph()\n for subject, predicate, obj in world.as_rdflib_graph().triples(\n (None, None, None)\n ):\n if predicate != OWL.imports:\n graph1.add((subject, predicate, obj))\n\n if debug:\n print(\"*** Run FaCT++ reasoner (and postprocess)\")\n graph2 = FaCTPPGraph(graph1).inferred_graph()\n\n if debug:\n print(\"*** Load inferred ontology\")\n # Check all rdfs:subClassOf relations in the inferred graph and add\n # them to the world if they are missing\n new_parents = defaultdict(list)\n new_equivs = defaultdict(list)\n entity_2_type = {}\n\n for (\n subject,\n predicate,\n obj,\n ) in graph2.triples( # pylint: disable=not-an-iterable\n (None, None, None)\n ):\n if (\n isinstance(subject, URIRef)\n and predicate in OWL_2_TYPE\n and isinstance(obj, URIRef)\n ):\n s_storid = ontology._abbreviate(str(subject), False)\n p_storid = ontology._abbreviate(str(predicate), False)\n o_storid = ontology._abbreviate(str(obj), False)\n if (\n s_storid is not None\n and p_storid is not None\n and o_storid is not None\n ):\n if predicate in (\n RDFS.subClassOf,\n RDFS.subPropertyOf,\n RDF.type,\n ):\n new_parents[s_storid].append(o_storid)\n entity_2_type[s_storid] = OWL_2_TYPE[predicate]\n else:\n new_equivs[s_storid].append(o_storid)\n entity_2_type[s_storid] = OWL_2_TYPE[predicate]\n\n if infer_property_values:\n inferred_obj_relations = []\n # Hmm, does FaCT++ infer any property values?\n # If not, remove the `infer_property_values` keyword argument.\n raise NotImplementedError\n\n finally:\n if locked:\n world.graph.acquire_write_lock() # re-lock when applying results\n\n if debug:\n print(\"*** Applying reasoning results\")\n\n _apply_reasoning_results(\n world, ontology, debug, new_parents, new_equivs, entity_2_type\n )\n if infer_property_values:\n _apply_inferred_obj_relations(\n world, ontology, debug, inferred_obj_relations\n )\n
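A minimal usage sketch, assuming FaCT++'s Java dependencies are installed; the ontology IRI is a placeholder, and any ontology loaded with EMMOntoPy/Owlready2 would do. In practice one typically goes through the ontology's own sync_reasoner method with FaCT++ selected as reasoner, which wraps this function.

```python
# Hypothetical example: reason over a loaded ontology with FaCT++.
from ontopy import get_ontology
from ontopy.factpluspluswrapper.sync_factpp import sync_reasoner_factpp

onto = get_ontology("https://example.com/my-domain-ontology.ttl").load()

# Run FaCT++ and load the inferred subclass/equivalence relations back
# into the owlready2 triplestore.
sync_reasoner_factpp(onto)
```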
"},{"location":"demo/","title":"EMMO use cases","text":"This demo contains two use cases on how EMMO can be used to achieve vertical and horizontal interpoerability, respectivily.
Warning
This demonstration is still a work in progress. In particular, the documentation is lacking.
"},{"location":"demo/#content","title":"Content","text":"Horizontal interoperability is about interoperability between different types of models and codes for a single material (i.e., one use case, multiple models).
The key here is to show how to map between EMMO (or an EMMO-based ontology) and another ontology (possibly EMMO-based).
In this example we use a data-driven approach based on a C-implementation of SOFT1,2.
This is done in four steps:
Generate metadata from the EMMO-based user case ontology.
Implemented in the script step1_generate_metadata.py.
Define metadata for an application developed independently of EMMO.
In this case a metadata description of the ASE Atoms class 3 is created in atoms.json
.
Implemented in the script step2_define_metadata.py.
Instantiate the metadata defined in step 2 with an atomistic interface structure.
Implemented in the script step3_instantiate.py.
Map the atomistic interface structure from the application representation to the common EMMO-based representation.
Implemented in the script step4_map_instance.py.
Essentially, this demonstration shows how EMMO can be extended and how external data can be mapped into our extended ontology (serving as a common representational system).
"},{"location":"demo/horizontal/#requirements-for-running-the-user-case","title":"Requirements for running the user case","text":"In addition to emmo, this demo also requires:
Vertical interoperability is about interoperability across two or more granularity levels.
In this use case we study the welded interface between an aluminium and a steel plate at three granularity levels. In this case, the granularity levels correspond to three different length scales, which we here denote the component, microstructure and atomistic scales.
"},{"location":"demo/vertical/#creating-an-emmo-based-user-case-ontology","title":"Creating an EMMO-based user case ontology","text":"The script define_ontology.py uses the Python API for EMMO to generate an application ontology extending EMMO with additional concepts needed to describe the data that is exchanged between scales. The user case ontology can then be visualised with the script plot_ontology.py.
"},{"location":"demo/vertical/#defining-the-needed-material-entities","title":"Defining the needed material entities","text":""},{"location":"demo/vertical/#assigning-properties-to-material-entities","title":"Assigning properties to material entities","text":"Note that we here also assign properties to e-bonded_atom
, even though e-bonded_atom
is defined in EMMO.
We choose here to consistently use SI units for all scales (even though at the atomistic scale units like \u00c5ngstr\u00f6m and electron volt are more commonly used).
"},{"location":"demo/vertical/#assigning-types-to-properties","title":"Assigning types to properties","text":"In order to be able to generate metadata and to describe the actual data transferred between scales, we also need to define types.
"},{"location":"demo/vertical/#the-new-application-ontology","title":"The new application-ontology","text":"The final plot shows the user case ontology in context of EMMO.
"},{"location":"developers/release-instructions/","title":"Steps for creating a new release","text":"Create a release on GitHub with a short release description.
Ensure you add a # <version number>
title to the description.
Set the tag to the version number prefixed with \"v\"
and title to the version number as explained above.
Ensure the GitHub Action CD workflows run as expected.
The workflow failed
If something is wrong and the workflow fails before publishing the package to PyPI, make sure to remove all traces of the release and tag, fix the bug, and try again.
If something is wrong and the workflow fails after publishing the package to PyPI: DO NOT REMOVE THE RELEASE OR TAG !
Deployment of the documentation should (in theory) be the only thing that has failed. This can be deployed manually using similar steps as in the workflow.
"},{"location":"developers/setup/","title":"Development environment","text":"This section outlines some suggestions as well as conventions used by the EMMOntoPy developers, which should be considered or followed if one wants to contribute to the package.
"},{"location":"developers/setup/#setup","title":"Setup","text":"Requirements
This section expects you to be running on a Unix-like system (e.g., Linux) with minimum Python 3.7.
"},{"location":"developers/setup/#virtual-environment","title":"Virtual environment","text":"Since development can be messy, it is good to separate the development environment from the rest of your system's environment.
To do this, you can use a virtual environment. There are a several different ways to create a virtual environment, but we recommend using either virtualenv
or venv
.
Virtual environment considerations
There are several different virtual environment setups, here we only address a very few.
A great resource for an overview can be found in this StackOverflow answer. However, note that in the end, it is very subjective on the solution one uses and one is not necessarily \"better\" than another.
virtualenv
(recommended)venv
To install virtualenv
+virtualenvwrapper
run:
$ pip install virtualenvwrapper\n
There is some additional setup, most of which only needs to be run once. For more information about this, see the virtualenvwrapper
documentation.
After successfully setting up virtualenv
through virtualenvwrapper
, you can create a new virtual environment:
$ mkproject -p python3.7 emmo-python\n
Note
If you do not have Python 3.7 installed (or instead want to use your system's default Python version), you can leave out the extra -p python3.7
argument. Or you can choose to use another version of Python by changing this argument to another (valid) python interpreter.
Then, if the virtual environment has not been activated automatically (you should see the name emmo-python
in a parenthesis in your console), you can run:
$ workon emmo-python\n
Tip
You can quickly see a list of all your virtual environments by writing workon
and pressing Tab twice.
To deactivate the virtual environment, returning to the system/global environment again, run:
(emmo-python) $ deactivate\n
venv
is a built-in package in Python, which works similar to virtualenv
, but with fewer capabilities.
To create a new virtual environment with venv
, first go to the directory, where you desire to keep your virtual environment. Then run the venv
module using the Python interpreter you wish to use in the virtual environment. For Python 3.7 this would look like the following:
$ python3.7 -m venv emmo-python\n
A folder with the name emmo-python
containing the environment is created.
To activate the environment run:
$ ./emmo-python/activate\n
or
$ /path/to/emmo-python/activate\n
You should now see the name emmo-python
in a parenthesis in your console, letting you know you have activated and are currently using the emmo-python
virtual environment.
To deactivate the virtual environment, returning to the system/global environment again, run:
(emmo-python) $ deactivate\n
Expectation
From here on, all commands expect you to have activated your virtual environment, if you are using one, unless stated otherwise.
"},{"location":"developers/setup/#installation","title":"Installation","text":"To install the package, please do not install from PyPI. Instead you should clone the repository from GitHub:
$ git clone https://github.com/emmo-repo/EMMOntoPy.git\n
or, if you are using an SSH connection to GitHub, you can instead clone via:
$ git clone git@github.com:emmo-repo/EMMOntoPy.git\n
Then enter into the newly cloned EMMOntoPy
directory (cd EMMOntoPy
) and run:
$ pip install -U -e .[dev]\n$ pre-commit install\n
This will install the EMMOntoPy Python package, including all dependencies and requirements for building and serving (locally) the documentation and running unit tests.
The second line installs the pre-commit
hooks defined in the .pre-commit-config.yaml
file. pre-commit
is a tool that runs immediately prior to you creating new commits (git commit
), and checks all the changes, automatically updates the API reference in the documentation and much more. Mainly, it helps to ensure that the package stays nicely formattet, safe, and user-friendly for developers.
There are a few non-Python dependencies that EMMOntoPy relies on as well. These can be installed by running (on a Debian system):
$ sudo apt-get update && sudo apt-get install -y graphviz openjdk-11-jre-headless\n
If you are on a non-Debian system (Debian, Ubuntu, ...), please check which package manager you are using and find packages for graphviz
and openjdk
minimum version 11.
It is good practice to test the integrity of the installation and that all necessary dependencies are correctly installed.
You can run unit tests, to check the integrity of the Python functionality, by running:
$ pytest\n
If all has installed and is running correctly, you should not have any failures, but perhaps some warnings (deprecation warnings) in the test summary.
"},{"location":"developers/testing/","title":"Testing and tooling","text":""},{"location":"developers/testing/#unit-testing","title":"Unit testing","text":"The PyTest framework is used for testing the EMMOntoPy package. It is a unit testing framework with a plugin system, sporting an extensive plugin library as well as a sound fixture injection system.
To run the tests locally install the package with the dev
extra (see the developer's setup guide) and run:
$ pytest\n=== test session starts ===\n...\n
To understand what options you have, run pytest --help
.
Several tools are used to maintain the package, keeping it secure, readable, and easing maintenance.
"},{"location":"developers/testing/#mypy","title":"Mypy","text":"Mypy is a static type checker for Python.
Documentation: mypy.readthedocs.io
The signs of this tool will be found in the code especially through the typing.TYPE_CHECKING
boolean variable, which will be used in the current way:
from typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import List\n
Since TYPE_CHECKING
is False
at runtime, the if
-block will not be run when executing or importing the module. However, when Mypy checks the static typing, it analyses these blocks as well, considering TYPE_CHECKING
to be True
(see the typing.TYPE_CHECKING
section in the Mypy documentation).
This means the imports in the if
-block are meant to only be used for static typing, helping developers to understand the intention of the code as well as to check the invoked methods make sense (through Mypy).
This directory contains the needed templates, introductory text and figures for generating the full EMMO documentation using ontodoc
. Since the introduction is written in markdown, pandoc is required for both pdf and html generation.
For a standalone html documentation including all inferred relations, enter this directory and run:
ontodoc --template=emmo.md --format=html emmo-inferred emmo.html\n
Pandoc options may be adjusted with the files pandoc-options.yaml and pandoc-html-options.yaml.
Similarly, for generating pdf documentation, enter this directory and run:
ontodoc --template=emmo.md emmo-inferred emmo.pdf\n
By default, we have configured pandoc to use xelatex for better unicode support. It is possible to change these settings in pandoc-options.yaml and pandoc-pdf-options.yaml.
"},{"location":"examples/emmodoc/#content-of-this-directory","title":"Content of this directory","text":""},{"location":"examples/emmodoc/#ontodoc-templates-with-introductory-text-and-document-layout","title":"ontodoc
templates with introductory text and document layout","text":"pandoc
configuration files","text":"For simple html documentation, you can skip all input files and simply run ontodoc
as
ontodoc --format=simple-html YOUR_ONTO.owl YOUR_ONTO.html\n
It is also possible to include ontodoc templates using the --template
option for adding additional information and structure the document. In this case the template may only contain ontodoc
pre-processer directives and inline html, but not markdown.
In order to produce output in pdf (or any other output format supported by pandoc), you can write your ontodoc
template in markdown (with ontodoc
pre-processer directives) and follow these steps to get started:
pandoc-
to a new directory.input-files
to the name of your new yaml metadata file.logo
to the path of your logo (or remove it).titlegraphic
to the path of your title figure (or remove it).ontodoc
template files with additional information about your ontology and document layout.That should be it. Good luck!
"},{"location":"examples/emmodoc/classes/","title":"Classes","text":"%% %% This file %% This is Markdown file, except of lines starting with %% will %% be stripped off. %%
%HEADER \"EMMO Classes\" level=1
emmo is a class representing the collection of all the individuals (signs) that are used in the ontology. Individuals are declared by the EMMO users when they want to apply the EMMO to represent the world.
%BRANCHHEAD EMMO The root of all classes used to represent the world. It has two children; collection and item.
collection is the class representing the collection of all the individuals (signs) that represents a collection of non-connected real world objects.
item Is the class that collects all the individuals that are members of a set (it's the most comprehensive set individual). It is the branch of mereotopology.
%% - based on has_part mereological relation that can be axiomatically defined %% - a fusion is the sum of its parts (e.g. a car is made of several %% mechanical parts, an molecule is made of nuclei and electrons) %% - a fusion is of the same entity type as its parts (e.g. a physical %% entity is made of physical entities parts) %% - a fusion can be partitioned in more than one way %BRANCH EMMO
%BRANCHDOC Elementary %BRANCHDOC Perspective
%BRANCHDOC Holistic %BRANCHDOC Semiotics %BRANCHDOC Sign %BRANCHDOC Interpreter %BRANCHDOC Object %BRANCHDOC Conventional %BRANCHDOC Property %BRANCHDOC Icon %BRANCHDOC Process
%BRANCHDOC Perceptual %BRANCHDOC Graphical %BRANCHDOC Geometrical %BRANCHDOC Symbol %BRANCHDOC Mathematical %BRANCHDOC MathematicalSymbol %BRANCHDOC MathematicalModel %BRANCHDOC MathematicalOperator %BRANCHDOC Metrological %BRANCHDOC PhysicalDimension rankdir=RL %BRANCHDOC PhysicalQuantity %BRANCHDOC Number %BRANCHDOC MeasurementUnit %BRANCHDOC UTF8 %BRANCHDOC SIBaseUnit %BRANCHDOC SISpecialUnit rankdir=RL %BRANCHDOC PrefixedUnit %BRANCHDOC MetricPrefix rankdir=RL %BRANCHDOC Quantity %BRANCHDOC BaseQuantity %BRANCHDOC DerivedQuantity rankdir=RL %BRANCHDOC PhysicalConstant
%BRANCHDOC Reductionistic %BRANCHDOC Expression
%BRANCHDOC Physicalistic %BRANCHDOC ElementaryParticle
"},{"location":"examples/emmodoc/classes/#branchdoc-subatomic","title":"%BRANCHDOC Subatomic","text":"%BRANCHDOC Matter %BRANCHDOC Fluid %BRANCHDOC Mixture %BRANCHDOC StateOfMatter
"},{"location":"examples/emmodoc/emmo/","title":"Emmo","text":"%% %% This is the main Markdown input file for the EMMO documentation. %% %% Lines starting with a % are pre-processor directives. %%
%INCLUDE introduction.md
%INCLUDE relations.md
%INCLUDE classes.md
%HEADER Individuals level=1 %ALL individuals
%HEADER Appendix level=1
%HEADER \"The complete taxonomy of EMMO relations\" level=2 %BRANCHFIG EMMORelation caption='The complete taxonomy of EMMO relations.' terminated=0 relations=all edgelabels=0
%HEADER \"The taxonomy of EMMO classes\" level=2 %BRANCHFIG EMMO caption='The almost complete taxonomy of EMMO classes. Only physical quantities and constants are left out.' terminated=0 relations=isA edgelabels=0 leaves=PhysicalDimension,BaseQuantity,DerivedQuantity,ExactConstant,MeasuredConstant,SIBaseUnit,SISpecialUnit,MetricPrefix,UTF8
"},{"location":"examples/emmodoc/important_concepts/","title":"Important concepts","text":""},{"location":"examples/emmodoc/important_concepts/#important-concepts","title":"Important concepts","text":""},{"location":"examples/emmodoc/important_concepts/#mereotopological-composition","title":"Mereotopological composition","text":""},{"location":"examples/emmodoc/important_concepts/#substrate","title":"Substrate","text":"A substrate
represents the place (in general sense) in which every real world item exists. It provides the dimensions of existence for real world entities. This follows from the fact that everything that exists is placed somewhere in space and time. Hence, its space and time coordinates can be used to identify it.
Substrates are always topologically connected spaces. A topological space, X, is said to be disconnected if it is the union of two disjoint non-empty open sets. Otherwise, X is said to be connected.
substrate
is the superclass of space
, time
and their combinations, like spacetime
.
Following Kant, space and time are a priori forms of intuition, i.e. they are the substrate upon which we place our intuitions, assigning space and time coordinates to them.
"},{"location":"examples/emmodoc/important_concepts/#hybrid","title":"Hybrid","text":"A hybrid
is the combination of space
and time
. It has the subclasses world_line
(0D space + 1D time), world_sheet
(1D space + 1D time), world_volume
(2D space + 1D time) and spacetime
(3D space + 1D time).
EMMO represents real world entities as subclasses of spacetime
. A spacetime
is valid for all reference systems (as required by the theory of relativity).
matter
is used to represent a group of elementary
in an enclosing spacetime
. As illustrated in the figure, a matter
is an elementary
or a composition of other matter
and vacuum
.
In EMMO matter
is always a 4D spacetime. This is a fundamental difference between EMMO and most other ontologies.
In order to describe the real world, we must also take into account the vacuum between the elementaries that composes higher granularity level entity (e.g. an atom).
In EMMO vacuum
is defined as a spacetime
that has no elementary
parts.
An existent
is defined as a matter
that unfolds in time as a succession of states. It is used to represent the whole life of a complex but structured state-changing matter
entity, like e.g. an atom that becomes ionised and then recombines with an electron.
On the contrary, a matter and not existent
entity is something \"amorphous\", randomly collected and not classifiable by common terms or definitions. That is a heterogeneous heap of elementary
, appearing and disappearing in time.
A state
is matter in a particular configurational state. It is defined as having spatial direct parts that persist (do not change) throughout the lifetime of the state
. Hence, a state
is like a snapshot of a physical in a finite time interval.
The use of spatial direct parthood in the definition of state
means that a state
cannot overlap in space with another state
.
An important feature of states, that follows from the fact that they are spacetime
, is that they constitute a finite time interval.
The basic assumption of decomposition in EMMO, is that the most basic manifestation of matter
is represented by a subclass of spacetime
called elementary
.
The elementary
class defines the \"atomic\" (undividable) level in EMMO. A generic matter
can always be decomposed in proper parts down to the elementary
level using proper parthood. An elementary
can still be decomposed in temporal parts, that are themselves elementary
.
Example of elementaries are electrons, photons and quarks.
"},{"location":"examples/emmodoc/important_concepts/#granularity-direct-parthood","title":"Granularity - direct parthood","text":"Granularity is a central concept of EMMO, which allows the user to percieve the world at different levels of detail (granularity) that follow physics and materials science perspectives.
Every material in EMMO is placed on a granularity level and the ontology gives information about the direct upper and direct lower level classes. This is done with the non-transitive is_direct_part_of
relation.
Granularity is a defined class and is useful sine a reasoner automatically can put the individuals defined by the user under a generic class that clearly expresses the types of its compositional parts.
"},{"location":"examples/emmodoc/important_concepts/#mathematical-entities","title":"Mathematical entities","text":"The class mathematical_entity
represents fundamental elements of mathematical expressions, like numbers, variables, unknowns and equations. Mathematical entities are pure mathematical and have no physical unit.
A natural_law
is an abstraction for a series of experiments that tries to define a common cause and effect of the time evolution of a set of interacting participants. It is (by definition) a pre-mathematical entity.
The natural_law
class is defined as
is_abstraction_for some experiment\n
It can be represented e.g. as a thought in the mind of the experimentalist, a sketch and textual description in a book of science.
physical_law
and material_law
are, according to the RoMM and CWA, the laws behind physical equations and material relations, respectively.
Properties are abstracts that are related to a specific material entity with the relation has_property, but that depend on a specific observation process, participated by a specific observer, who catch the physical entity behaviour that is abstracted as a property.
Properties enable us to connect a measured property to the measurement process and the measurement instrument.
"},{"location":"examples/emmodoc/introduction/","title":"Introduction","text":"EMMO is a multidisciplinary effort to develop a standard representational framework (the ontology) based on current materials modelling knowledge, including physical sciences, analytical philosophy and information and communication technologies. This multidisciplinarity is illustrated by the figure on the title page. It provides the connection between the physical world, materials characterisation world and materials modelling world.
EMMO is based on and is consistent with the Review of Materials Modelling, CEN Workshop Agreement and MODA template. However, while these efforts are written for humans, EMMO is defined using the Web Ontology Language (OWL), which is machine readable and allows for machine reasoning. In terms of semantic representation, EMMO brings everything to a much higher level than these foundations.
As illustrated in the figure below, EMMO covers all aspects of materials modelling and characterisation, including:
EMMO is released under the Creative Commons license and is available at emmo.info/. The OWL2-DL sources are available in RDF/XML format.
"},{"location":"examples/emmodoc/introduction/#what-is-an-ontology","title":"What is an ontology","text":"In short, an ontology is a specification of a conceptualization. The word ontology has a long history in philosophy, in which it refers to the subject of existence. The so-called ontological argument for the existence of God was proposed by Anselm of Canterbury in 1078. He defined God as \"that than which nothing greater can be thought\", and argued that \"if the greatest possible being exists in the mind, it must also exist in reality. If it only exists in the mind, then an even greater being must be possible -- one which exists both in the mind and in reality\". Even though this example has little to do with todays use of ontologies in e.g. computer science, it illustrates the basic idea; the ontology defines some basic premises (concepts and relations between them) from which it is possible reason to gain new knowledge.
For a more elaborated and modern definition of the ontology we refer the reader to the one provided by Tom Gruber (2009). Another useful introduction to ontologies is the paper Ontology Development 101: A Guide to Creating Your First Ontology by Noy and McGuinness (2001), which is based on the Protege sortware, with which EMMO has been developed.
A taxonomy is a hierarchical representation of classes and subclasses connected via is_a
relations. Hence, it is a subset of the ontology excluding all but the is_a
relations. The main use of taxonomies is for the organisation of classifications. The figure shows a simple example of a taxonomy illustrating a categorisation of four classes into a hierarchy of more higher of levels of generality.
In EMMO, the taxonomy is a rooted directed acyclic graph (DAG). This is important since many classification methods relies on this property, see e.g. Valentini (2014) and Robison et al (2015). Note, that EMMO is a DAG does not prevent some classes from having more than one parent. A Variable
is for instance both a Mathematical
and a Symbol
. See appendix for the full EMMO taxonomy.
Individuals are the basic, \"ground level\" components of EMMO. They may include concrete objects such as cars, flowers, stars, persons and molecules, as well as abstract individuals such as a measured height, a specific equation and software programs.
Individuals possess attributes in form of axioms that are defined by the user (interpreter) upon declaration.
"},{"location":"examples/emmodoc/introduction/#classes","title":"Classes","text":"Classes represent concepts. They are the building blocks that we use to create an ontology as a representation of knowledge. We distinguish between defined and non-defined classes.
Defined classes are defined by the requirements for being a member of the class. In the graphical representations of EMMO, defined classes are orange. For instance, in the graph of the top-level entity branch below, The root EMMO
and a defined class (defined to be the disjoint union of Item
and Collection
).
Non-defined classes are defined as an abstract group of objects, whose members are defined as belonging to the class. They are yellow in the graphical representations.
%BRANCHFIG EMMO leaves=Perspective,Elementary caption='Example of the top-level branch of EMMO showing some classes and relationships between them.' width=460
"},{"location":"examples/emmodoc/introduction/#axioms","title":"Axioms","text":"Axioms are propositions in a logical framework that define the relations between the individuals and classes. They are used to categorise individuals in classes and to define the defined classes.
The simplest form of a class axiom is a class description that just states the existence of the class and gives it an unique identifier. In order to provide more knowledge about the class, class axioms typically contain additional components that state necessary and/or sufficient characteristics of the class. OWL contains three language constructs for combining class descriptions into class axioms:
Subclass (rdfs:subClassOf
) allows one to say that the class extension of a class description is a subset of the class extension of another class description.
Equivalence (owl:equivalentClass
) allows one to say that a class description has exactly the same class extension (i.e. the individuals associated with the class) as another class description.
Distjointness (owl:disjointWith
) allows one to say that the class extension of a class description has no members in common with the class extension of another class description.
See the section about Description logic for more information about these language constructs. Axioms are also used to define relations between relations. These are further detailed in the chapter on Relations.
"},{"location":"examples/emmodoc/introduction/#theoretical-foundations","title":"Theoretical foundations","text":"EMMO build upon several theoretical frameworks.
"},{"location":"examples/emmodoc/introduction/#semiotics","title":"Semiotics","text":"Semiotics is the study of meaning-making. It is the dicipline of formulating something that possibly can exist in a defined space and time in the real world.
%%It is introdused in EMMO via the %%semion
class and used as a way to reduce the complexity of a %%physical to a simple sign (symbol). A Sign
is a physical %%entity that can represent another object. %% %%### Set theory %%Set theory is the theory of membership. This is introduced via %%the set
class, representing the collection of all individuals %%(signs) that represent a collection of items. Sets are defined %%via the hasMember
relations.
Mereotopology is the combination of mereology (science of parthood) and topology (mathematical study of the geometrical properties and conservation through deformations). It is introdused via the Item
class and based on the mereotopological
relations. Items in EMMO are always topologically connected in space and time. EMMO makes a strong distinction between membership and parthood relations. In contrast to collections, items can only have parts that are themselves items. For further information, see Casati and Varzi \"Parts and Places\" (1999).
EMMO is strongly based on physics, with the aim of being able to describe all aspects and all domains of physics, from quantum mechanics to continuum, engeneering, chemistry, etc. EMMO is compatible with both the De Broglie - Bohm and the Copenhagen interpretation of quantum mecanics (see Physical
for more comments).
EMMO defines a physics-based parthood hierachy under Physical
by introducing the following concepts (illustrated in the figure below):
Elementary
is the fundamental, non-divisible constituent of entities. In EMMO, elementaries are based on the standard model of physics.
State
is a Physical
whose parts does not change during its life time (at the chosen level of granularity). This is consistent with a state within e.g. thermodynamics.
Existent
is a succession of states.
Metrology is the science of measurements. It introduces units and links them to properties. The description of metrology in EMMO is based on the standards of International System of Quantities (ISQ) and International System of Units (SI).
"},{"location":"examples/emmodoc/introduction/#description-logic","title":"Description logic","text":"Description logic (DL) is a formal knowledge representation language in which the axioms are expressed. It is less expressive than first-order logic (FOL), but commonly used for providing the logical formalism for ontologies and semantic web. EMMO is expressed in the Web Ontology Language (OWL), which in turn is based on DL. This brings along features like reasoning.
Since it is essential to have a basic notion of OWL and DL, we include here a very brief overview. For a proper introduction to OWL and DL, we refer the reader to sources like Grau et.al. (2008), OWL2 Primer and OWL Reference.
OWL distinguishes between six types of class descriptions:
owl:oneOf
);owl:someValuesFrom
, owl:allValuesFrom
, owl:hasValue
, owl:cardinality
, owl:minCardinality
, owl:maxCardinality
);owl:intersectionOf
);owl:unionOf
); andowl:complementOf
).Except for the first, all of these refer to defined classes. The table below shows the notation in OWL, DL and the Manchester OWL syntax, all commonly used for the definitions. The Manchester syntax is used by Protege and is designed to not use DL symbols and to be easy and quick to read and write. Several other syntaxes exist for DL. An interesting example is the pure Python syntax proposed by Lamy (2017), which is used in the open source Owlready2 Python package. The Python API for EMMO is also based on Owlready2.
DL Manchester Python + Owlready2 Read Meaning --------------- ----------------- ------------------- ------------------- -------------------- Constants
$\\top$ Thing top A special class with every individual as an instance
$\\bot$ Nothing bottom The empty class
Axioms
$A\\doteq B$ A is defined to be Class definition equal to B
$A\\sqsubseteq B$ A subclass_of B class A(B): ... all A are B Class inclusion
issubclass(A, B) Test for *inclusion*\n
$A\\equiv B$ A equivalent_to B A.equivalent_to.append(B) A is equivalent to B Class equivalence
B in A.equivalent_to Test for equivalence\n
$a:A$ a is_a A a = A() a is a A Class assertion (instantiation)
isinstance(a, A) Test for instance of\n
$(a,b):R$ a object property a.R.append(b) a is R-related to b Property assertion assertion b
$(a,n):R$ a data property a.R.append(n) a is R-related to n Data assertion assertion n
Constructions
$A\\sqcap B$ A and B A & B A and B Class intersection (conjunction)
$A\\sqcup B$ A or B A | B A or B Class union (disjunction)
$\\lnot A$ not A Not(A) not A Class complement (negation)
${a, b, ...}$ {a, b, ...} OneOf([a, b, ...]) one of a, b, ... Class enumeration
$S\\equiv R^-$ S inverse_of R Inverse(R) S is inverse of R Property inverse
S.inverse == R Test for *inverse*\n
$\\forall R.A$ R only A R.only(A) all A with R Universal restriction
$\\exists R.A$ R some A R.some(A) some A with R Existential restriction
$=n R.A$ R exactly n A R.exactly(n, A) Cardinality restriction
$\\leq n R.A$ R min n A R.min(n, A) Minimum cardinality restriction
$\\geq n R.A$ R max n A R.max(n, A) Minimum cardinality restriction
$\\exists R{a}$ R value a R.value(a) Value restriction
Decompositions
$A\\sqcup B A disjoint with B AllDisjoint([A, B]) A disjoint with B Disjoint \\sqsubseteq\\bot$
B in A.disjoints() Test for disjointness\n
$\\exists R.\\top R domain A R.domain = [A] Classes that the restriction applies to \\sqsubseteq A$
$\\top\\sqsubseteq R range B R.range = [B] All classes that can be the value of the restriction \\forall R.B$
Table: Notation for DL and Protege. A and B are classes, R is an active relation, S is an passive relation, a and b are individuals and n is a literal. Inspired by the Great table of Description Logics.
"},{"location":"examples/emmodoc/introduction/#examples","title":"Examples","text":"Here are some examples of different class descriptions using both the DL and Manchester notation.
"},{"location":"examples/emmodoc/introduction/#equivalence-owlequivalentto","title":"Equivalence (owl:equivalentTo
)","text":"Equivalence ($\\equiv$) defines necessary and sufficient conditions.
Parent is equivalent to mother or father
DL: parent
$\\equiv$ mother
$\\lor$ father
Manchester: parent equivalent_to mother or father
rdf:subclassOf
)","text":"Inclusion ($\\sqsubseteq$) defines necessary conditions.
An employee is a person.
DL: employee
$\\sqsubseteq$ person
Manchester: employee is_a person
owl:oneOf
)","text":"The color of a wine is either white, rose or red:
DL: wine_color
$\\equiv$ {white
, rose
, red
}
Manchester: wine_color equivalent_to {white, rose, red}
owl:someValuesFrom
)","text":"A mother is a woman that has a child (some person):
DL: mother
$\\equiv$ woman
$\\sqcap$ $\\exists$has_child
.person
Manchester: mother equivalent_to woman and has_child some person
owl:allValuesFrom
)","text":"All parents that only have daughters:
DL: parents_with_only_daughters
$\\equiv$ person
$\\sqcap$ $\\forall$has_child
.woman
Manchester: parents_with_only_daughters equivalent_to person and has_child only woman
owl:hasValue
)","text":"The owl:hasValue restriction allows to define classes based on the existence of particular property values. There must be at least one matching property value.
All children of Mary:
DL: Marys_children
$\\equiv$ person
$\\sqcap$ $\\exists$has_parent
.{Mary
}
Manchester: Marys_children equivalent_to person and has_parent value Mary
owl:cardinality
)","text":"The owl:cardinality restrictions ($\\geq$, $\\leq$ or $\\equiv$) allow to define classes based on the maximum (owl:maxCardinality), minimum (owl:minCardinality) or exact (owl:cardinality) number of occurences.
A person with one parent:
DL: half_orphant
$\\equiv$ person
and =1has_parent
.person
Manchester: half_orphant equivalent_to person and has_parent exactly 1 person
owl:intersectionOf
)","text":"Individuals of the intersection ($\\sqcap$) of two classes, are simultaneously instances of both classes.
A man is a person that is male:
DL: man
$\\equiv$ person
$\\sqcap$ male
Manchester: man equivalent_to person and male
owl:unionOf
)","text":"Individuals of the union ($\\sqcup$) of two classes, are either instances of one or both classes.
A person is a man or woman:
DL: person
$\\equiv$ man
$\\sqcup$ woman
Manchester: person equivalent_to man or woman
owl:complementOf
)","text":"Individuals of the complement ($\\lnot$) of a class, are all individuals that are not member of the class.
Not a man:
DL: female
$\\equiv$ $\\lnot$ male
Manchester: female equivalent_to not male
The EMMO ontology is structured in shells, expressed by specific ontology fragments, that extends from fundamental concepts to the application domains, following the dependency flow.
"},{"location":"examples/emmodoc/introduction/#top-level","title":"Top Level","text":"The EMMO top level is the group of fundamental axioms that constitute the philosophical foundation of the EMMO. Adopting a physicalistic/nominalistic perspective, the EMMO defines real world objects as 4D objects that are always extended in space and time (i.e. real world objects cannot be spaceless nor timeless). For this reason abstract objects, i.e. objects that does not extend in space and time, are forbidden in the EMMO.
EMMO is strongly based on the analytical philosophy dicipline semiotic. The role of abstract objects are in EMMO fulfilled by semiotic objects, i.e. real world objects (e.g. symbol or sign) that stand for other real world objects that are to be interpreted by an agent. These symbols appear in actions (semiotic processes) meant to communicate meaning by establishing relationships between symbols (signs).
Another important building block of from analytical philosophy is atomistic mereology applied to 4D objects. The EMMO calls it 'quantum mereology', since the there is a epistemological limit to how fine we can resolve space and time due to the uncertanity principles.
The mereotopology module introduces the fundamental mereotopological concepts and their relations with the real world objects that they represent. The EMMO uses mereotopology as the ground for all the subsequent ontology modules. The concept of topological connection is used to define the first distinction between ontology entities namely the Item and Collection classes. Items are causally self-connected objects, while collections are causally disconnected. Quantum mereology is represented by the Quantum class. This module introduces also the fundamental mereotopological relations used to distinguish between space and time dimensions.
The physical module, defines the Physical objects and the concept of Void that plays a fundamental role in the description of multiscale objects and quantum systems. It also define the Elementary class, that restricts mereological atomism in space.
In EMMO, the only univocally defined real world object is the Item individual called Universe that stands for the universe. Every other real world object is a composition of elementaries up to the most comprehensive object; the Universe. Intermediate objects are not univocally defined, but their definition is provided according to some specific philosophical perspectives. This is an expression of reductionism (i.e. objects are made of sub-objects) and epistemological pluralism (i.e. objects are always defined according to the perspective of an interpreter, or a class of interpreters).
The Perspective class collects the different ways to represent the objects that populate the conceptual region between the elementary and universe levels.
"},{"location":"examples/emmodoc/introduction/#middle-level","title":"Middle Level","text":"The middle level ontologies act as roots for extending the EMMO towards specific application domains.
The Reductionistic perspective class uses the fundamental non-transitive parthood relation, called direct parthood, to provide a powerful granularity description of multiscale real world objects. The EMMO can in principle represent the Universe with direct parthood relations as a directed rooted tree up to its elementary constituents.
The Phenomenic perspective class introduces the concept of real world objects that express a recognisable pattern in space or time that impresses the user. Under this class the EMMO categorises e.g. formal languages, pictures, geometry, mathematics and sounds. Phenomenic objects can be used as signs in a semiotic process.
The Physicalistic perspective class introduces the concept of real world objects that have a meaning under the applied physics perspective.
The Holistic perspective class introduces the concept of real world objects that unfold in time in a way that has a meaning for the EMMO user, through the definition of the classes Process and Participant. The semiotics module introduces the concepts of semiotics and the Semiosis process that has a Sign, an Object and an Interpreter as participants. This forms the basis in EMMO to represent e.g. models, formal languages, theories, information and properties.
"},{"location":"examples/emmodoc/introduction/#emmo-relations","title":"EMMO relations","text":"All EMMO relations are subrelations of the relations found in the two roots: mereotopological and semiotical. The relation hierarchy extends more vertically (i.e. more subrelations) than horizontally (i.e. less sibling relations), facilitating the categorisation and inferencing of individuals. See also the chapter EMMO Relations.
Requiring all relations to fall under mereotopology or semiotics is how the EMMO forces developers to respect its perspectives. Two entities can only be related by contact or parthood (mereotopology) or by one standing for the other (semiosis): no other types of relations are possible within the EMMO.
A unique feature of the EMMO is the introduction of direct parthood. As illustrated in the figure below, it is a mereological relation that lacks transitivity. This makes it possible to describe entities made of parts at different levels of granularity and to move between granularity levels in a well-defined manner. This is paramount for cross-scale interoperability. Every material in EMMO is placed on a granularity level, and the ontology gives information about the direct upper and direct lower level classes using the non-transitive direct parthood relations.
"},{"location":"examples/emmodoc/introduction/#annotations","title":"Annotations","text":"All entities and relations in EMMO have some attributes, called annotations. In some cases, only the required International Resource Identifier (IRI) and relations are provided. However, descriptive annotations, like elucidation and comment, are planned to be added for all classes and relations. Possible annotations are:
%%### Graphs
%%The generated graphs borrow some syntax from the Unified Modelling
%%Language (UML), which is a general purpose language for software
%%design and modelling. The table below shows the style used for the
%%different types of relations and the concept they correspond to in
%%UML.
%%
%%Relation            UML arrow    UML concept
%%-------------       -----------  -----------
%%is-a                ![img][isa]  inheritance
%%disjoint_with       ![img][djw]  association
%%equivalent_to       ![img][eqt]  association
%%encloses            ![img][rel]  aggregation
%%has_abstract_part   ![img][rel]  aggregation
%%has_abstraction     ![img][rel]  aggregation
%%has_representation  ![img][rel]  aggregation
%%has_member          ![img][rel]  aggregation
%%has_property        ![img][rel]  aggregation
%%
%%Table: Notation for arrow styles used in the graphs. Only active
%%relations are listed. Corresponding passive relations use the same
%%style.
%%
%%[isa]: figs/arrow-is_a.png \"inheritance\"
%%[djw]: figs/arrow-disjoint_with.png \"association\"
%%[eqt]: figs/arrow-equivalent_to.png \"association\"
%%[rel]: figs/arrow-relation.png \"aggregation\"
%%All relationships have a direction. In the graphical visualisations,
%%the relationships are represented with an arrow pointing from the
%%subject to the object. In order to reduce clutter and limit the size
%%of the graphs, the relations are abbreviated according to the
%%following table:
%%
%%Relation                          Abbreviation
%%--------                          ------------
%%has_part only                     hp-o
%%is_part_of only                   ipo-o
%%has_member some                   hm-s
%%is_member_of some                 imo-s
%%has_abstraction some              ha-s
%%is_abstraction_of some            iao-s
%%has_abstract_part only            pap-o
%%is_abstract_part_of only          iapo-o
%%has_space_slice some              hss-s
%%is_space_slice_of some            isso-s
%%has_time_slice some               hts-s
%%is_time_slice_of some             itso-s
%%has_projection some               hp-s
%%is_projection_of some             ipo-s
%%has_proper_part some              hpp-s
%%is_proper_part_of some            ippo-s
%%has_proper_part_of some           hppo-s
%%has_spatial_direct_part min       hsdp-m
%%has_spatial_direct_part some      hsdp-s
%%has_spatial_direct_part exactly   hsdp-e
%%
%%Table: Abbreviations of relations used in the graphical representation
%%of the different subbranches.
%%
%%
%%UML represents classes as a box with three compartments: name, attributes
%%and operators. However, since the classes in EMMO have no operators and
%%since it gives little meaning to include the OWL annotations as attributes,
%%we simply represent the classes as boxes with a name.
%%
%%As already mentioned, defined classes are colored orange, while
%%undefined classes are yellow.
%%
%%
"},{"location":"examples/emmodoc/relations/","title":"Relations","text":"%% %% This file %% This is Markdown file, except of lines starting with %% will %% be stripped off. %%
%HEADER \"EMMO Relations\" level=1
In the language of OWL, relations are called properties. However, since they describe relationships between classes and individuals, and since properties have another meaning in EMMO, we simply call them relations.
Resource Description Framework (RDF) is a W3C standard that is widely used for describing information on the web and is one of the standards that OWL builds on. RDF expresses information in the form of subject-predicate-object triples. The subject and object are resources (i.e. the items to describe), while the predicate expresses a relationship between the subject and the object.
In OWL, the subject and object are classes or individuals (or data), while the predicate is a relation. An example of a relationship is the statement dog is_a animal. Here dog
is the subject, is_a
the predicate and animal
the object.
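As a small illustration, the same statement can be expressed with the Owlready2/EMMOntoPy dot notation, where is_a corresponds to Python class inheritance. The class names Dog and Animal and the ontology IRI below are assumptions made for this sketch, not entities of an actual ontology:

from owlready2 import Thing, get_ontology

onto = get_ontology("http://example.com/animals#")  # illustrative IRI

with onto:
    class Animal(Thing): pass   # the object of the triple
    class Dog(Animal): pass     # subclassing expresses the triple "dog is_a animal"

print(Dog.is_a)  # lists the parent classes, e.g. [animals.Animal]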
%%We distinguish between
%%active relations where the subject is acting on the object and
%%passive relations where the subject is acted on by the object.
OWL distinguishes between object properties, which link classes or individuals to classes or individuals, and data properties, which link individuals to data values. Since EMMO only deals with classes, we will only discuss object properties. However, in actual simulation or characterisation applications built on EMMO, datatype properties will be important.
The characteristics of the different properties are described by the following property axioms (a short code sketch below shows how some of them can be expressed with Owlready2):
rdfs:subPropertyOf
is used to define that a property is a subproperty of some other property. For instance, in the figure below showing the relation branch, we see that active_relation
is a subproperty of relation
. The rdfs:subPropertyOf
axioms form a taxonomy-like tree for relations.
owl:equivalentProperty
states that two properties have the same property extension.
owl:inverseOf
axioms relate active relations to their corresponding passive relations, and vice versa. The root relation relation
is its own inverse.
owl:FunctionalProperty
is a property that can have only one (unique) value y for each instance x, i.e. there cannot be two distinct values y1 and y2 such that the pairs (x,y1) and (x,y2) are both instances of this property. Both object properties and datatype properties can be declared as \"functional\".
owl:InverseFunctionalProperty
states that the object uniquely identifies the subject, i.e. there cannot be two distinct instances x1 and x2 such that both pairs (x1,y) and (x2,y) are instances of this property.
owl:TransitiveProperty
states that if a pair (x,y) is an instance of P, and the pair (y,z) is an instance of P, then we can infer that the pair (x,z) is also an instance of P.
owl:SymmetricProperty
states that if the pair (x,y) is an instance of P, then the pair (y,x) is also an instance of P. A popular example of a symmetric property is the siblingOf
relation.
rdfs:domain
specifies which classes the property applies to, i.e. the valid values of the subject in a subject-predicate-object triple.
rdfs:range
specifies the property extension, i.e. the valid values of the object in a subject-predicate-object triple.
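A minimal Owlready2 sketch of how some of these axioms can be declared in Python is shown below. The property and class names (Item, has_part, is_part_of, sibling_of) and the ontology IRI are illustrative assumptions, not the actual EMMO relations:

from owlready2 import (Thing, ObjectProperty, TransitiveProperty,
                       SymmetricProperty, get_ontology)

onto = get_ontology("http://example.com/demo#")  # illustrative IRI

with onto:
    class Item(Thing): pass

    # owl:TransitiveProperty with rdfs:domain and rdfs:range
    class has_part(ObjectProperty, TransitiveProperty):
        domain = [Item]
        range = [Item]

    # owl:inverseOf relates the passive relation to the active one
    class is_part_of(ObjectProperty, TransitiveProperty):
        inverse_property = has_part

    # owl:SymmetricProperty
    class sibling_of(ObjectProperty, SymmetricProperty):
        pass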
%HEADER \"Root of EMMO relations\" level=2 %BRANCHFIG EMMORelation caption=\"Top-level of the EMMO relation hierarchy.\" %ENTITY EMMORelation
"},{"location":"examples/emmodoc/relations/#branchdoc-mereotopological","title":"%%BRANCHDOC mereotopological","text":""},{"location":"examples/emmodoc/relations/#branchhead-mereotopological","title":"%BRANCHHEAD mereotopological","text":""},{"location":"examples/emmodoc/relations/#branch-mereotopological","title":"%BRANCH mereotopological","text":""},{"location":"examples/emmodoc/relations/#branchdoc-connected","title":"%BRANCHDOC connected","text":"%BRANCHDOC hasPart
%BRANCHDOC semiotical
"},{"location":"examples/jupyter-visualization/","title":"Visualise an ontology using pyctoscape in Jupyter Notebook","text":""},{"location":"examples/jupyter-visualization/#installation-instructions","title":"Installation instructions","text":"In a terminal, run:
cd /path/to/env/dirs\npython -m venv cytopy # cytopy is just an example name, you can choose the name you want\nsource cytopy/bin/activate\ncd /dir/to/EMMOntoPy/\npip install -e .\npip install jupyterlab\npython -m ipykernel install --user --name=cytopy\npip install ipywidgets\npip install nodejs # Note: requires that node.js and npm have already been installed!\npip install ipycytoscape pydotplus networkx\npip install --upgrade setuptools\njupyter labextension install @jupyter-widgets/jupyterlab-manager\n
"},{"location":"examples/jupyter-visualization/#test-the-notebook","title":"Test the notebook","text":"In a terminal, run:
jupyter-lab\n
That should start JupyterLab and open a new tab in your browser. In the side pane, select team40.ipynb
and run the notebook.
This directory contains an example xlsx file showing how to document ontology entities (classes, object properties, annotation properties and data properties) in an Excel workbook. This workbook can then be used to generate a new ontology or to update an already existing ontology with new entities (existing entities are not updated).
Please refer to the [documentation](https://emmo-repo.github.io/EMMOntoPy/latest/api_reference/ontopy/excelparser/) for a full explanation of the capabilities.
The file tool/onto.xlsx
contains examples of how to do things correctly as well as incorrectly. The tool will by default exit without generating the ontology if it detects incorrectly defined concepts. However, if the argument force is set to True, it will skip concepts that are erroneously defined and generate the ontology from what is available.
To run the tool directly:
cd tool # Since the Excel file provides a relative path to an imported ontology\nexcel2onto onto.xlsx # This will fail because onto.xlsx deliberately contains incorrectly defined concepts\nexcel2onto --force onto.xlsx # This skips the erroneous concepts and generates the ontology\n
We suggest developing your Excel sheet without errors, since once it grows large it becomes difficult to see what is wrong and what is correct. It is also possible to generate the ontology from Python; look at the script make_onto.py for an example, or see the sketch below.
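A minimal sketch of doing this from Python is given below. It assumes that ontopy.excelparser exposes create_ontology_from_excel as described in the API reference linked above; the exact signature and return values may differ between EMMOntoPy versions, so please check the documentation:

from ontopy.excelparser import create_ontology_from_excel

# force=True skips erroneously defined concepts, like the --force flag of excel2onto
onto, catalog, errors = create_ontology_from_excel("onto.xlsx", force=True)

# Save the generated ontology in turtle format (the overwrite argument is assumed
# to be available in your EMMOntoPy version; adjust if it differs)
onto.save("onto.ttl", format="turtle", overwrite=True)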
That should be it. Good luck!
"}]} \ No newline at end of file diff --git a/latest/sitemap.xml b/latest/sitemap.xml index 33423ba51..205f77495 100644 --- a/latest/sitemap.xml +++ b/latest/sitemap.xml @@ -2,177 +2,177 @@