diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index bf4771b676..4b7f520497 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -271,7 +271,7 @@ jobs:
       - name: "Upload Documentation Build log"
         uses: actions/upload-artifact@v3
         with:
-          name: doc-${{inputs.PACKAGE_NAME}}-log
+          name: doc-${{env.PACKAGE_NAME}}-log
           path: docs/*.txt
         if: always()
@@ -379,7 +379,6 @@ jobs:
   retro:
     name: "Retro-compatibility"
-    if: startsWith(github.head_ref, 'master') || contains(github.head_ref, 'release') || contains(github.head_ref, 'retro') || startsWith(github.ref, 'refs/tags/v')
     runs-on: ${{ matrix.os }}
     strategy:
       fail-fast: false
diff --git a/docs/source/_static/simple_example.rst b/docs/source/_static/simple_example.rst
index 5ddcedbecc..0856880be2 100644
--- a/docs/source/_static/simple_example.rst
+++ b/docs/source/_static/simple_example.rst
@@ -1,4 +1,4 @@
-Here's how you would open a result file generated by MAPDL (or another ANSYS solver) and
+Here's how you would open a result file generated by Ansys MAPDL (or another Ansys solver) and
 extract results:
 
 .. code-block:: default
diff --git a/docs/source/api/index.rst b/docs/source/api/index.rst
index 36443d5c95..2591e619ba 100644
--- a/docs/source/api/index.rst
+++ b/docs/source/api/index.rst
@@ -1,10 +1,10 @@
 .. _ref_api_section:
 
-================
-APIs
-================
+=============
+API reference
+=============
 
 .. toctree::
-   :maxdepth: 1
-   :caption: APIs
+   :maxdepth: 2
+   :caption: API reference
 
    ansys.dpf.core
diff --git a/docs/source/contributing.rst b/docs/source/contributing.rst
index 63eada455c..8e521e73f7 100644
--- a/docs/source/contributing.rst
+++ b/docs/source/contributing.rst
@@ -35,4 +35,4 @@ To reach the PyAnsys support team, email `pyansys.support@ansys.com `_.
+`PyDPF-Core Documentation `_.
diff --git a/docs/source/getting_started/compatibility.rst b/docs/source/getting_started/compatibility.rst
index 6757c0a37c..dec19a4034 100644
--- a/docs/source/getting_started/compatibility.rst
+++ b/docs/source/getting_started/compatibility.rst
@@ -29,11 +29,11 @@ should also be synchronized with the server version.
    :widths: 20 20 20 20 20
    :header-rows: 1
 
-   * - Ans.Dpf.Grpc.exe server version
-     - ansys.dpf.gatebin binaries Python package version
-     - ansys.dpf.gate Python package version
-     - ansys.grpc.dpf Python package version
-     - ansys.dpf.core Python package version
+   * - ``Ans.Dpf.Grpc.exe`` server version
+     - ``ansys.dpf.gatebin`` binaries Python module version
+     - ``ansys.dpf.gate`` Python module version
+     - ``ansys.grpc.dpf`` Python module version
+     - ``ansys.dpf.core`` Python module version
    * - 5.0 (Ansys 2023 R1)
      - 0.2.0 and later
      - 0.2.0 and later
@@ -67,6 +67,6 @@ Environment variable
 The ``start_local_server`` method uses the ``Ans.Dpf.Grpc.bat`` file or
 ``Ans.Dpf.Grpc.sh`` file to start the server. Ensure that the ``AWP_ROOT{VER}``
 environment variable is set to your installed Ansys version. For example, if Ansys
-2022 installation is installed, ensure that the ``AWP_ROOT222`` environment
+2022 R2 is installed, ensure that the ``AWP_ROOT222`` environment
 variable is set to the path for this Ansys installation.
diff --git a/docs/source/getting_started/dependencies.rst b/docs/source/getting_started/dependencies.rst
index 1882ee4b82..f40706fa52 100644
--- a/docs/source/getting_started/dependencies.rst
+++ b/docs/source/getting_started/dependencies.rst
@@ -12,10 +12,12 @@ installed. Package dependencies follow:
 
 - `ansys.dpf.gate `_, which is the gate
   to the DPF C API or Python gRPC API.
  The gate depends on the server configuration:
+
   - `ansys.grpc.dpf `_ is the gRPC code
     generated from protobuf files.
   - `ansys.dpf.gatebin `_ is the
     operating system-specific binaries with DPF C APIs.
+
 - `psutil `_
 - `tqdm `_
 - `packaging `_
diff --git a/docs/source/getting_started/docker.rst b/docs/source/getting_started/docker.rst
index 0e21eaef81..65b575a29b 100644
--- a/docs/source/getting_started/docker.rst
+++ b/docs/source/getting_started/docker.rst
@@ -52,7 +52,7 @@ Install the DPF image
 
 Note that the preceding command shares the current directory to the ``/dpf``
-directory contained within the image. This is necessary as the DPF
+directory contained within the image. This is necessary as the DPF
 binary within the image must access the files within the image itself.
 Any files that you want to have DPF read must be placed in ``pwd``.
 You can map other directories as needed, but these
diff --git a/docs/source/getting_started/index.rst b/docs/source/getting_started/index.rst
index 68d74b4039..e435a09e80 100755
--- a/docs/source/getting_started/index.rst
+++ b/docs/source/getting_started/index.rst
@@ -10,9 +10,8 @@ Ansys 2021 R1 or later. For more information on getting a licensed copy of
 Ansys visit the `Ansys website `_.
 
 .. toctree::
-   :hidden:
-   :maxdepth: 2
-
+   :maxdepth: 3
+
    compatibility
    install
    dependencies
diff --git a/docs/source/getting_started/install.rst b/docs/source/getting_started/install.rst
index 5e1b0f454f..0046bddcfe 100644
--- a/docs/source/getting_started/install.rst
+++ b/docs/source/getting_started/install.rst
@@ -4,92 +4,53 @@ Installation
 ************
 
-PIP installation
-----------------
+Install using ``pip``
+---------------------
 
-To use PyDPF-Core with Ansys 2021 R2 or later, install the latest version
-of PyDPF-Core with:
+`pip `_ is the package installer for Python.
+
+To use PyDPF-Core with Ansys 2021 R2 or later, install the latest version
+with:
 
 .. code::
 
    pip install ansys-dpf-core
 
-To use PyDPF-Core with Ansys 2021 R1, install a 0.2.* PyDPF-Core version with:
+To use PyDPF-Core with Ansys 2021 R1, install the latest 0.2 version
+with:
 
 .. code::
 
   pip install ansys-dpf-core<0.3.0
 
-Wheel file installation
------------------------
-If you are unable to install PyDPF-Core on the host machine due to
-network isolation, download the latest wheel file or the wheel file
-for a specific release from `PyDPF-Core
-GitHub `_ or
-`PyDPF-Core PyPi `_.
-
-Tryout installation
--------------------
-
-For a quick tryout installation, use:
-
-.. code-block:: default
-
-    from ansys.dpf.core import Model
-    from ansys.dpf.core import examples
-    model = Model(examples.simple_bar)
-    print(model)
+Install using a wheel file
+--------------------------
+If you are unable to install PyDPF-Core on the host machine due to
+network isolation, download the latest wheel file from `PyDPF-Core
+GitHub `_ or
+`PyDPF-Core PyPi `_.
+Install for a quick tryout
+--------------------------
 
-.. rst-class:: sphx-glr-script-out
+For a quick tryout, use:
 
- Out:
-
- .. code-block:: none
+.. code::
-
-    DPF Model
-    ------------------------------
-    Static analysis
-    Unit system: Metric (m, kg, N, s, V, A)
-    Physics Type: Mechanical
-    Available results:
-         - displacement: Nodal Displacement
-         - element_nodal_forces: ElementalNodal Element nodal Forces
-         - elemental_volume: Elemental Volume
-         - stiffness_matrix_energy: Elemental Energy-stiffness matrix
-         - artificial_hourglass_energy: Elemental Hourglass Energy
-         - thermal_dissipation_energy: Elemental thermal dissipation energy
-         - kinetic_energy: Elemental Kinetic Energy
-         - co_energy: Elemental co-energy
-         - incremental_energy: Elemental incremental energy
-         - structural_temperature: ElementalNodal Temperature
-    ------------------------------
-    DPF Meshed Region:
-      3751 nodes
-      3000 elements
-      Unit: m
-      With solid (3D) elements
-    ------------------------------
-    DPF Time/Freq Support:
-      Number of sets: 1
-    Cumulative     Time (s)       LoadStep       Substep
-    1              1.000000       1              1
-
+    from ansys.dpf.core import Model
+    from ansys.dpf.core import examples
+    model = Model(examples.simple_bar)
+    print(model)
 
-Development mode installation
------------------------------
+Install in development mode
+---------------------------
 
 If you want to edit and potentially contribute to PyDPF-Core,
-clone the repository and install it using pip with the ``-e``
+clone the repository and install it using ``pip`` with the ``-e``
 development flag:
 
-.. code::
-
-    git clone https://github.com/pyansys/pydpf-core
-    cd pydpf-core
-    pip install -e .
-
+.. include:: ../pydpf-post_clone_install.rst
diff --git a/docs/source/images/drawings/Workflow2.png b/docs/source/images/drawings/Workflow2.png
index cd90813a63..c9e2caea49 100644
Binary files a/docs/source/images/drawings/Workflow2.png and b/docs/source/images/drawings/Workflow2.png differ
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 9cf99227c9..29a9db74c7 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -2,14 +2,14 @@
 PyDPF-Core
 ==========
 
-The Data Processing Framework (**DPF**) provides numerical simulation
+The Data Processing Framework (DPF) provides numerical simulation
 users and engineers with a toolbox for accessing and transforming simulation
 data. With DPF, you can perform complex preprocessing or postprocessing of
 large amounts of simulation data within a simulation workflow.
 
 DPF is an independent, physics-agnostic tool that you can plug into many
 apps for both data input and data output, including visualization and
-result plots.It can access data from solver result files and other neutral
+result plots. It can access data from solver result files and other neutral
 formats, such as CSV, HDF5, and VTK files.
 
 Using the many DPF operators that are available, you can manipulate and
 transform this data.
 a modular and easy-to-use tool with a large range of capabilities.
 
 .. image:: images/drawings/dpf-flow.png
   :width: 670
-  :alt: DPF FLow
-
+  :alt: DPF flow
+
 The ``ansys.dpf.core`` package provides a Python interface to DPF, enabling
 rapid postprocessing of a variety of Ansys file formats and physics solutions
 without ever leaving the Python environment.
 
+
 Brief demo
 ~~~~~~~~~~
+Here is how you open a result file generated by MAPDL (or another Ansys solver)
+and extract results:
+
+.. code:: python
+
+    >>> from ansys.dpf.core import Model
+    >>> from ansys.dpf.core import examples
+    >>> model = Model(examples.simple_bar)
+    >>> print(model)
+
-.. include:: _static/simple_example.rst
+Here is how you plot displacement results:
-For comprehesive demos, see :ref:`gallery`.
+.. code:: python
+
+    >>> disp = model.results.displacement().X()
+    >>> model.metadata.meshed_region.plot(disp.outputs.fields_container())
+
+For comprehensive demos, see :ref:`gallery`.
 
 Key features
 ~~~~~~~~~~~~
-**Computation efficiency**
+**Computational efficiency**
 
-DPF is a modern framework based on new hardware architectures.
-Thanks to continued development, new capabilities are frequently added.
+DPF is a modern framework based on new hardware architectures. Thanks
+to continued development, new capabilities are frequently added.
 
 **Generic interface**
 
-DPF is physics-agnostic, which means that its use is not limited to a
-particular field, physics solution, or file format.
+DPF is physics-agnostic, which means that its use is not limited to a particular
+field, physics solution, or file format.
 
 **Extensibility and customization**
-
-DPF is developed around two core entities:
+DPF is developed around two core entities:
 
 - Data represented as a *field*
-- An *operator* to act upon this data
+- An *operator* to act upon this data
 
-Each DPF capability is developed through operators that allow for
-componentization of the framework. Because DPF is plugin-based, new
-features or formats can be easily added.
+Each DPF capability is developed through operators that allow for componentization
+of the framework. Because DPF is plugin-based, new features or formats can be easily added.
 
 .. toctree::
-   :hidden:
    :maxdepth: 2
-
+   :caption: Getting Started
+   :hidden:
 
   getting_started/index
   user_guide/index
   api/index
-  operator_reference
   examples/index
   contributing
diff --git a/docs/source/operator_reference.rst b/docs/source/operator_reference.rst
index 5290ebf48d..04558cd5ca 100644
--- a/docs/source/operator_reference.rst
+++ b/docs/source/operator_reference.rst
@@ -4,7 +4,7 @@ Operators
 =========
 
-Loading operators...
+Loading operators.
 
 .. raw:: html
diff --git a/docs/source/user_guide/concepts.rst b/docs/source/user_guide/concepts.rst
index 0cc50b419f..e27ee7dfca 100644
--- a/docs/source/user_guide/concepts.rst
+++ b/docs/source/user_guide/concepts.rst
@@ -1,70 +1,96 @@
 .. _user_guide_concepts:
 
-=========================
- Concepts and Terminology
-=========================
-DPF sees ``fields of data``, not physical results, making it a very versatile
-tool that can be used in a variety of ways across teams, projects,
-and simulations.
-
-The :ref:`data source` is one or more files in which analysis results
-can be found.
-
-A :ref:`field` is the main simulation data container.
-For transient/harmonic/modal or multi-step static analyses,
-a :ref:`field container` is used to hold a set of fields
-(one field for each time step, each frequency).
-
-The physical entity with which the field is associated is called
-the ``support``. For example, the support can be a mesh,
-geometrical entity, or :ref:`time or frequency values`.
-
-In most cases you will not want to work with the entire set of data,
-but rather a subset of that data. To achieve this you define :ref:`scoping`.
-Scoping is a subset of the model’s support. Typically, scoping can
-represent node IDs, element IDs, time steps, frequencies, joints, and so on.
-Scoping describes a spatial and/or temporal subset on which the field is scoped.
-
-In DPF, field data is always associated with its scoping and support, making
-the field a self-describing piece of data. For example, in a field of nodal
-displacement, the *displacement* is the simulation data and the associated
-*nodes* are the scoping. A field can also be defined by its dimensionality,
-unit of data, and location.
-
-The ``location`` is the type of topology associated with the data container.
-DPF uses three different spatial locations for finite element data: Nodal,
-Elemental, and ElementalNodal. A ``Nodal`` location describes data computed
-on the nodes, while an ``Elemental`` location describes data computed on the
-element itself. These :ref:`nodes` and :ref:`elements` are identified by an ID — typically
-a node or element number. An ``ElementalNodal`` location describes data
-defined on the nodes of the elements, but you must use the Element ID to
-retrieve it. To achieve this you define ``Elemental scoping`` or ``Nodal scoping``.
-
-The following example summarizes the concepts above:
+==================
+Terms and concepts
+==================
+DPF sees *fields of data*, not physical results. This makes DPF a
+very versatile tool that can be used across teams, projects, and
+simulations.
+
+Key terms
+---------
+Here are descriptions for key DPF terms:
+
+- **Data source:** One or more files containing analysis results.
+- **Field:** Main simulation data container.
+- **Field container:** For a transient, harmonic, modal, or multi-step
+  static analysis, a set of fields, with one field for each time step
+  or frequency.
+- **Location:** Type of topology associated with the data container. DPF
+  uses three different spatial locations for finite element data: ``Nodal``,
+  ``Elemental``, and ``ElementalNodal``.
+- **Operators:** Objects that are used to create and transform the data.
+  An operator is composed of a *core* and *pins*. The core handles the
+  calculation, and the pins provide input data to and output data from
+  the operator.
+- **Scoping:** Spatial and/or temporal subset of a model's support.
+- **Support:** Physical entity that the field is associated with. For example,
+  the support can be a mesh, geometrical entity, or time or frequency values.
+- **Workflow:** Global entity that is used to evaluate the data produced
+  by chained operators.
+
+Scoping
+-------
+In most cases, you do not want to work with an entire set of data
+but rather with a subset of this data. To achieve this, you define
+a *scoping*, which is a subset of the model's support.
+Typically, scoping can represent node IDs, element IDs, time steps,
+frequencies, and joints. Scoping describes a spatial and/or temporal
+subset that the field is scoped on.
+
+Field data
+----------
+In DPF, field data is always associated with its scoping and support, making
+the *field* a self-describing piece of data. For example, in a field of nodal
+displacement, the *displacement* is the simulation data, and the associated
+*nodes* are the scoping. A field can also be defined by its dimensionality,
+unit of data, and *location*.
+
+Location
+--------
+The location is the type of topology associated with the data container. For
+finite element data, the location is one of three spatial locations: ``Nodal``,
+``Elemental``, or ``ElementalNodal``.
+
+- A ``Nodal`` location describes data computed on the nodes. A node is identified
+  by an ID, which is typically a node number.
+- An ``Elemental`` location describes data computed on the element itself. An element
+  is identified by an ID, which is typically an element number.
+- An ``ElementalNodal`` location describes data defined on the nodes of the elements.
+  To retrieve an elemental node, you must use the ID for the element. To achieve
+  this, you define an *elemental scoping* or *nodal scoping*.
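
As an illustration only, here is a minimal sketch of how a scoping might be
declared with PyDPF-Core. The node IDs are arbitrary, and a running DPF
server is assumed:

.. code:: python

    from ansys.dpf import core as dpf

    # A spatial subset: restrict operations to three nodes (IDs are illustrative).
    scoping = dpf.Scoping()
    scoping.location = dpf.locations.nodal
    scoping.ids = [1, 2, 3]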
+
+Concept summary
+---------------
+This image summarizes the preceding concepts:
 
 .. image:: ../images/drawings/field-breakdown.png
 
-:ref:`Operators` are the only object used to create and transform the data.
-An operator is composed of a “core” that handles the calculation, and input
-and output “pins” (think of an integrated circuit in electronics). These pins
-enable you to provide input data to each operator. When the operator is
-evaluated, it processes the input information to compute its output.
+
+Operators
+---------
+You use :ref:`ref_dpf_operators_reference` to create and transform the data. An
+*operator* is composed of a core and input and output pins.
+
+- The core handles the calculation.
+- The input and output pins, like those in an integrated circuit in electronics,
+  submit data to the operator and output the computed result from the operator.
 
 .. image:: ../images/drawings/OperatorPins.png
 
-Operators can be chained together to create :ref:`workflows`. A workflow is a
-global entity that you use to evaluate the data produced by the operators.
-It needs input information and computes the requested output information.
+Workflows
+---------
+You can chain operators together to create a *workflow*, which is a global entity
+that you use to evaluate data produced by operators. A workflow requires inputs
+to operators, which compute the requested outputs.
 
 Think of a workflow as a black box in which some operators are chained,
-computing the information for which it is made:
+computing the information for which the workflow is made:
 
 .. image:: ../images/drawings/Workflow1.png
 
-The following example shows operators that have been chained together
-to create a total deformation workflow:
+The following image shows operators that have been chained together to create a
+total deformation workflow. You can use this workflow in any simulation
+workflow with any data sources as inputs.
 
 .. image:: ../images/drawings/Workflow2.png
-
-This workflow can be used in any simulation workflow using any
-data source as input.
\ No newline at end of file
diff --git a/docs/source/user_guide/custom_operators.rst b/docs/source/user_guide/custom_operators.rst
index c30ff99a55..88125b3770 100644
--- a/docs/source/user_guide/custom_operators.rst
+++ b/docs/source/user_guide/custom_operators.rst
@@ -1,182 +1,227 @@
 .. _user_guide_custom_operators:
 
 ================
-Custom Operators
+Custom operators
 ================
 
-Starting with Ansys 2022 R2, DPF offers the capability to create user-defined Operators in CPython.
-Writing Operators allows to wrap python routines in a DPF compliant way so that it can be accessed
-the same way as a native :class:`ansys.dpf.core.dpf_operator.Operator` in pyDPF or in any supported
-client API.
-With this feature, DPF can be used as a development tool offering:
+In Ansys 2022 R2 and later, you can create custom operators in CPython. Creating custom operators
+consists of wrapping Python routines in a DPF-compliant way so that you can access them in the same way
+as you access the native operators in the :class:`ansys.dpf.core.dpf_operator.Operator` class in
+PyDPF-Core or in any supported client API.
 
-- Accessibility: a single script defines an Operator and its documentation.
+With support for custom operators, PyDPF-Core becomes a development tool offering:
 
-- Componentization: Operators with similar applications can be grouped in python packages named ``Plugins``.
+- **Accessibility:** A simple script can define a basic operator plugin.
-
-- Easy Distribution: standard python tools can be used to package, upload and download the user-defined operators.
+- **Componentization:** Operators with similar applications can be grouped in Python plug-in packages.
 
-- Dependencies management: third party python modules can be added to the python package.
+- **Easy distribution:** Standard Python tools can be used to package, upload, and download custom operators.
 
-- Reusability: a documented and packaged Operator can be reused in an infinite number of workflows.
+- **Dependency management:** Third-party Python modules can be added to the Python package.
 
-- Remotable and parallel computing: native DPF's capabilities are inherited by the user-defined Operators.
+- **Reusability:** A documented and packaged operator can be reused in an infinite number of workflows.
 
+- **Remotable and parallel computing:** Native DPF capabilities are inherited by custom operators.
 
-A prerequisite to writing user-defined Operators is to be comfortable with the concept of Operator (:ref:`ref_user_guide_operators`).
+The only prerequisite for creating custom operators is to be familiar with native operators.
+For more information, see :ref:`ref_user_guide_operators`.
 
+Install module
+--------------
 
-Installation
-------------
+Once an Ansys-unified installation is complete, you must install the ``ansys-dpf-core`` module in the Ansys
+installer's Python interpreter.
 
-Once Ansys unified installation completed, ansys-dpf-core module needs to be installed in the Ansys installer's Python
-interpreter. Run this :download:`powershell script ` for windows
-or this :download:`shell script ` for linux with the optional
-arguments:
+#. Download the script for your operating system:
 
-- -awp_root : path to Ansys root installation path (usually ending with Ansys Inc/v222), defaults to environment variable AWP_ROOT222
-- -pip_args : optional arguments that add to pip command (ie. --extra-index-url, --trusted-host,...)
+   - For Windows, download this :download:`powershell script `.
+   - For Linux, download this :download:`shell script `.
 
-If you wish to uninstall ansys-dpf-core module of the Ansys installation, run this :download:`powershell script ` for windows
-or this :download:`shell script ` for linux with the optional
-argument:
+#. Run the downloaded script for installing with optional arguments:
 
-- -awp_root : path to Ansys root installation path (usually ending with Ansys Inc/v222), defaults to environment variable AWP_ROOT222
+   - ``-awp_root``: Path to the Ansys root installation folder. For example, the 2022 R2 installation folder ends
+     with ``Ansys Inc/v222``, and the default environment variable is ``AWP_ROOT222``.
+   - ``-pip_args``: Optional arguments to add to the ``pip`` command. For example, ``--extra-index-url`` or
+     ``--trusted-host``.
 
+If you ever want to uninstall the ``ansys-dpf-core`` module from the Ansys installation, you can do so.
 
-Writing the Operator
---------------------
+#. Download the script for your operating system:
+
+   - For Windows, download this :download:`powershell script `.
+   - For Linux, download this :download:`shell script `.
+
+#. Run the downloaded script for uninstalling with the optional argument:
+
+   - ``-awp_root``: Path to the Ansys root installation folder. For example, the 2022 R2 installation folder ends
+     with ``Ansys Inc/v222``, and the default environment variable is ``AWP_ROOT222``.
 
-Basic Implementation
-~~~~~~~~~~~~~~~~~~~~
-To write the simplest DPF python plugins, a single python script is necessary.
-An Operator implementation deriving from :class:`ansys.dpf.core.custom_operator.CustomOperatorBase`
-and a call to :py:func:`ansys.dpf.core.custom_operator.record_operator` are the 2 necessary steps to create a plugin.
+Create operators
+----------------
+You can create a basic operator plugin or a plug-in package with multiple operators.
 
+Basic operator plugin
+~~~~~~~~~~~~~~~~~~~~~
+To create a basic operator plugin, you write a simple Python script. An operator implementation
+derives from the :class:`ansys.dpf.core.custom_operator.CustomOperatorBase` class and includes a call to
+the :func:`ansys.dpf.core.custom_operator.record_operator` function.
+
+This example script shows how you create a basic operator plugin:
 
 .. literalinclude:: custom_operator_example.py
 
+
 .. code-block::
 
     def load_operators(*args):
         record_operator(CustomOperator, *args)
 
-Input and output pins descriptions take a dictionary mapping pin numbers to their
-:class:`ansys.dpf.core.operator_specification.PinSpecification`. PinSpecification takes a name (used in the documentation,
-and in the code generation), a list of supported types, a document, whether the pin is optional and/or ellipsis (meaning
-the pin specification is valid for pins going from pin number to infinity).
-:class:`ansys.dpf.core.operator_specification.SpecificationProperties` allows to specify other properties of the
-Operator like its user name (mandatory) or its category (used in the documentation,
-and in the code generation).
+In the various properties for the class, you specify the following:
 
-See example of Custom Operators implementations in the Examples section :ref:`python_operators`.
+- Name for the custom operator
+- Description of what the operator does
+- Dictionary for each input and output pin, which includes the name, a list of supported types, a description,
+  and whether it is optional and/or ellipsis (meaning that the specification is valid for pins going from pin
+  number *x* to infinity)
+- List for operator properties, including name to use in the documentation and code generation and the
+  operator category
 
+For comprehensive examples on writing operator plugins, see :ref:`python_operators`.
 
-Package Custom Operators
-~~~~~~~~~~~~~~~~~~~~~~~~
-To create a DPF plugin with several Operators or with complex routines, python packages of Operators can be created.
-The benefits of writing packages instead of simple scripts are:
+Plug-in package with multiple operators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+To create a plug-in package with multiple operators or with complex routines, you write a
+Python package. The benefits of writing packages rather than simple scripts are:
 
-- componentization (split the code in several python modules or files).
-- distribution (with packages, standard python tools can be used to upload and download packages).
-- documentation (READMEs, docs, tests and examples can be added to the package).
+- **Componentization:** You can split the code into several Python modules or files.
+- **Distribution:** You can use standard Python tools to upload and download packages.
+- **Documentation:** You can add README files, documentation, tests, and examples to the package.
 
-A plugin as a package can be a folder with a structure like:
+A plug-in package consists of a folder with the necessary files. Assume
+that the name of your plug-in package is ``custom_plugin``. A folder with this name would
+contain four files:
+
+- ``__init__.py``
+- ``operators.py``
+- ``operators_loader.py``
+- ``common.py``
-.. card:: custom_plugin
+
-   .. dropdown:: __init__.py
+**__init__.py file**
 
-      .. code-block:: default
+The ``__init__.py`` file contains this code::
 
-         from operators_loader import load_operators
+    from operators_loader import load_operators
+
-   .. dropdown:: operators.py
+**operators.py file**
 
-      .. literalinclude:: custom_operator_example.py
+The ``operators.py`` file contains code like this:
 
-   .. dropdown:: operators_loader.py
+.. literalinclude:: custom_operator_example.py
 
-      .. code-block:: default
+**operators_loader.py file**
 
-         from custom_plugin import operators
-         from ansys.dpf.core.custom_operator import record_operator
+The ``operators_loader.py`` file contains code like this::
 
+    from custom_plugin import operators
+    from ansys.dpf.core.custom_operator import record_operator
 
-         def load_operators(*args):
-            record_operator(operators.CustomOperator, *args)
 
-   .. dropdown:: common.py
+    def load_operators(*args):
+        record_operator(operators.CustomOperator, *args)
+
-      .. code-block:: default
+**common.py file**
 
-         #write needed python routines as classes and functions here.
+The ``common.py`` file contains the Python routines as classes and functions::
 
-Add Third Party Dependencies
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+    #write needed python routines as classes and functions here.
+
+Third-party dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. include:: custom_operators_deps.rst
 
-A plugin as a package with dependencies can be a folder with a structure like:
 
-.. card:: plugins
+Assume once again that the name of your plug-in package is ``custom_plugin``.
+A folder with this name would contain these files:
 
-   .. card:: custom_plugin
+- ``__init__.py``
+- ``operators.py``
+- ``operators_loader.py``
+- ``common.py``
+- ``requirements.txt``
+- ``winx64.zip``
+- ``linx64.zip``
+- ``custom_plugin.xml``
 
-      .. dropdown:: __init__.py
+**__init__.py file**
 
-         .. code-block:: default
+The ``__init__.py`` file contains this code::
 
-            from operators_loader import load_operators
+    from operators_loader import load_operators
 
-      .. dropdown:: operators.py
-         .. literalinclude:: custom_operator_example.py
+**operators.py file**
 
-      .. dropdown:: operators_loader.py
+The ``operators.py`` file contains code like this:
 
-         .. code-block:: default
+.. literalinclude:: custom_operator_example.py
 
-            from custom_plugin import operators
-            from ansys.dpf.core.custom_operator import record_operator
+**operators_loader.py file**
 
-            def load_operators(*args):
+The ``operators_loader.py`` file contains code like this::
 
+    from custom_plugin import operators
+    from ansys.dpf.core.custom_operator import record_operator
+
+
+    def load_operators(*args):
+        record_operator(operators.CustomOperator, *args)
-               record_operator(operators.CustomOperator, *args)
 
-      .. dropdown:: common.py
+**common.py file**
 
-         .. code-block:: default
+The ``common.py`` file contains the Python routines as classes and functions::
 
-            #write needed python routines as classes and functions here.
+    #write needed python routines as classes and functions here.
 
-      .. dropdown:: requirements.txt
-         .. literalinclude:: /examples/07-python-operators/plugins/gltf_plugin/requirements.txt
+**requirements.txt file**
 
+The ``requirements.txt`` file contains code like this:
+
+.. literalinclude:: /examples/07-python-operators/plugins/gltf_plugin/requirements.txt
 
-      .. dropdown:: assets
-         - winx64.zip
-         - linx64.zip
+The ZIP files for Windows and Linux are included as assets:
+
+- winx64.zip
+- linx64.zip
 
-      .. dropdown:: custom_plugin.xml
-         .. literalinclude:: custom_plugin.xml
-            :language: xml
 
+**custom_plugin.xml file**
+
+The ``custom_plugin.xml`` file contains code like this:
+
+.. literalinclude:: custom_plugin.xml
+   :language: xml
 
-Use the Custom Operators
-------------------------
-Once a python plugin is written, it can be loaded with the function :func:`ansys.dpf.core.core.load_library`
-taking as first argument the path to the directory of the plugin, as second argument ``py_`` + any name identifying the plugin,
-and as last argument the function’s name used to record operators.
+Use custom operators
+--------------------
+
+Once a custom operator is created, you can use the :func:`ansys.dpf.core.core.load_library` method to load it.
+The first argument is the path to the directory with the plugin. The second argument is ``py_`` plus any name
+identifying the plugin. The last argument is the function name for recording operators.
 
-If a single script has been used to create the plugin, then the second argument should be ``py_`` + name of the python file:
+For a plugin that is a single script, the second argument should be ``py_`` plus the name of the Python file:
 
 .. code::
 
@@ -185,7 +230,7 @@ If a single script has been used to create the plugin, then the second argument
         "py_custom_plugin", #if the load_operators function is defined in path/to/plugins/custom_plugin.py
         "load_operators")
 
-If a python package was written, then the second argument should be ``_py`` + any name:
+For a plug-in package, the second argument should be ``py_`` plus any name:
 
 .. code::
 
@@ -194,14 +239,12 @@ If a python package was written, then the second argument should be ``_py`` + an
         "py_my_custom_plugin", #if the load_operators function is defined in path/to/plugins/custom_plugin/__init__.py
         "load_operators")
 
-Once the plugin loaded, the Operator can be instantiated with:
+Once the plugin is loaded, you can instantiate the custom operator:
 
 .. code::
 
     new_operator = dpf.Operator("custom_operator") # if "custom_operator" is what is returned by the ``name`` property
-
-
 References
 ----------
 See the API reference at :ref:`ref_custom_operator` and examples of Custom Operators implementations in :ref:`python_operators`.
diff --git a/docs/source/user_guide/custom_operators_deps.rst b/docs/source/user_guide/custom_operators_deps.rst
index dde916ad55..91befdc52c 100644
--- a/docs/source/user_guide/custom_operators_deps.rst
+++ b/docs/source/user_guide/custom_operators_deps.rst
@@ -1,32 +1,53 @@
-To add third party modules as dependencies to a custom DPF python plugin, a folder or zip file
-with the sites of the dependencies needs to be created and referenced in an xml located next to the plugin's folder
-and having the same name as the plugin plus the ``.xml`` extension. The ``site`` python module is used by DPF when
-calling :py:func:`ansys.dpf.core.core.load_library` function to add these custom sites to the python interpreter path.
-To create these custom sites, the requirements of the custom plugin should be installed in a python virtual
-environment, the site-packages (with unnecessary folders removed) should be zipped and put with the plugin. The
-path to this zip should be referenced in the xml as done above.
+To add third-party modules as dependencies to a plug-in package, you should create
+and reference a folder or ZIP file with the sites of the dependencies in an XML file
+located next to the folder for the plug-in package. The XML file must have the same
+name as the plug-in package plus an ``.xml`` extension.
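
For orientation, here is one possible on-disk layout consistent with this
description. The names reuse the ``custom_plugin`` example from this guide, and
the ``assets`` subfolder is only a convention taken from the script invocations
shown below, not a requirement:

.. code-block:: text

    plugins/
    ├── custom_plugin/            # the plug-in package
    │   ├── __init__.py
    │   ├── operators.py
    │   ├── operators_loader.py
    │   ├── common.py
    │   ├── requirements.txt
    │   └── assets/
    │       ├── winx64.zip
    │       └── linx64.zip
    └── custom_plugin.xml         # same name as the package, next to its folder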
-To simplify this step, a requirements file can be added in the plugin, like:
+When the :py:func:`ansys.dpf.core.core.load_library` method is called, DPF uses the
+``site`` Python module to add custom sites to the path for the Python interpreter.
 
-.. dropdown:: requirements.txt
-   .. literalinclude:: /examples/07-python-operators/plugins/gltf_plugin/requirements.txt
+To create these custom sites, do the following:
 
-And this :download:`powershell script ` for windows or
-this :download:`shell script ` can be ran with the mandatory arguments:
+#. Install the requirements of the plug-in package in a Python virtual environment.
+#. Remove unnecessary folders from the site packages and compress them to a ZIP file.
+#. Place the ZIP file in the plug-in package.
+#. Reference the path to the ZIP file in the XML file as indicated above.
 
-- -pluginpath : path to the folder of the plugin.
-- -zippath : output zip file name.
+To simplify this step, you can add a requirements file in the plug-in package:
 
-optional arguments are:
 
 .. literalinclude:: /examples/07-python-operators/plugins/gltf_plugin/requirements.txt
 
-- -pythonexe : path to a python executable of your choice.
-- -tempfolder : path to a temporary folder to work on, default is the environment variable ``TEMP`` on Windows and /tmp/ on Linux.
 
-For windows powershell, call::
+For this approach, do the following:
 
-   create_sites_for_python_operators.ps1 -pluginpath /path/to/plugin -zippath /path/to/plugin/assets/winx64.zip
+#. Download the script for your operating system:
 
-For linux shell, call::
+   - For Windows, download this :download:`PowerShell script `.
+   - For Linux, download this :download:`Shell script `.
+
+#. Run the downloaded script with the mandatory arguments:
+
+   - ``-pluginpath``: Path to the folder with the plug-in package.
+   - ``-zippath``: Path and name for the ZIP file.
+
+   Optional arguments are:
+
+   - ``-pythonexe``: Path to a Python executable of your choice.
+   - ``-tempfolder``: Path to a temporary folder to work in. The default is the environment variable
+     ``TEMP`` on Windows and ``/tmp/`` on Linux.
+
+#. Run the command for your operating system:
+
+   - From Windows PowerShell, run:
+
+     .. code-block::
+
+        create_sites_for_python_operators.ps1 -pluginpath /path/to/plugin -zippath /path/to/plugin/assets/winx64.zip
+
+   - From Linux Shell, run:
+
+     .. code-block::
+
+        create_sites_for_python_operators.sh -pluginpath /path/to/plugin -zippath /path/to/plugin/assets/linx64.zip
 
-   create_sites_for_python_operators.sh -pluginpath /path/to/plugin -zippath /path/to/plugin/assets/linx64.zip
diff --git a/docs/source/user_guide/dpf_concepts.rst b/docs/source/user_guide/dpf_concepts.rst
index 73954d4853..9fa60f9557 100644
--- a/docs/source/user_guide/dpf_concepts.rst
+++ b/docs/source/user_guide/dpf_concepts.rst
@@ -13,7 +13,7 @@ DPF concepts
 
 .. card-carousel:: 2
 
-   .. card:: Concepts and Terminology
+   .. card:: Concepts and terminology
      :link: user_guide_concepts
      :link-type: ref
      :width: 25%
 
     .. image:: ../images/drawings/book-logo.png
 
@@ -21,7 +21,7 @@ DPF concepts
-   .. card:: Ways of Using DPF
+   .. card:: Ways of using DPF
      :link: user_guide_waysofusing
      :link-type: ref
      :width: 25%
 
     .. image:: ../images/drawings/using-dpf.png
 
@@ -29,7 +29,7 @@ DPF concepts
-   .. card:: Using DPF: Step by Step
+   .. card:: Using DPF: Step by step
     :link: user_guide_stepbystep
     :link-type: ref
     :width: 25%
diff --git a/docs/source/user_guide/fields_container.rst b/docs/source/user_guide/fields_container.rst
index 2f9b208f52..1aca1de66e 100644
--- a/docs/source/user_guide/fields_container.rst
+++ b/docs/source/user_guide/fields_container.rst
@@ -1,23 +1,23 @@
 .. _ref_user_guide_fields_container:
 
 ===========================
-Fields Container and Fields
+Fields container and fields
 ===========================
-Where DPF uses operators to load and operate on data, it uses the
-field container and fields to store and return data. In other words,
-operators are like verbs, acting on the data, while the field container
-and fields are like nouns, objects that hold data.
+While DPF uses operators to load and operate on data, it uses field containers
+and fields to store and return data. In other words, operators are like verbs,
+acting on the data, while field containers and fields are like nouns, objects
+that hold data.
 
-
-Obtaining the Fields Container or Fields
-----------------------------------------
+Access a fields container or field
+-----------------------------------
 The outputs from operators can be either a
-:py:class:`ansys.dpf.core.fields_container.FieldsContainer` or a
-:py:class:`ansys.dpf.core.field.Field`. A fields container is the DPF
-equivalent of a list of fields. It is holds a vector of fields.
+:class:`ansys.dpf.core.fields_container.FieldsContainer` class or a
+:class:`ansys.dpf.core.field.Field` class.
+
+A fields container is the DPF equivalent of a list of fields. It holds a
+vector of fields.
 
-In this example, the fields container is returned from the
-``elastic_strain`` operator:
+This example uses the ``elastic_strain`` operator to access a fields container:
 
 .. code-block::
 
@@ -61,12 +61,14 @@ In this example, the fields container is returned from the
     - field 19 {time: 20} with ElementalNodal location, 6 components and 40 entities.
 
-Accessing Fields Within a Fields Container
+Accessing fields within a fields container
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Because the result above contains a transient result, the
-fields container has one field by time set.
+Many methods are available for accessing a field in a fields
+container. The preceding results contain a transient
+result, which means that the fields container has one field
+per time set.
 
-Access the fields from the fields contains using these methods:
+Access the field:
 
 .. code-block::
 
@@ -80,21 +82,21 @@ Access the fields from the fields contains using these methods:
     20
 
-Return a field based on its index:
+Access the field based on its index:
 
 .. code-block::
 
    field_first_time = fields[0]
   field_last_time = fields[19]
 
-Return a field based on its time set ID:
+Access the field based on its time set ID:
 
 .. code-block::
 
    field = fields.get_field_by_time_id(1)
 
-Alternatively, to access fields for more complex requests, use the
-``get_field`` method with the identifier of the requested field:
+To access fields for more complex requests, you can use the
+``get_field`` method with the ID of the requested field:
 
 .. code-block::
 
@@ -113,8 +115,7 @@ Alternatively, to access fields for more complex requests, use the
      40 entities
      Data:6 components and 320 elementary data
 
-Or in a more real-word example:
-
+Here is a more real-world example:
 
 .. code-block::
@@ -146,7 +147,7 @@ Or in a more real-word example:
 
      Data:6 components and 37580 elementary data
 
-Reference the available time frequency support to determine which
+The following example references the available time frequency support to determine which
 time complex IDs are available in the fields container:
 
 .. code-block::
 
@@ -186,25 +187,24 @@ time complex IDs are available in the fields container:
     19             0.190000       1              19
     20             0.200000       1              20
 
-Note that the time set IDs used are 1 based. When indexing from Pythonic
-indexing via ``fields[0]``, you can use zero-based indexing. When requesting the
-results using the ``get_fields`` method, the request is based on the time scoping
-set IDs.
+Note that the time set IDs used are one-based. When using Pythonic
+indexing, as in ``fields[0]``, you use zero-based indexing. When using
+the ``get_fields()`` method to access results, you should base the request on
+time-scoping set IDs.
 
 Field
 -----
-The class :py:class:`ansys.dpf.core.field.Field` is the fundamental unit of data within DPF.
+The :class:`ansys.dpf.core.field.Field` class is the fundamental unit of data within DPF.
 It contains the actual data and its metadata, which is results data defined by values
 associated with entities (scoping). These entities are a subset of a model (support).
 
 In DPF, field data is always associated with its scoping and support, making the field
-a self-describing piece of data. A field is also defined by its dimensionnality, unit,
-location, and more.
+a self-describing piece of data. A field is also defined by other attributes, including
+dimensionality, unit, and location.
 
 .. figure:: ../images/drawings/field.png
    :scale: 30%
 
-   Field Representation
 
 You can get an overview of a field's metadata by printing the field:
 
@@ -229,11 +229,11 @@ You can get an overview of a field's metadata by printing the field:
 
 The next section provides an overview of the metadata associated with the field itself.
 
-Field Metadata
+Field metadata
 ~~~~~~~~~~~~~~
-The field contains the metadata for the result it is associated with. The metadata
+A field contains the metadata for the result it is associated with. The metadata
 includes the location (such as ``Elemental``, ``Nodal``, or
-``ElementalNodal``) and IDs associated with the location.
+``ElementalNodal``) and the IDs associated with the location.
 
 To access the scoping of the field, use the ``scoping`` attribute:
 
@@ -250,7 +250,7 @@ To access the scoping of the field, use the ``scoping`` attribute:
 
 .. code-block:: none
 
-    DPF Scoping:
+    DPF scoping:
       with Elemental location and 40 entities
 
     field.scoping.ids:
    [21,
@@ -265,15 +265,16 @@ To access the scoping of the field, use the ``scoping`` attribute:
 
     field.location:'ElementalNodal'
 
-The location ``Elemental`` denotes one value (multiplied by the number of
-components) of data per element, while ``Nodal`` is per node, and
-``ElementalNodal`` is one value per node per element. For example,
-strain is an ``ElementalNodal`` value as the strain is evaluated at
-each node for each element.
+- The ``Elemental`` location denotes one value of data (multiplied by the number
+  of components) per element.
+- The ``Nodal`` location is one value per node.
+- The ``ElementalNodal`` location is one value per node per element. For example,
+  strain is an ``ElementalNodal`` value because strain is evaluated at each node
+  for each element.
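
As a quick illustration of the preceding point, you can compare a field's
location against the constants in ``ansys.dpf.core.locations``. This is a
minimal sketch that assumes the bundled ``simple_bar`` example file;
displacement is a ``Nodal`` result, so its location compares equal to the
nodal constant:

.. code:: python

    >>> from ansys.dpf import core as dpf
    >>> from ansys.dpf.core import examples

    >>> model = dpf.Model(examples.simple_bar)
    >>> disp_field = model.results.displacement().outputs.fields_container()[0]
    >>> disp_field.location == dpf.locations.nodal
    True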
-The field also contains additional metadata such as the ``shape`` of
-the data stored, the location of the field, number of components, and
-the units of the data:
+The field also contains metadata, including the shape of
+the data stored, location of the field, number of components, and
+units of the data:
 
 .. code::
 
@@ -286,12 +287,12 @@ the units of the data:
 
    >>> field.unit
 
-    Elemental, elemental nodal, or nodal element "location" of the field
+    Location of the field (Elemental, ElementalNodal, or Nodal)
 
    >>> field.location
 
-    Number of components associated with the field. It's expected to
-    be have a single dimension since there can only be one volume per
+    Number of components associated with the field. It's expected to
+    be a single dimension because there can only be one volume per
     element.
 
    >>> field.component_count
 
@@ -309,22 +310,22 @@ the units of the data:
     6
 
-Field Data
+Field data
 ----------
 
-Accessing Field Data
-~~~~~~~~~~~~~~~~~~~~
-When DPF-Core returns the :py:class:`ansys.dpf.core.field.Field` class,
+Access field data
+~~~~~~~~~~~~~~~~~
+When DPF-Core returns the :class:`ansys.dpf.core.field.Field` class,
 what Python actually has is a client-side representation of the field,
-but not the entirety of the field itself. This means that all the data of
+not the entirety of the field itself. This means that all the data of
 the field is stored within the DPF service. This is important
 because when building your postprocessing workflows, the most efficient
 way of interacting with result data is to minimize the exchange of data between
 Python and DPF, either by using operators or by accessing only the data
 that is needed.
 
-If you need to access the entire array of data, request
-that the data be returned as a ``numpy`` array:
+If you need to access the entire array of data, request that the data
+be returned as a ``numpy`` array:
 
 .. code::
 
@@ -344,15 +345,14 @@ that the data be returned as a ``numpy`` array:
            [ 5.56899536e+02,  3.88515320e+02,  1.17119880e+07,
             -1.68983887e+03, -1.21768023e+05, -2.41346125e+05]])
 
-    This array has 6 components by elementary data (symmetrical tensor XX,YY,ZZ,XY,YZ,XZ)
-    Note that this array is a genuine, local, numpy array
+    This array has 6 components per elementary data (symmetrical tensor XX,YY,ZZ,XY,YZ,XZ).
+    Note that this array is a genuine, local, numpy array.
 
    >>> type(array)
 
    numpy.ndarray
 
-If you need to request an individual node or element,
-request it using either the ``get_entity_data`` or
-``get_entity_data_by_id`` methods:
+If you need to access an individual node or element, request it
+using either the ``get_entity_data()`` or ``get_entity_data_by_id()`` method:
 
 .. code::
 
@@ -360,7 +360,7 @@ request it using either the ``get_entity_data`` or
 
    >>> field.get_entity_data(0)
 
-    Get the data for the element with the ID 10
+    Get the data for the element with ID 10.
 
    >>> field.get_entity_data_by_id(10)
 
    array([[ 4.99232031e+04,  1.93570602e+02, -3.08514075e+06,
            -1.17046619e+03, -6.76924219e+04, -1.34773391e+05]])
 
    Note that this would correspond to an index of 29 within the
+   field. Be aware that scoping IDs are not sequential. You can
+   get the index of the element with ID 10 with:
 
    >>> field.scoping.ids.index(10)
 
    29
 
-    Here the data of element of id 10 is made of 8 symmetrical tensor, indeed
-    the elastic strain has one tensor value by node by element (``ElementalNodal`` location)
+    Here the data for the element with ID 10 is made of 8 symmetrical tensors.
+    The elastic strain has one tensor value per node per element (``ElementalNodal`` location).
 
-    For a displacement on node 3, we have :
+    To get the displacement on node 3, you would use:
 
    >>> disp = model.results.displacement.eval()[0]
    >>> disp.get_entity_data_by_id(3)
 
    array([[8.06571808e-14, 4.03580652e-04, 2.61804706e-05]])
 
@@ -400,7 +400,6 @@ request it using either the ``get_entity_data`` or
 While these methods are acceptable when requesting data for a few elements
 or nodes, they should not be used when looping over the entire array. For efficiency,
 a field's data can be recovered locally before sending a large number of requests:
-:
 
 .. code-block::
 
    with field.as_local_field() as f:
        for i in range(1,100):
            f.get_entity_data_by_id(i)
 
-Operating on Field Data
-~~~~~~~~~~~~~~~~~~~~~~~
-Often times, it's not necessary to directly act upon the data of an
-array within Python. For example, if you want to know the maximum of
-the data, you could potentially compute the maximum of the array from
-``numpy`` with ``array.max()``. However, that requires sending the entire
-array to Python and then computing the maximum there. Rather than
-copying the array over and then computing the maximum in Python, you
-can instead compute the maximum directly from the field itself.
+Operate on field data
+~~~~~~~~~~~~~~~~~~~~~
+Oftentimes, you do not need to directly act on the data of an array within
+Python. For example, if you want to know the maximum of the data, you can
+use the ``array.max()`` method to compute the maximum of the array with the
+``numpy`` package. However, this requires sending the entire array to Python
+and then computing the maximum there. Rather than copying the array over and
+computing the maximum in Python, you can instead compute the maximum directly
+from the field itself.
 
 This example uses the ``min_max`` operator to compute the maximum of
 the field while returning the field:
 
@@ -425,7 +424,7 @@ the field while returning the field:
 .. code::
 
     Compute the maximum of the field within DPF and return the result
-    a numpy array
+    in a numpy array
 
    >>> max_field = field.max()
    >>> max_field.data
 
@@ -460,9 +459,9 @@ average of a field:
     -7.59655688e+04  0.00000000e+00]]
    Elemental
 
-For more advanced information on operator chaining, see the :ref:`ref_user_guide_operators`.
+For comprehensive information on chaining operators, see :ref:`ref_user_guide_operators`.
 
-API Reference
+API reference
 ~~~~~~~~~~~~~
 See the API reference at :ref:`ref_fields_container` and :ref:`ref_field`.
diff --git a/docs/source/user_guide/index.rst b/docs/source/user_guide/index.rst
index f781b0de28..6597a7fb49 100644
--- a/docs/source/user_guide/index.rst
+++ b/docs/source/user_guide/index.rst
@@ -10,13 +10,12 @@ computation, customization, and remote postprocessing accessible in Python.
 
 This section has the following goals:
 
-  - Describe basic DPF concepts, including terminology
-
-  - Describe the most-used DPF entities and how they can help you to access and modify solver data
-
+  - Describe basic DPF concepts, including terminology.
+  - Describe the most-used DPF entities and how they can help you to access and modify solver data.
  - Provide simple how-tos for tackling most common use cases.
 
-Other sections include :ref:`ref_api_section` and :ref:`sphx_glr_examples`.
+Other sections of this guide include :ref:`ref_api_section`, :ref:`ref_dpf_operators_reference`,
+and :ref:`sphx_glr_examples`.
 
 .. include:: dpf_concepts.rst
diff --git a/docs/source/user_guide/model.rst b/docs/source/user_guide/model.rst
index ad7ddfa230..4ea420e30c 100644
--- a/docs/source/user_guide/model.rst
+++ b/docs/source/user_guide/model.rst
@@ -1,16 +1,16 @@
 .. _user_guide_model:
 
 =========
-DPF Model
+DPF model
 =========
 
 The DPF model provides the starting point for opening a result file.
-From here you can connect various operators and display results
+From the ``Model`` object, you can connect various operators and display results
 and data.
 
-To create a ``Model`` instance, import ``dpf`` and load a file. The
-path provided must be an absolute path or a path relative to the DPF
-server.
+To create an instance of the ``Model`` object, import the ``ansys.dpf.core`` package and
+load a result file. The path that you provide must be an absolute path
+or a path relative to the DPF server.
 
 .. code-block:: default
 
    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples
 
    path = examples.simple_bar
   model = dpf.Model(path)
 
 To understand what is available in the result file, you can print the model
-(or any other instance).
+(or any other instance):
 
 .. code-block:: default
 
@@ -65,21 +65,21 @@
 
-For an example using the model, see :ref:`ref_basic_example`.
+For a comprehensive model example, see :ref:`ref_basic_example`.
 
-For a description of the `Model` object, see the APIs section :ref:`ref_model`.
+For a description of the ``Model`` object, see :ref:`ref_model`.
 
-Model Metadata
+Model metadata
 --------------
-You can use model metadata to access all information about an analysis:
+To access all information about an analysis, you can use model metadata:
 
 - Type of analysis
 - Time or frequency descriptions
 - Mesh
 - Available results
 
-For example, you can get the analysis type:
+This example shows how you get the analysis type:
 
 .. code-block:: default
 
@@ -94,7 +94,7 @@
 
    'static'
 
-You can get information about the mesh:
+This example shows how you get mesh information:
 
 .. code:: default
 
@@ -118,7 +118,7 @@
 
     Shape: Solid
 
-You can get time sets:
+This example shows how you get time sets:
 
 .. code-block:: default
 
@@ -135,15 +135,14 @@
 
    [1.]
 
-For a description of the `Metadata` object, see the APIs section :ref:`ref_model`.
+For a description of the ``Metadata`` object, see :ref:`ref_model`.
 
-
-Model Results
+Model results
 -------------
 The model contains the ``results`` attribute, which you can use to
 create operators to access certain results.
 
-To view available results, print them:
+This example shows how you view available results:
 
 .. code-block:: default
 
@@ -175,9 +174,11 @@
 
 .. autoattribute:: ansys.dpf.core.model.Model.results
   :noindex:
 
-Choosing the time, frequencies, or spatial subset on which to get a given result
-is straightforward with the ``results`` attribute:
+With the ``results`` attribute, choosing the time, frequencies, or spatial subset
+on which to get a given result is straightforward.
 
+This example shows how you get displacement results on all time frequencies
+for a given mesh scoping:
 
 .. code-block:: default
@@ -185,13 +186,13 @@ is straightforward with the ``results`` attribute:
 
    disp_result = model.results.displacement()
   disp_at_all_times_on_node_1 =  disp_result.on_all_time_freqs.on_mesh_scoping([1])
 
-For an example using the `Result` API, see :ref:`ref_transient_easy_time_scoping`.
+For an example using the ``Result`` object, see :ref:`ref_transient_easy_time_scoping`.
 
-For a `description of the `Model` object, see the APIs section :ref:`ref_results`.
+For a description of the ``Results`` object, see :ref:`ref_results`.
 
-API Reference
+API reference
 ~~~~~~~~~~~~~
 For more information, see :ref:`ref_model` or :ref:`ref_results`.
diff --git a/docs/source/user_guide/operators.rst b/docs/source/user_guide/operators.rst
index 201b02158c..4a6c70556a 100644
--- a/docs/source/user_guide/operators.rst
+++ b/docs/source/user_guide/operators.rst
@@ -7,27 +7,29 @@ Operators
 
 .. include:: 
 
 An operator is the only object that is used to create and transform
-data. It is the fundamental method by which DPF loads, operates on, and
-outputs data. Each operator contains the ``input`` and ``output``
-attributed, which allow you to make various input and output connections.
+data. In DPF, you use operators to load, operate on, and output data.
 
-When the operator is evaluated, it processes the input information to
-compute its output with respect to its description:
+Each operator contains ``inputs`` and ``outputs`` attributes, which
+allow you to make various input and output connections.
+
+During an evaluation, an operator processes inputs to
+compute an output with respect to the operator's description:
 
 .. figure:: ../images/drawings/operator_drawing.svg
 
-By attaching one operator's outputs to another operator's inputs,
-you can chain together operators to create workflows for conducting
-simple or complex data processing. Through lazy evaluation, DPF
-approaches data processing in an efficient manner, evaluating each operator
-only when the final operator is evaluated and the data is requested.
+You can attach one operator's outputs to another operator's inputs to
+chain operators together, thereby creating workflows for conducting simple or
+complex data processing. Through lazy evaluation, DPF approaches data processing
+in an efficient manner, evaluating each operator only when the final operator
+is evaluated and the data is requested.
 
-For example, if you desire the maximum normalized displacement of a
+For example, if you want the maximum normalized displacement of a
 result, you construct operators in this order:
 
 .. figure:: ../images/drawings/max_u_norm.png
 
-To achieve this, you an use:
+This example shows how to compute the maximum normalized displacement
+of a result:
 
 .. code-block:: default
 
@@ -47,27 +49,27 @@ transferring any data from DPF to Python until DPF arrives at
 the solution data that you want.
 
 DPF's library of operators is large and includes file readers and mathematical,
-geometrical, and logical transformations. This library, found in
-:ref:`ref_dpf_operators_reference`, is progressively enhanced.
+geometrical, and logical transformations. For more information on this library,
+which is progressively enhanced, see :ref:`ref_dpf_operators_reference`.
 
-Creating Operators
-~~~~~~~~~~~~~~~~~~
+Create operators
+~~~~~~~~~~~~~~~~
 Each operator is of type :ref:`ref_operator`. You can create an instance
 in Python with any of the derived classes available in the package
 :ref:`ref_operators_package` or directly with the class :ref:`ref_operator`
 using the internal name string that indicates the operator type.
-For more information, see :ref:`ref_dpf_operators_reference`). +For more information, see :ref:`ref_dpf_operators_reference`. -For example, to create the displacement operator, use: +This example shows how to create the displacement operator: .. code-block:: python from ansys.dpf.core import operators as ops op = ops.result.displacement() # or op = ansys.dpf.core.Operator("U") -You can view the description, available inputs, and available outputs of this -particular operator by printing it: +You can view the description, available inputs, and available outputs of this +operator by printing it: .. code-block:: python @@ -94,10 +96,10 @@ particular operator by printing it: fields_container [fields_container] -Alternatively, you can instantiate result providers via the model. For more -information, see :ref:`user_guide_model`. +Alternatively, you can instantiate result providers using the ``Model`` object. +For more information, see :ref:`user_guide_model`. -With this model's result usage, file paths for the results are directly +When you use the model's results in this way, file paths for the results are directly connected to the operator, which means that you can only instantiate available results for your result files: @@ -111,19 +113,20 @@ available results for your result files: displacement = model.results.displacement() -Connecting Operators -~~~~~~~~~~~~~~~~~~~~ +Connect operators +~~~~~~~~~~~~~~~~~ The only required input for the displacement operator is ``data_sources`` (see above). -Providing file paths for results to the operator is necessary to compute -output in the ``fields_container``, which contains the displacement results. +To compute an output in the ``fields_container`` object, which contains the displacement +results, you must provide paths for the result files. +You can create data sources in two ways: -There are two ways of creating data sources: use either the :ref:`ref_model` -class or the :ref:`ref_data_sources` class. +- Use the :ref:`ref_model` class. +- Use the :ref:`ref_data_sources` class. -Because several other examples use the model approach, this example uses the data -sources approach: +Because several other examples use the ``Model`` class, this example uses the +``DataSources`` class: .. code-block:: python @@ -144,14 +147,14 @@ sources approach: result key: rst and path: D:\ANSYSDev\dpf-python-core\ansys\dpf\core\examples\model_with_ns.rst Secondary files: -Connect this data source to the displacement operator: +This code shows how to connect the data source to the displacement operator: .. code-block:: python op.inputs.data_sources(data_src) -Other optional inputs can be connected to the displacement operator. -Printing the operator above showed that a ``mesh_scoping`` of type :ref:`ref_scoping` +You can connect other optional inputs to the displacement operator. +The output from printing the operator shows that a ``mesh_scoping`` of type :ref:`ref_scoping` can be connected to work on a spatial subset. A ``time_scoping`` of a list of integers can also be connected to work on a temporal subset: @@ -163,10 +166,10 @@ can also be connected to work on a temporal subset: op.inputs.time_scoping([1]) -Evaluating Operators -~~~~~~~~~~~~~~~~~~~~ -With all the required inputs assigned, the :class:`ansys.dpf.core.fields_container` can be -outputted from the operator: +Evaluate operators +~~~~~~~~~~~~~~~~~~ +With all the required inputs assigned, you can output the :class:`ansys.dpf.core.fields_container` +class from the operator: ..
code-block:: python @@ -188,7 +191,7 @@ outputted from the operator: - field 0 {time: 1} with Nodal location, 3 components and 2 entities. At run time, the operator checks if all required inputs have been assigned. -Evaluating an operator with missing inputs will raise a ``DPFServerException`` +Evaluating an operator with missing inputs raises a ``DPFServerException`` like this one: .. code-block:: python @@ -205,17 +208,15 @@ like this one: DPFServerException: U<-Data sources are not defined. -For information on using the field container, see :ref:`ref_user_guide_fields_container`. +For more information on using the fields container, see :ref:`ref_user_guide_fields_container`. -Chaining Operators -~~~~~~~~~~~~~~~~~~ - -To create more complex operations and customizable results, operators can be -chained together to create workflows. +Chain operators +~~~~~~~~~~~~~~~ -With the large library of ``Operators`` that DPF offers, customizing results -to get a specific output is very easy. +To create more complex operations and customizable results, you can chain operators +together to create workflows. Using DPF's large library of operators, you can +customize results to get a specific output. While manually customizing results on the Python side is far less efficient than using operators, for a very small model, it is acceptable to bring all @@ -230,7 +231,7 @@ displacement data on the client side to compute the maximum: displacement = model.results.displacement() fc = displacement.outputs.fields_container() - # Compute the maximum displacement of the first field using numpy. + # Compute the maximum displacement of the first field using NumPy. # Note that the data returned is a numpy array. disp = fc[0].data @@ -245,7 +246,7 @@ displacement data on the client side to compute the maximum: array([8.20217171e-07, 6.26510654e-06, 0.00000000e+00]) -On an industrial model, you should use: +On an industrial model, however, you should use code like this: .. code-block:: python @@ -268,12 +269,12 @@ On an industrial model, you should use: array([8.20217171e-07, 6.26510654e-06, 0.00000000e+00]) -Here, only the maximum displacements in the X, Y, and Z components -are transferred and returned as a numpy array. +In the preceding example, only the maximum displacements in the X, Y, and Z +components are transferred and returned as a NumPy array. -For small data sets, you can compute the maximum of the array in Numpy. -While there may be times where having the entire data array for a given -result type is necessary, many times it is not necessary. In these +For small data sets, you can compute the maximum of the array in NumPy. +While there might be times where having the entire data array for a given +result type is necessary, many times it is not. In these cases, it is faster not to transfer the array to Python but rather to compute the maximum of the fields container within DPF and then return the result to Python. @@ -302,7 +303,7 @@ While this last approach is more verbose, it can be useful for operators having several matching inputs or outputs.
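A minimal sketch of this more verbose, connect-based style, assuming the standard ``math.norm_fc`` and ``min_max.min_max_fc`` operators and one of the packaged example files, might look like this:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples
    from ansys.dpf.core import operators as ops

    model = dpf.Model(examples.static_rst)
    displacement = model.results.displacement()

    # Wire each pin explicitly: take the norm of the displacement
    # fields container, then request its maximum.
    norm = ops.math.norm_fc()
    norm.inputs.fields_container.connect(displacement.outputs.fields_container)
    min_max = ops.min_max.min_max_fc()
    min_max.inputs.fields_container.connect(norm.outputs.fields_container)

    # Evaluation is lazy: DPF computes only when this output is requested.
    print(min_max.outputs.field_max().data)

Naming each pin explicitly makes it unambiguous which of several matching inputs or outputs is being connected.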
-Types of Operators +Types of operators ~~~~~~~~~~~~~~~~~~ DPF provides three main types of operators: @@ -311,7 +312,7 @@ DPF provides three main types of operators: - Operators for exporting data *************************************** -Operators for Importing or Reading Data +Operators for importing or reading data *************************************** These operators provide for reading data from solver files or from standard file types: @@ -322,10 +323,10 @@ These operators provide for reading data from solver files or from standard file - For Abaqus, ODB files are supported. To read these files, different readers are implemented as plugins. -Plugins can be loaded on demand in any DPF's scripting language with the "load library" methods. +Plugins can be loaded on demand in any DPF scripting language with the "load library" methods. File readers can be used generically thanks to DPF's result providers, which means that the same operators can be used for any file types. -For example, read a displacement or a stress for any file: +This example shows how to read a displacement or a stress from any supported file: .. code-block:: python @@ -342,7 +343,7 @@ Fields can be imported from CSV, VTK, and HDF5 files. For an example of importing and exporting a CSV file, see :ref:`ref_basic_load_file_example`. ******************************* -Operators for Transforming Data +Operators for transforming data ******************************* A field is the main data container in DPF. Most of the operators that transform @@ -350,7 +351,7 @@ data take a field or a fields container as input and return a transformed field or fields container as output. You can perform analytic, averaging, or filtering operations on simulation data. -For example, after creation of a field, use scaling and filtering +For example, after creating a field, you can use scaling and filtering operators: .. code-block:: python @@ -402,16 +403,16 @@ operators: **************************** -Operators for Exporting Data +Operators for exporting data **************************** After using DPF to read or transform simulation data, you might want -to export the results in a given format to either use it in another -environment or save it for future use with DPF. Supported file formats -for export include VTK, H5, CSV, and TEXT (serializer operator). Export +to export the results in a given format to either use them in another +environment or save them for future use with DPF. Supported file formats +for export include VTK, H5, CSV, and TXT (serializer operator). Export operators often match with import operators, allowing you to reuse data. -The "serialization" operators menu lists the available import and export -operators. +In :ref:`ref_dpf_operators_reference`, the **Serialization** category +displays available import and export operators. .. code-block:: python @@ -455,7 +456,7 @@ Python client is not on the same machine as the server: Downloading...: 759 KB| -API Reference +API reference ~~~~~~~~~~~~~ For a list of all operators in DPF, see :ref:`ref_dpf_operators_reference` or :ref:`ref_operators_package`. For more information about the diff --git a/docs/source/user_guide/plotting.rst b/docs/source/user_guide/plotting.rst index 20c5f6d283..08a04281ba 100644 --- a/docs/source/user_guide/plotting.rst +++ b/docs/source/user_guide/plotting.rst @@ -10,12 +10,13 @@ simplify plotting. For more information, see the `PyVista Documentation <https://docs.pyvista.org/>`_.
-Plotting the Mesh from the Model Object +Plot the mesh from the ``Model`` object --------------------------------------- -The :py:meth:`Model.plot() <ansys.dpf.core.model.Model.plot>` method can -be used to plot the mesh of the model immediately after loading it. In -this example, a simple pontoon mesh is downloaded from the -internet and loaded using the :class:ansys.dpf.core.model` class: +The :meth:`Model.plot() <ansys.dpf.core.model.Model.plot>` method +plots the mesh of the model immediately after loading it. + +This example downloads a simple pontoon mesh from the internet and uses the +:class:`ansys.dpf.core.model.Model` class to load it: .. code:: python @@ -32,11 +33,12 @@ lighting enabled. For a list of all keyword arguments, see `plot `_. -Plotting Using the Meshed Region --------------------------------- -The :py:meth:`MeshedRegion.plot() <ansys.dpf.core.meshed_region.MeshedRegion.plot>` +Plot the meshed region +----------------------- +The :meth:`MeshedRegion.plot() <ansys.dpf.core.meshed_region.MeshedRegion.plot>` method plots the meshed region. If the meshed region is generated from the model's -metadata, the plot generated is identical to the plot generated by ``Model.plot()``. +metadata, the plot generated is identical to the plot generated by the +:meth:`Model.plot() <ansys.dpf.core.model.Model.plot>` method. Plot the meshed region: @@ -48,7 +50,9 @@ Plot the meshed region: .. image:: ../images/plotting/pontoon.png When a field is provided as the first argument, the mesh is plotted -using these values. This example extracts the nodal strain in the X direction: +using these values. + +This example extracts the nodal strain in the X direction: .. code:: python @@ -74,8 +78,8 @@ using these values. This example extracts the nodal strain in the X direction: .. note:: - Only fields with ``Elemental`` and ``Nodal`` locations are - supported currently. Use the :py:meth:`to_nodal - <ansys.dpf.core.operators.averaging.to_nodal>` operator to - convert to nodal or the :class:`ansys.dpf.core.operators.averaging.nodal_to_elemental` - class to convert to elemental. + Only fields with ``Nodal`` and ``Elemental`` locations are + supported. Use the :meth:`to_nodal <ansys.dpf.core.operators.averaging.to_nodal>` + operator to convert to the ``Nodal`` location or the + :class:`ansys.dpf.core.operators.averaging.nodal_to_elemental` + class to convert to the ``Elemental`` location. diff --git a/docs/source/user_guide/stepbystep.rst b/docs/source/user_guide/stepbystep.rst index c2a2fc85b2..16383a09ac 100644 --- a/docs/source/user_guide/stepbystep.rst +++ b/docs/source/user_guide/stepbystep.rst @@ -1,37 +1,37 @@ .. _user_guide_stepbystep: -======================= -Using DPF: Step by Step -======================= -The goal of using DPF is to transform simulation data into output data +========= +DPF usage +========= +The goal of using DPF is to transform simulation data into output data that can be used to visualize and analyze simulation results. -This process has two main steps: +There are two main steps to achieve this goal: - Step 1: :ref:`define_sim_data` - Step 2: :ref:`transform_the_data` .. _define_sim_data: -Define Simulation Data +Define simulation data ---------------------- -Data can come from two sources: +Data can come from two sources: -- ``Simulation result files``. DPF automatically recognizes the fields in result files. When using a result file as input you must specify the data source file(s). -- ``Manual input in DPF``. You create fields of data in DPF. +- **Simulation result files:** DPF automatically recognizes the fields in simulation + result files. When using result files as input, you specify the data source by + defining where the result files are located. +- **Manual input in DPF:** You can create fields of data in DPF.
-Once a data source has been selected, or fields have been manually defined, -you create field containers (if applicable) and define scopings to identify -the subset of data that you want to evaluate. +Once you have specified data sources or manually created fields in DPF, +you can create field containers (if applicable) and define scopings to +identify the subset of data that you want to evaluate. -Selecting a Data Source -~~~~~~~~~~~~~~~~~~~~~~~ -When you want to evaluate the data in simulation result files, -you must specify the ``data source``. This is folder containing analysis -results. Typically the data source consists of a path to the result or -data files. +Specify the data source +~~~~~~~~~~~~~~~~~~~~~~~~ +To evaluate the data in simulation result files, you specify the data source by defining +where the result files are located. -**Creating a data source and setting the result file path** +This example shows how to define the data source: .. code-block:: python @@ -41,25 +41,25 @@ data files. data_sources.result_files ['/tmp/file.rst'] -To evaluate data files, they need to be opened. To open data files, you -define ``streams``. A stream is an entity that contains the data sources. -Streams keep the data files open and keep some data cached to make the next -evaluation faster. Streams are particularly convenient when using large files. -They save time when opening and closing files. When a stream is released, -files are closed. +To evaluate data files, they must be opened. To open data files, you +define *streams*. A stream is an entity that contains the data sources. +Streams keep the data files open and keep some data cached to make the next +evaluation faster. Streams are particularly convenient when using large +data files. They save time when opening and closing data files. When a stream +is released, the data files are closed. -Defining Fields -~~~~~~~~~~~~~~~ -A ``field`` is a container of simulation data. In numerical simulations, -results data is defined by values associated with entities: +Define fields +~~~~~~~~~~~~~ +A *field* is a container of simulation data. In numerical simulations, +result data is defined by values associated with entities: .. image:: ../images/drawings/values-entities.png -Therefore, a field of data may look something like this: +Therefore, a field of data might look something like this: .. image:: ../images/drawings/field.png -**Creating a field from scratch** +This example shows how to define a field from scratch: .. code-block:: python @@ -70,23 +70,24 @@ Therefore, a field of data may look something like this: field_with_classic_api.location = locations.nodal field_with_factory = fields_factory.create_scalar_field(10) -In DPF, field data is always associated with its ``scoping`` and ``support``, -making it a self-describing piece of data. A field can also be defined by its -dimensionality, unit, and location. To learn more see :ref:`user_guide_concepts`. +In DPF, field data is always associated with its scoping and support, making +a field a self-describing piece of data. A field can also be defined by its +dimensionality, unit, and location. For more information, see :ref:`user_guide_concepts`. -Defining Scoping -~~~~~~~~~~~~~~~~ -In most cases you will not want to work with an entire field, but rather a -subset of entities in the field. To achieve this you define ``scoping`` for -the field. Scoping is a set of entity IDs on a location.
For example, this may -be a set of mesh IDs, geometric entity IDs, time domain, frequency domain, -and so on. You specify the set of entities by defining a range of IDs: +Define scopings +~~~~~~~~~~~~~~~ +In most cases, you do not want to work with an entire field but rather with a +subset of entities in the field. To achieve this, you define a scoping for +the field. A scoping is a set of entity IDs on a location. For example, this can +be a set of mesh IDs, geometric entity IDs, time domain, or frequency domain. + +You specify the set of entities by defining a range of IDs: .. image:: ../images/drawings/scoping-eg.png -A scoping must be defined prior to its use in the transformation data workflow. +You must define a scoping prior to its use in the data transformation workflow. -**Creating a mesh scoping** +This example shows how to define a mesh scoping: .. code-block:: python @@ -102,26 +103,27 @@ A scoping must be defined prior to its use in the transformation data workflow. my_scoping.location = "Nodal" #optional my_scoping.ids = list(range(1,11)) -Defining Field Containers -~~~~~~~~~~~~~~~~~~~~~~~~~ -A ``field container`` holds a set of fields. It is used mainly for -transient, harmonic, modal, or multi-step analyses. For example: +Define field containers +~~~~~~~~~~~~~~~~~~~~~~~ +A *field container* holds a set of fields. It is used mainly for +transient, harmonic, modal, or multi-step analyses. This image +explains its structure: .. image:: ../images/drawings/field-con-overview.png -A field container is a vector of fields. Fields are ordered with labels -and IDs. Most commonly, the field container is scoped on the “time” label +A field container is a vector of fields. Fields are ordered with labels +and IDs. Most commonly, a field container is scoped on the time label, and the IDs are the time or frequency sets: .. image:: ../images/drawings/field-con.png You can define a field container in multiple ways: -- Extract labeled data from a results file -- Create a field container from a CSV file -- Convert existing fields to a field container +- Extract labeled data from a result file. +- Create a field container from a CSV file. +- Convert existing fields to a field container. -**Creating a field container from scratch** +This example shows how to define a field container from scratch: .. code-block:: python @@ -135,60 +137,56 @@ You can define a field container in multiple ways: mscop = {"time":i+1,"complex":1} fc.add_field(mscop,dpf.Field(nentities=i+10)) -Some operators can operate directly on field containers instead of fields. -Field containers are identified by the “FC” suffix in their name. -Operators and field containers are explained in more detail +Some operators can operate directly on field containers instead of fields. +Such operators are identified by the ``fc`` suffix in their names. +Operators and field containers are explained in more detail in :ref:`transform_the_data`. .. _transform_the_data: -Transform the Data +Transform the data ------------------ -Once you have defined the simulation data to be evaluated, you use operators -to transform the data to obtain the desired output. Operators can be chained -together to create simple or complex data transformation workflows. +Once you have defined the simulation data to evaluate, you use operators +to transform the data to obtain the desired output. You can chain operators +together to create simple or complex data transformation workflows.
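As a preview, this sketch strings the two steps together, assuming the packaged ``multishells_rst`` example file and the standard displacement operator; it feeds a data source and a time scoping into an operator and requests the output:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples
    from ansys.dpf.core import operators as ops

    # Step 1: define the simulation data.
    data_src = dpf.DataSources(examples.multishells_rst)

    # Step 2: transform the data with an operator.
    disp_op = ops.result.displacement()
    disp_op.inputs.data_sources(data_src)
    disp_op.inputs.time_scoping([1])
    print(disp_op.outputs.fields_container())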
-Using Operators -~~~~~~~~~~~~~~~ -Operators can be used to import, export, transform, and analyze data. +Use operators +~~~~~~~~~~~~~ +You use operators to import, export, transform, and analyze data. -An operator is analogous to an integrated circuit in electronics which -has a set of input and output pins. Pins allow data to be passed to -each operator. +An operator is analogous to an integrated circuit in electronics. It +has a set of input and output pins. Pins provide for passing data to +and from operators. -An operator takes input from a field, field container, or scoping using -an input pin, and computes output based on what the operator is designed -to do. The output is passed to a field or field container using -an output pin. +An operator takes input from a field, field container, or scoping using +an input pin. Based on what it is designed to do, the operator computes +an output that it passes to a field or field container using an output pin. .. image:: ../images/drawings/circuit.png -To use operators you should consult the online help: - -#. In the table of contents, select ``Operators``. -#. To search for an operator, type a keyword in the ``Search`` field or - browse each category to display the list of available operators for - each category: +Comprehensive information on operators is available in :ref:`ref_dpf_operators_reference`. +In the **Available Operators** area, you can either type a keyword in the **Search** option +or browse by operator categories: .. image:: ../images/drawings/help-operators.png -The help page for each operator describes how the operator transforms data, -indicates the required input data, and provides usage examples. +The page for each operator describes how the operator transforms data, +indicates input and output data, and provides usage examples. -Defining Operators -~~~~~~~~~~~~~~~~~~ -An operator definition consists of three steps: +Define operators +~~~~~~~~~~~~~~~~ +Defining an operator consists of three steps: -- Operator instantiation -- Input definition -- Output storage +#. Instantiate the operator. +#. Define the inputs. +#. Store the output. -Each operator’s help page provides a sample definition in each available -language (IronPython, CPython, C++). +This image shows how the page for an operator provides a usage example for each available +language (IronPython, CPython, and C++). .. image:: ../images/drawings/operator-def.png -**Creating an operator from a model** +This example shows how to define an operator from a model: .. code-block:: python @@ -197,24 +195,24 @@ language (IronPython, CPython, C++). model = Model(examples.static_rst) disp_oper = model.results.displacement() -Defining Workflows -~~~~~~~~~~~~~~~~~~ -In most cases, using a single operator is not sufficient to obtain the -desired result. In DPF you can chain operators together to create a complete -data transformation workflow, enabling you to perform all operations necessary -to get the result you want. +Define workflows +~~~~~~~~~~~~~~~~ +In most cases, using a single operator is not sufficient to obtain the +desired result. In DPF, you can chain operators together to create a complete +data transformation workflow, enabling you to perform all operations necessary +to get the result that you want. -In a workflow, the output pins of one operator are connected to the input pins -of another operator, allowing output data from one operator to be passed as -input to the other operator. 
+In a workflow, the output pins of one operator can be connected to the input pins +of another operator, allowing output data from one operator to be passed as +input to another operator. -The following example illustrates how you would get the norm of a resulting -vector from the dot product of two vectors: +This image shows how you would get the norm of a resulting vector from the +dot product of two vectors: -.. image:: ../images/drawings/connect-operators.png +.. image:: ../images/drawings/connect-operators.png -**Creating a generic workflow computing the minimum of displacement by chaining the 'U'** -**and 'min_max_fc' operators** +This example shows how to define a generic workflow that computes the minimum +of displacement by chaining the ``U`` and ``min_max_fc`` operators: .. code-block:: python @@ -233,4 +231,4 @@ vector from the dot product of two vectors: data_src = dpf.DataSources(examples.multishells_rst) workflow.connect("data_sources", data_src) min = workflow.get_output("min", dpf.types.field) - max = workflow.get_output("max", dpf.types.field) \ No newline at end of file + max = workflow.get_output("max", dpf.types.field) diff --git a/docs/source/user_guide/troubleshooting.rst b/docs/source/user_guide/troubleshooting.rst index b2a25655af..6e8cbede87 100644 --- a/docs/source/user_guide/troubleshooting.rst +++ b/docs/source/user_guide/troubleshooting.rst @@ -3,45 +3,48 @@ =============== Troubleshooting =============== -This section explains how to resolve the most common issues encountered with ``pydpf-core``. -It also includes suggestions for improving scripts. -Using the Server ---------------- -Starting DPF Server -~~~~~~~~~~~~~~~~~~~ -While using the DPF-Python API to start the server with :py:meth:`start_local_server() -<ansys.dpf.core.server.start_local_server>` or while starting the server manually (with ``Ans.Dpf.Grpc.sh`` -or ``Ans.Dpf.Grpc.bat``), a Python error might occur: "TimeoutError: Server did not start in 10 seconds". +This page explains how to resolve the most common issues encountered when +using PyDPF-Core. It also includes suggestions for improving scripts. + +Server issues +------------- + +Start the DPF server +~~~~~~~~~~~~~~~~~~~~~ +When using PyDPF-Core to start the server with the +:py:meth:`start_local_server() <ansys.dpf.core.server.start_local_server>` method +or when starting the server manually with the ``Ans.Dpf.Grpc.sh`` or ``Ans.Dpf.Grpc.bat`` +file, a Python error might occur: ``TimeoutError: Server did not start in 10 seconds``. This kind of error might mean that the server or its dependencies were not found. Ensure that -the environment variable ``AWP_ROOT{VER}`` is set, where VER=212, 221, .... +the environment variable ``AWP_ROOT{VER}`` is set, where ``VER`` is the three-digit numeric +format for the version, such as ``221`` or ``222``. -Connecting to DPF Server -~~~~~~~~~~~~~~~~~~~~~~~~ -If an issue appears while using the ``pydpf-core`` API to connect to an initialized server with :py:meth:`connect_to_server() -<ansys.dpf.core.server.connect_to_server>`, ensure that the IP address and port number that are set as parameters -are applicable for a DPF server started on the network. +Connect to the DPF server +~~~~~~~~~~~~~~~~~~~~~~~~~ +If an issue appears while using PyDPF-Core code to connect to an initialized server with the +:py:meth:`connect_to_server() <ansys.dpf.core.server.connect_to_server>` method, ensure that the +IP address and port number that are set as parameters are applicable for a DPF server started +on the network.
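As a quick check, a minimal connection sketch looks like this (the IP address and port are placeholders; use the values reported when the server was started):

.. code-block:: python

    from ansys.dpf import core as dpf

    # Placeholder address and port for a DPF server already
    # running on the network.
    server = dpf.connect_to_server(ip="10.0.0.22", port=50054)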
-Importing pydpf-core module -~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Assume that you are importing the ``pydpf-core`` module: +Import the ``pydpf-core`` package +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Assume that you are importing the ``pydpf-core`` package: .. code-block:: default from ansys.dpf import core as dpf -If an error lists missing modules, see the compatibility paragraph of :ref:`ref_getting_started`. -The module `ansys.grpc.dpf <https://pypi.org/project/ansys-grpc-dpf/>`_ should always be synchronized with its server -version. +If an error lists missing modules, see :ref:`ref_compatibility`. +The `ansys.grpc.dpf <https://pypi.org/project/ansys-grpc-dpf/>`_ module +should always be synchronized with its server version. -Using the Model ---------------- +Model issues +------------ -Invalid UTF-8 Error +Invalid UTF-8 error ~~~~~~~~~~~~~~~~~~~ -Assume that you are trying to access the :class:`ansys.dpf.core.model.Model`. -The following error can be raised: +Assume that you are trying to access the :class:`ansys.dpf.core.model.Model` class. +The following error might be raised: .. code-block:: default @@ -49,41 +52,40 @@ The following error can be raised: String field 'ansys.api.dpf.result_info.v0.ResultInfoResponse.user_name' contains invalid UTF-8 data when serializing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes. -This will prevent the model from being accessed. To avoid this error, ensure that you are using -a PyDPF-Core version higher than 0.3.2. In this case, a warning will still be raised, but it should not -prevent the use of the Model. +Invalid UTF-8 data prevents the model from being accessed. To avoid this error, ensure that +you are using PyDPF-Core version 0.3.2 or later. While a warning is still raised, the invalid UTF-8 +data should not prevent you from using the :class:`ansys.dpf.core.model.Model` class. -Then, with result files reproducing this issue, to avoid the warning to pop up, you can use: +Then, with result files reproducing this issue, you can prevent the warning from being raised with: .. code-block:: default from ansys.dpf import core as dpf dpf.settings.set_dynamic_available_results_capability(False) -However, this will disable the reading and generation of the available results of the model: static prewritten -available results will be used instead. +However, the preceding code disables the reading and generation of the available results for the model. +Static, prewritten available results are used instead. - - -Performance Issues +Performance issues ------------------ -Getting and Setting a Field's Data -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Accessing or modifying field data :py:class:`Field <ansys.dpf.core.field.Field>` entity by entity can -be slow if the field's size is large or if the server is far from the Python client. To improve performance, -use :py:meth:`as_local_field() <ansys.dpf.core.field.Field.as_local_field>` in a context manager. -An example can be found in :ref:`ref_use_local_data_example`. +Get and set a field's data +~~~~~~~~~~~~~~~~~~~~~~~~~~ +Using the :py:class:`Field <ansys.dpf.core.field.Field>` class to get or set field data entity +by entity can be slow if the field's size is large or if the server is far from the Python client. +To improve performance, use the :py:meth:`as_local_field() <ansys.dpf.core.field.Field.as_local_field>` +method in a context manager to bring the field data from the server to your local machine. For an +example, see :ref:`ref_use_local_data_example`. -Slow Autocompletion in Notebooks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Autocompletion in notebooks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Autocompletion in Jupyter notebook can sometimes be slow for large models.
The interpreter might -evaluate getters of some properties when the tab key is pressed. To disable this capability use -:py:meth:`disable_interpreter_properties_evaluation() <ansys.dpf.core.settings.disable_interpreter_properties_evaluation>`: +evaluate the getters of some properties when the tab key is pressed. To disable this capability, use the +:py:meth:`disable_interpreter_properties_evaluation() <ansys.dpf.core.settings.disable_interpreter_properties_evaluation>` +method: .. code-block:: default from ansys.dpf import core as dpf dpf.settings.disable_interpreter_properties_evaluation() - diff --git a/docs/source/user_guide/waysofusing.rst b/docs/source/user_guide/waysofusing.rst index be3307037d..22fe934094 100644 --- a/docs/source/user_guide/waysofusing.rst +++ b/docs/source/user_guide/waysofusing.rst @@ -1,21 +1,26 @@ .. _user_guide_waysofusing: -================= -Ways of Using DPF -================= +======================= +DPF scripting languages +======================= DPF is available as a standalone tool and as a tool in Ansys Mechanical. Each one uses a different language for scripting, so you should decide whether you want to use standalone DPF or DPF in Mechanical before creating any scripts. -``Standalone DPF`` uses CPython and can be accessed via any Python console. -Data can be exported to universal file formats (VTK, hdf5, txt files). -Use it to generate TH-plots, screenshots, animations, and so on, or create -custom results plots using numpy and matplotlib libraries. +CPython +------- +Standalone DPF uses CPython and can be accessed with any Python console. +Data can be exported to universal file formats, such as VTK, HDF5, and TXT +files. You can use it to generate TH-plots, screenshots, and animations or +to create custom result plots using the `numpy <https://numpy.org/>`_ +and `matplotlib <https://matplotlib.org/>`_ packages. .. image:: ../images/drawings/dpf-reports.png -``DPF in Mechanical`` uses IronPython and is accessible via the ACT console. +IronPython +---------- +DPF in Mechanical uses IronPython and is accessible with the **ACT Console**. Use it to perform custom postprocessing and visualization of results directly within the Mechanical application. diff --git a/docs/source/user_guide/xmlfiles.rst b/docs/source/user_guide/xmlfiles.rst index 38195d9481..9d7bbe402c 100644 --- a/docs/source/user_guide/xmlfiles.rst +++ b/docs/source/user_guide/xmlfiles.rst @@ -1,18 +1,20 @@ .. _user_guide_xmlfiles: ============= -DPF XML Files +DPF XML files ============= -This section describes the XML files associated with DataProcessingCore -and DPF plugins. These DPF files work on both Windows and Linux -operating systems. The files can contain content for both operating systems. +This page describes the ``DataProcessingCore.xml`` and ``Plugin.xml`` XML files +provided with the DPF software. These XML files work on both Linux and Windows +because they contain content for both of these operating systems. -The XML files must be located alongside the plugin DLL files on Windows, -or SO files on Linux. +These XML files must be located alongside the plugin DLL files on Windows or +SO files on Linux. -DataProcessingCore File ------------------------ -The content and format of the DataProcessingCore.xml file is as follows: +``DataProcessingCore.xml`` file +------------------------------- +The ``DataProcessingCore.xml`` file provides for configuring the plugins to load. + +Here is the content of this XML file: .. code-block:: html @@ -56,79 +58,92 @@ The content and format of the DataProcessingCore.xml file is as follows: -The DataProcessingCore.xml file is provided with the DPF software.
-Modify the file carefully to ensure that the DPF software operates correctly. -Some of the sections in the file are optional, and many of the sections -have Windows and Linux specific subsections. +In this XML file, some of the elements are optional, and many of the +elements have Linux-specific versus Windows-specific child elements. + +.. caution:: + To ensure that the DPF software operates correctly, modify this XML file + carefully. All paths specified in this XML file must adhere to the path + conventions of the respective operating system. For Linux paths, use + forward slashes (/). For Windows paths, use backward slashes (\\). + -The ``Environment`` section is used only for defining the ROOT folder -of the Ansys software. This is done with an ``ANSYS_ROOT_FOLDER`` tag. -The root folder of the Ansys software ends with the v### folder. -It could be something like ``C:\ansys_inc\v222``. The ANSYS_ROOT_FOLDER tag -defines a variable like an environment variable that can be used in the other -XML files. You might use it to find required third party software. +``ANSYS_ROOT_FOLDER`` element +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +The ``ANSYS_ROOT_FOLDER`` element is used only for defining the root folder +of the Ansys software. Its child ``Linux`` and ``Windows`` elements can +define the root folders for Ansys software installed on Linux and on Windows. -If the ANSYS_ROOT_FOLDER tag is not defined within the DataProcessing.xml file, -the root folder is determined by reading the AWP_ROOT### environment -variable specific to the version of the DPF code. For example, if you are -using V222 DPF code, it looks for AWP_ROOT222 to find the root folder. +The path for the root folder ends with Ansys version information, ``v###``, +where ``###`` is the three-digit format for the installed version. For example, +on Windows, the path for the root folder for Ansys 2022 R2 likely looks something +like ``C:\Program Files\ANSYS Inc\v222``. -If the ANSYS_ROOT_FOLDER tag is still not defined, the code attempts to find the -root folder relative to the DataProcessingCore DLL/SO file. This only works -if DataProcessingCore is located in its default location. +The ``ANSYS_ROOT_FOLDER`` element is like an environment variable. You can use +this element in other XML files. For example, you might use it to find required +third-party software. -The ``DefaultPlugins`` section is used for loading the default plugins. -The further subdividing of the plugins into ``debug`` or ``optimized`` -sections is optional. The ``debug`` section would only be used with a -debug version of the DataProcessingCore DLL/SO file. +If the ``ANSYS_ROOT_FOLDER`` element is not defined in the ``DataProcessingCore.xml`` +file, the root folder is determined by reading the ``AWP_ROOT###`` environment +variable specific to your installed Ansys version. For example, if you are +using Ansys 2022 R2, it looks for ``AWP_ROOT222`` to find the root folder. -The plugins to load are defined within their own section that is named -by a tag like ``native``. This tag is used as -the ``Key`` when loading the plugin. Each plugin must have a unique key. +If the ``ANSYS_ROOT_FOLDER`` element is still not defined, an attempt is made to +find the root folder relative to the ``DataProcessingCore`` DLL or SO file. This +works only if the ``DataProcessingCore.xml`` file is located in its default +location. -Within the Key section are two tags that define the location of the plugin -and the method of loading. The location is defined by the ``Path`` tag -and the loading method is defined by the ``Loader`` tag. -These are used as arguments to the loading plugin mechanism.
+``DefaultPlugins`` element +~~~~~~~~~~~~~~~~~~~~~~~~~~~ +The ``DefaultPlugins`` element defines the plugins to load. The ``Linux`` or +``Windows`` child element identifies the operating system for the plugins defined +in its child elements. -Currently, only ``LoadOperators`` is supported for the ``Loader`` tag. -This loads all operators within the plugin. +The ``native`` element defines DPF native operators. The further subdividing of +plugins into ``debug`` or ``optimized`` elements is optional. The ``debug`` +element, for example, would only be used with a debug version of the +``DataProcessingCore`` DLL or SO file. -The ``Path`` tag contains the location of the plugin to load. -The normal mechanism that the OS uses to find a DLL/SO is used. -The DLL could be in the Windows path, or the SO could be within -the Linux LD_LIBRARY_PATH. +The element names for plugins, such as ``native``, are used as +*keys* when loading plugins. Each plugin must have a unique key. -The ``UsePluginXml`` tag contains a value that must be set to -``true`` or ``false``. It defines if the PLUGIN.XML file -(defined in next section) will be used to load the plugin or not. -This tag is optional. The default value is ``true``. +The element for each plugin has child elements: -Any path specified with the XML file must adhere to the path conventions -of the OS. “\\” for Windows and “/” for Linux. +- ``Path``: Contains the location of the plugin to load. The normal mechanism + that the operating system uses to find a DLL or SO file is used. The DLL + file could be in the Windows path, or the SO file could be in the Linux + ``LD_LIBRARY_PATH`` system environment variable. +- ``Loader``: Contains how the plugin is loaded. Only ``LoadOperators`` is + supported. It loads all operators within the plugin. +- ``UsePluginXml``: Contains a ``true`` or ``false`` value that indicates + whether to use the ``PLUGIN.XML`` file defined in the next element to load + the plugin. This element is optional. The default value is ``true``. -Two pre-defined variables can be used to provide an absolute path to -the plugin: +To provide an absolute path to a plugin, you can use these predefined variables: -- ANSYS_ROOT_FOLDER as defined above. -- THIS_XML_FOLDER defining the location of where the current XML file is located. In this case DataProcessingCore.xml. +- ``ANSYS_ROOT_FOLDER``, which is described in the preceding section. +- ``THIS_XML_FOLDER``, which defines the folder where the current XML file + is located. In this case, it defines the location of the ``DataProcessingCore.xml`` + file. -Any other environment variable could be used. If you always had your plugins -in a folder defined by the environment variable MY_PLUGINS, -you could use that in the XML file. +You can also use any other environment variable. For example, if you always have your +plugins in a folder defined by a ``MY_PLUGINS`` environment variable, you could use +it in the XML file. -The environment variables are specified the same way as ANSYS_ROOT_FOLDER -or THIS_XML_FOLDER. They are defined as $(…). +You specify environment variables in the same way as the ``ANSYS_ROOT_FOLDER`` +or ``THIS_XML_FOLDER`` variable. They are defined as ``$(...)``. -In the Ansys installation, the default DataProcessingCore.xml file is located -next to the DataProcessingCore DLL/SO file. -If you want to use a different one, you can initialize DPF using a -specific DataProcessingCore.xml file. +In the Ansys installation, the default ``DataProcessingCore.xml`` file is located +next to the ``DataProcessingCore`` DLL or SO file.
If you want to use a different +one, you can initialize DPF using a specific ``DataProcessingCore.xml`` file. -PLUGIN.XML File ---------------- -The content and format of the Plugin.xml file is as follows: +``Plugin.xml`` file +------------------- +The ``Plugin.xml`` file allows you to configure a specific environment for loading a +plugin. + +Here is the content of this XML file: .. code-block:: html @@ -145,10 +160,10 @@ The content and format of the Plugin.xml file is as follows: -This file allows for a specific environment to be configured for loading a plugin. -The ``Environment`` section within the plugin-specific XML file is defined -the same way as the DataProcessingCore.xml file. -Any environment variables defined or used have the values at the time they are -defined or used. You can effectively define a variable multiple times -and keep appending it. +The ``Environment`` element within this XML file is defined the same way +as in the ``DataProcessingCore.xml`` file. + +Any environment variables that are defined or used have the values at the time +that they are defined or used. You can effectively define a variable multiple times +and keep appending to it. diff --git a/docs/styles/Vocab/ANSYS/accept.txt b/docs/styles/Vocab/ANSYS/accept.txt index 6e8279b8a4..2c04c910dc 100644 --- a/docs/styles/Vocab/ANSYS/accept.txt +++ b/docs/styles/Vocab/ANSYS/accept.txt @@ -1,3 +1,37 @@ ANSYS Ansys ansys +Abaqus +componentization +Componentization +core +Core +CPython +Gaussian +getters +gltf +GLTF +hexa +IronPython +matplotlib +Mises +MSUP +numpy +postprocess +postprocessing +Postprocessing +protobuf +psutil +PyDPF +Pythonic +pyvista +PyVista +recursivity +Remotable +Reusability +Rz +scopings +serializer +substep +tqdm +von \ No newline at end of file diff --git a/examples/00-basic/00-basic_example.py b/examples/00-basic/00-basic_example.py index a19aa18156..a2b53bd4ed 100644 --- a/examples/00-basic/00-basic_example.py +++ b/examples/00-basic/00-basic_example.py @@ -1,7 +1,7 @@ """ .. _ref_basic_example: -Basic DPF-Core Usage +Basic DPF-Core usage ~~~~~~~~~~~~~~~~~~~~ This example shows how to open a result file and do some basic postprocessing. @@ -20,7 +20,7 @@ from ansys.dpf.core import examples ############################################################################### -# Next, open an example and print out the ``model`` object. The +# Next, open an example and print out the ``model`` object. The # ``Model`` class helps to organize access methods for the result by # keeping track of the operators and data sources used by the result # file. @@ -33,7 +33,7 @@ # - Number of results # # Also, note that the first time you create a DPF object, Python -# automatically attempts to start the server in the background. If you +# automatically attempts to start the server in the background. If you # want to connect to an existing server (either local or remote), use # :func:`dpf.connect_to_server`. @@ -41,28 +41,28 @@ print(model) ############################################################################### -# Model Metadata +# Model metadata # ~~~~~~~~~~~~~~ # Specific metadata can be extracted from the model by referencing the -# model's ``metadata`` property.
For example, to print only the # ``result_info``: metadata = model.metadata print(metadata.result_info) ############################################################################### -# To print the mesh region: +# Print the mesh region: print(metadata.meshed_region) ############################################################################### -# To print the time or frequency of the results: +# Print the time or frequency of the results: print(metadata.time_freq_support) ############################################################################### -# Extracting Displacement Results -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Extract displacement results +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # All results of the model can be accessed through the ``results`` # property, which returns the :class:`ansys.dpf.core.results.Results` # class. This class contains the DPF result operators available to a diff --git a/examples/00-basic/01-basic_operators.py b/examples/00-basic/01-basic_operators.py index b77b958fa4..4d49754d38 100644 --- a/examples/00-basic/01-basic_operators.py +++ b/examples/00-basic/01-basic_operators.py @@ -1,11 +1,11 @@ """ .. _ref_basic_operators_example: -Operators Overview +Operators overview ~~~~~~~~~~~~~~~~~~ In DPF, operators provide the primary method for interacting with and extracting -results. Within DPF-Core, operators are directly exposed with +results. Within DPF-Core, operators are directly exposed with the ``Operators`` class as well as wrapped within several other convenience classes. @@ -29,7 +29,7 @@ ############################################################################### # Next, create a raw displacement operator ``"U"``. Each operator # contains ``input`` and ``output`` pins that can be connected to -# various sources to include other operators. This allows operators +# various sources to include other operators. This allows operators # to be "chained" to allow for highly efficient operations. # # To print out the available inputs and outputs of the @@ -39,7 +39,7 @@ print(disp_op.outputs) ############################################################################### -# Compute the Maximum Normalized Displacement +# Compute the maximum normalized displacement # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # This example demonstrates how to chain various operators. It connects the input # of the operator to the data sources contained within the ``model`` object and @@ -64,10 +64,10 @@ print(field_max.data) ############################################################################### -# Wrapped Operators +# Wrapped operators # ~~~~~~~~~~~~~~~~~ # The ``model.results`` property contains all the wrapped operators -# available for a given result. This is provided out of convenience +# available for a given result. This is provided out of convenience # because all operators may not be available for a given result. Consequently, # it is much easier to reference available operators by first running: print(model.results) @@ -92,7 +92,7 @@ print(model.metadata.meshed_region.plot(disp_op.outputs.fields_container())) ############################################################################### -# Scripting Operator Syntax +# Scripting operator syntax # ~~~~~~~~~~~~~~~~~~~~~~~~~~ # Because DPF provides a scripting syntax, knowing # an operator's "string name" is not mandatory.
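For instance, both of the following calls (a sketch using the ``min_max_fc`` operator; the internal name is the same as the scripting name here) instantiate the same operator, with and without its internal string name:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import operators as ops

    # Instantiate through the internal string name...
    min_max_op = dpf.Operator("min_max_fc")

    # ...or through the generated scripting API, with no string name needed.
    min_max_op = ops.min_max.min_max_fc()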
diff --git a/examples/00-basic/02-basic_field_containers.py b/examples/00-basic/02-basic_field_containers.py index bf135cc143..c6c0168664 100644 --- a/examples/00-basic/02-basic_field_containers.py +++ b/examples/00-basic/02-basic_field_containers.py @@ -1,7 +1,7 @@ """ .. _ref_basic_field_example: -Field and Field Containers Overview +Field and field containers overview ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In DPF, the field is the main simulation data container. During a numerical simulation, result data is defined by values associated to entities @@ -10,17 +10,22 @@ Because field data is always associated to its scoping and support, the field is a self-describing piece of data. A field is also defined by its parameters, such as dimensionality, unit, and location. -For example, a field can describe a displacement vector or norm, stress or strain -tensor, stress or strain equivalent, or minimum or maximum -over time of any result. A field can be defined on a complete model or -on only certain entities of the model based on its scoping. The data -is stored as a vector of double values, and each elementary entity has -a number of components. For example, a displacement will have three -components, and a symmetrical stress matrix will have six components. +For example, a field can describe any of the following: + +- Displacement vector or norm +- Stress or strain tensor +- Stress or strain equivalent +- Minimum or maximum over time of any result + +A field can be defined on a complete model or on only certain entities +of the model based on its scoping. The data is stored as a vector of +double values, and each elementary entity has a number of components. +For example, a displacement has three components, and a symmetrical +stress matrix has six components. In DPF, a fields container is simply a collection of fields that can be indexed, just like a Python list. Operators applied to a fields -container will have each individual field operated on. Fields +container operate on each individual field. Fields containers are outputs from operators. First, import necessary modules: @@ -51,17 +56,17 @@ print(field) ############################################################################### -# Extracting Data from a Field -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Extract data from a field +# ~~~~~~~~~~~~~~~~~~~~~~~~~ # You can extract all the data from a given field using the ``data`` -# property. This returns a ``numpy`` array. +# property. This returns a ``numpy`` array. print(field.data) ############################################################################### # While it might seem preferable to work entirely within ``numpy``, # DPF runs outside of Python and potentially even on a -# remote machine. Therefore, the transfer of unnecessary data between +# remote machine. Therefore, the transfer of unnecessary data between # the DPF instance and the Python client leads to inefficient # operations on large models. Instead, you should use DPF operators to # assemble the necessary data before recalling the data from DPF. @@ -80,7 +85,7 @@ ############################################################################### # Note that the numpy array does not retain any information about the -# field it describes. Using the DPF ``max`` operator of the field does +# field it describes. Using the DPF ``max`` operator of the field does # retain this information.
max_field = field.max() print(max_field) diff --git a/examples/00-basic/03-create_entities.py b/examples/00-basic/03-create_entities.py index fa01dbffa1..5f9c8356b2 100644 --- a/examples/00-basic/03-create_entities.py +++ b/examples/00-basic/03-create_entities.py @@ -1,8 +1,8 @@ """ .. _ref_create_entities_example: -Create Your Own Entities Use DPF Operators -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Create your own entities using DPF operators +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can create your field, fields container, or meshed region to use DPF operators with your own data. The ability to use scripting to create any DPF entity means that you are not dependent on result files and can connect the DPF environment @@ -116,11 +116,11 @@ def search_sequence_numpy(arr, seq): ############################################################################### # Create displacement fields over time with three time sets. -# Here the displacement on each node will be the value of its x, y, and +# Here the displacement on each node is the value of its x, y, and # z coordinates for time 1. -# The displacement on each node will be two times the value of its x, y, +# The displacement on each node is two times the value of its x, y, # and z coordinates for time 2. -# The displacement on each node will be three times the value of its x, +# The displacement on each node is three times the value of its x, # y, and z coordinates for time 3. num_nodes = mesh.nodes.n_nodes time1_array = coordinates_data diff --git a/examples/00-basic/04-basic-load-file.py b/examples/00-basic/04-basic-load-file.py index 702ea9dc78..bdfa0fb3da 100644 --- a/examples/00-basic/04-basic-load-file.py +++ b/examples/00-basic/04-basic-load-file.py @@ -1,13 +1,13 @@ """ .. _ref_basic_load_file_example: -Write/Load and Upload/Download a Result File -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Working with a result file +~~~~~~~~~~~~~~~~~~~~~~~~~~ DPF-Core can upload files to and download files from the server machine. This example shows how to write and upload files on the server machine and then -download them back on the client side. The resulting fields container is exported -in CSV format. +download them back on the client side. The resulting fields container is then +exported to a CSV file. """ ############################################################################### @@ -21,7 +21,7 @@ mesh = model.metadata.meshed_region ############################################################################### -# Get and Plot the Fields Container for the Result +# Get and plot the fields container for the result # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Get the fields container for the result and plot it so you can compare it later: @@ -30,7 +30,7 @@ mesh.plot(fc_out) ############################################################################### -# Export Result +# Export result # ~~~~~~~~~~~~~ # Get the fields container for the result and export it in the CSV format: @@ -44,7 +44,7 @@ export_csv_operator.run() ############################################################################### -# Upload CSV Result File +# Upload CSV result file # ~~~~~~~~~~~~~~~~~~~~~~~ # Upload the file ``simple_bar_fc.csv`` on the server side. 
# Here, :func:`upload_file_in_tmp_folder` is used because @@ -60,7 +60,7 @@ os.remove(file_path) ############################################################################### -# Download CSV Result File +# Download CSV result file # ~~~~~~~~~~~~~~~~~~~~~~~~~ # Download the file ``simple_bar_fc.csv``: @@ -71,7 +71,7 @@ downloaded_client_file_path = file_path ############################################################################### -# Load CSV Result File as Operator Input +# Load CSV result file as operator input # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Load the fields container contained in the CSV file as an operator input: @@ -85,7 +85,7 @@ os.remove(downloaded_client_file_path) ############################################################################### -# Make Operations Over the Imported Fields Container +# Make operations over the imported fields container # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Use this fields container: diff --git a/examples/00-basic/05-use_local_data.py b/examples/00-basic/05-use_local_data.py index af560fb5ed..31f6f40697 100644 --- a/examples/00-basic/05-use_local_data.py +++ b/examples/00-basic/05-use_local_data.py @@ -1,7 +1,7 @@ """ .. _ref_use_local_data_example: -Bring a Field's Data Locally to Improve Performance +Bring a field's data locally to improve performance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Reducing the number of calls to the server is key to improving performance. Using the ``as_local_field`` option brings the data @@ -23,7 +23,7 @@ print(model) ############################################################################### -# Create the Workflow +# Create the workflow # ~~~~~~~~~~~~~~~~~~~~ # Maximum principal stress usually occurs on the skin of the # model. Computing results only on this skin reduces the data size. @@ -39,7 +39,7 @@ skin_mesh.plot() ############################################################################### -# Compute the stress principal inveriants on the skin nodes only: +# Compute the stress principal invariants on the skin nodes only: stress_op = ops.result.stress(data_sources=model.metadata.data_sources) stress_op.inputs.requested_location.connect(dpf.locations.nodal) stress_op.inputs.mesh_scoping.connect(skin_op.outputs.nodes_mesh_scoping) @@ -50,7 +50,7 @@ principal_stress_3 = principal_op.outputs.fields_eig_3()[0] ############################################################################### -# Manipulate Data Locally +# Manipulate data locally # ~~~~~~~~~~~~~~~~~~~~~~~ @@ -79,7 +79,7 @@ f.append(d, id) ############################################################################### -# Plot Result Field +# Plot result field # ~~~~~~~~~~~~~~~~~ @@ -88,7 +88,7 @@ skin_mesh.plot(field_to_keep) ############################################################################### -# Plot Initial Invariants +# Plot initial invariants # ~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/examples/00-basic/06-load_plugin.py b/examples/00-basic/06-load_plugin.py index 1d8fdc674a..b0878125a9 100644 --- a/examples/00-basic/06-load_plugin.py +++ b/examples/00-basic/06-load_plugin.py @@ -1,7 +1,7 @@ """ .. _ref_load_plugin: -Load Plugin +Load plugin ~~~~~~~~~~~ This example shows how to load a plugin that is not loaded automatically. diff --git a/examples/00-basic/07-use_result_helpers.py b/examples/00-basic/07-use_result_helpers.py index 90300ec193..b7a73ed277 100644 --- a/examples/00-basic/07-use_result_helpers.py +++ b/examples/00-basic/07-use_result_helpers.py @@ -1,7 +1,7 @@ """ .. 
_ref_use_result_helpers: -Use Result Helpers to Load Custom Data +Use result helpers to load custom data ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The ``Result`` class, which is an instance created by the ``Model``, gives access to helpers for requesting results on specific mesh and time scopings. @@ -20,7 +20,7 @@ print(model) ############################################################################### -# Visualize Specific Mode Shapes +# Visualize specific mode shapes # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Choose the modes to visualize: modes = [1, 5, 6] @@ -28,7 +28,7 @@ disp = model.results.displacement.on_time_scoping(modes) ############################################################################### -# Choose a Spatial Subset +# Choose a spatial subset # ~~~~~~~~~~~~~~~~~~~~~~~ # Work on only a named selection (or component). diff --git a/examples/00-basic/09-results_over_space_subset.py b/examples/00-basic/09-results_over_space_subset.py index 30b256f252..d539988435 100644 --- a/examples/00-basic/09-results_over_space_subset.py +++ b/examples/00-basic/09-results_over_space_subset.py @@ -119,8 +119,8 @@ # and can be connected to any result provider to get results split with the # same partition as the input ``ScopingsContainer``. # For example, some application require to get results split by body, by material, -# by element types. It might also be necessary to get results by element shape types -# (shell, solid, beam) to average data properly... +# by element types. It might also be necessary to get results by element shape # types, such as shell, solid, or beam, to average data properly. # Customers might also require split by entirely custom spatial domains. diff --git a/examples/00-basic/11-server_types.py b/examples/00-basic/11-server_types.py index cdca6b8eee..8e49f4cc83 100644 --- a/examples/00-basic/11-server_types.py +++ b/examples/00-basic/11-server_types.py @@ -1,32 +1,32 @@ """ .. _ref_server_types_example: -Communicate In Process or via gRPC +Communicate in process or via gRPC ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Starting with Ansys 2022R2, pyDPF can communication either In Process or via gRPC -with DPF C++ core server (Ans.Dpf.Grpc.exe). To choose which type of +Starting with Ansys 2022 R2, PyDPF can communicate either in process or via gRPC +with the DPF C++ core server (``Ans.Dpf.Grpc.exe``). To choose which type of :class:`ansys.dpf.core.server_types.BaseServer` (object defining the type of communication and the server instance to communicate with) to use, a -:class:`ansys.dpf.core.server_factory.ServerConfig` should be used. +:class:`ansys.dpf.core.server_factory.ServerConfig` class should be used. Until Ansys 2022R1, only gRPC communication using python module ansys.grpc.dpf is supported -(now called :class:`ansys.dpf.core.server_types.LegacyGrpcServer`), starting with Ansys 2022R2, -3 types of servers are supported: +(now called :class:`ansys.dpf.core.server_types.LegacyGrpcServer`), starting with Ansys 2022 R2, +three types of servers are supported: -- :class:`ansys.dpf.core.server_types.InProcessServer` loading DPF in process. +- :class:`ansys.dpf.core.server_types.InProcessServer` loading DPF in process. -- :class:`ansys.dpf.core.server_types.GrpcServer` using gRPC communication through DPF - gRPC CLayer Ans.Dpf.GrpcClient. +- :class:`ansys.dpf.core.server_types.GrpcServer` using gRPC communication through the DPF + gRPC CLayer ``Ans.Dpf.GrpcClient``.
-- :class:`ansys.dpf.core.server_types.LegacyGrpcServer` using gRPC communication through the python - module ansys.grpc.dpf. +- :class:`ansys.dpf.core.server_types.LegacyGrpcServer` using gRPC communication through the + Python module ``ansys.grpc.dpf``. """ from ansys.dpf import core as dpf ############################################################################### -# Start Servers with custom ServerConfig -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Start servers with custom server configuration +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in_process_config = dpf.AvailableServerConfigs.InProcessServer grpc_config = dpf.AvailableServerConfigs.GrpcServer @@ -54,7 +54,7 @@ legacy_grpc_server = dpf.start_local_server(config=legacy_grpc_config, as_global=False) ############################################################################### -# Create Data on different servers +# Create data on different servers # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in_process_field = dpf.fields_factory.create_scalar_field(2, server=in_process_server) @@ -74,8 +74,8 @@ ############################################################################### # Choose default configuration # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# Once a default configuration is chosen, a server of the chosen type is automatically started -# when a DPF object is created: +# Once a default configuration is chosen, a server of the chosen type is +# automatically started when a DPF object is created: initial_config = dpf.SERVER_CONFIGURATION diff --git a/examples/00-basic/12-get_material_properties.py b/examples/00-basic/12-get_material_properties.py index 08da7acb3f..73f8303f4f 100644 --- a/examples/00-basic/12-get_material_properties.py +++ b/examples/00-basic/12-get_material_properties.py @@ -4,8 +4,8 @@ Get material properties from the result file ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Material properties are assigned to each element in APDL and by default they -are written out in the APDL result file. This example shows how we can extract -material properties of each element using PyDPF. +are written out in the APDL result file. This example shows how you can extract +material properties of each element using PyDPF-Core. Import necessary modules: """ @@ -14,7 +14,7 @@ from ansys.dpf.core import examples ############################################################################### -# Create a model object to establish a connection with an example result file: +# Create a model object to establish a connection with an example result file. model = dpf.Model(examples.simple_bar) ############################################################################### @@ -25,36 +25,36 @@ ############################################################################### # See available properties in the :class:`meshed_region -# ` +# `. print(mesh.available_property_fields) ############################################################################### -# Get all the material properties +# Get all material properties. mats = mesh.property_field("mat") ############################################################################### # Use the DPF operator :class:`mapdl_material_properties # ` # to extract data for the # materials - `mats`. For the input -# `properties_name`, you need the correct material property string. To see +# ``properties_name``, you need the correct material property string. To see # which strings are supported, you can print the operator help. 
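For orientation, here is the complete pattern this example builds up, condensed into one runnable sketch. Every call is taken from the example itself (the ``simple_bar`` file, the ``mat`` property field, and the ``mapdl_material_properties`` operator); ``"EX"`` is the property string connected below.

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples

    model = dpf.Model(examples.simple_bar)
    mesh = model.metadata.meshed_region

    # Material ID assigned to each element
    mats = mesh.property_field("mat")

    # Query one property per material; "EX" is the Young's modulus string
    mat_prop = model.operator("mapdl_material_properties")
    mat_prop.inputs.materials.connect(mats)
    mat_prop.inputs.properties_name.connect("EX")
    mat_field = mat_prop.outputs.properties_value.get_data()[0]
    print(mat_field.get_entity_data_by_id(1))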
mat_prop = model.operator("mapdl_material_properties") mat_prop.inputs.materials.connect(mats) ############################################################################### -# For the input pin `properties_name`, you need the correct +# For the input pin ``properties_name``, you need the correct # material property string. To see which strings are supported, you can # print the operator help. print(mat_prop) ############################################################################### -# Let us extract the Young's modulus for element ID 1 +# Extract the Young's modulus for element ID ``1``. mat_prop.inputs.properties_name.connect("EX") mat_field = mat_prop.outputs.properties_value.get_data()[0] print(mat_field.get_entity_data_by_id(1)) ############################################################################### -# Extract Poisson's ratio for element ID 1 +# Extract Poisson's ratio for element ID ``1``. mat_prop.inputs.properties_name.connect("NUXY") mat_field = mat_prop.outputs.properties_value.get_data()[0] print(mat_field.get_entity_data_by_id(1)) diff --git a/examples/00-basic/README.txt b/examples/00-basic/README.txt index ef6885db7e..f27dae1efe 100644 --- a/examples/00-basic/README.txt +++ b/examples/00-basic/README.txt @@ -1,5 +1,5 @@ .. _basic-gallery: -Basic DPF Examples +Basic DPF examples ================== These examples explain the basic concepts of DPF. diff --git a/examples/01-static-transient/00-basic_transient.py b/examples/01-static-transient/00-basic_transient.py index 739e7c3ce3..7f9317a626 100644 --- a/examples/01-static-transient/00-basic_transient.py +++ b/examples/01-static-transient/00-basic_transient.py @@ -1,7 +1,7 @@ """ .. _ref_basic_transient: -Transient Analysis Result Example +Transient analysis result example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to postprocess a transient result and visualize the outputs. @@ -33,7 +33,7 @@ print(tf.time_frequencies.data) ############################################################################### -# Obtain Minimum and Maximum Displacements for All Results +# Obtain minimum and maximum displacements for all results # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Create a displacement operator and set its time scoping request to # the entire time frequency support: @@ -78,7 +78,7 @@ plt.show() ############################################################################### -# Postprocessing Stress +# Postprocessing stress # ~~~~~~~~~~~~~~~~~~~~~ # Create an equivalent (von Mises) stress operator and set its time # scoping to the entire time frequency support: @@ -109,7 +109,7 @@ plt.show() ############################################################################### -# Scoping and Stress Field Coordinates +# Scoping and stress field coordinates # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # The scoping of the stress field can be used to extract the # coordinates used for each result: diff --git a/examples/01-static-transient/01-transient_easy_time_scoping.py b/examples/01-static-transient/01-transient_easy_time_scoping.py index 594da1cc99..531332ab3d 100644 --- a/examples/01-static-transient/01-transient_easy_time_scoping.py +++ b/examples/01-static-transient/01-transient_easy_time_scoping.py @@ -1,10 +1,9 @@ """ .. _ref_transient_easy_time_scoping: -Choose a Time Scoping for a Transient Analysis +Choose a time scoping for a transient analysis ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This example shows how to use a model's results to easily -choose a time scoping. 
+This example shows how to use a model's results to choose a time scoping. """ import matplotlib.pyplot as plt @@ -22,8 +21,8 @@ print(model) ############################################################################### -# Obtain Minimum and Maximum Displacements at All Times -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Obtain minimum and maximum displacements at all times +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Create a displacement operator and set its time scoping request to # the entire time frequency support: disp = model.results.displacement @@ -48,10 +47,10 @@ plt.show() ############################################################################### -# Use Time Extrapolation +# Use time extrapolation # ~~~~~~~~~~~~~~~~~~~~~~~ # A local maximum can be seen on the plot between 0.05 and 0.075 seconds. -# Displacement will be evaluated every 0.0005s in this range +# Displacement is evaluated every 0.0005 seconds in this range # to draw a nicer plot on this range. offset = 0.0005 diff --git a/examples/01-static-transient/README.txt b/examples/01-static-transient/README.txt index b84cfe51c9..73fd476493 100644 --- a/examples/01-static-transient/README.txt +++ b/examples/01-static-transient/README.txt @@ -1,6 +1,6 @@ .. _static_transient_examples: -Transient Analysis Examples +Transient analysis examples =========================== These examples show how to use DPF to extract and plot displacements, stresses, and strains from a transient static analysis. diff --git a/examples/02-modal-harmonic/00-multi_harmonic.py b/examples/02-modal-harmonic/00-multi_harmonic.py index 565899f3e6..331b83c60e 100644 --- a/examples/02-modal-harmonic/00-multi_harmonic.py +++ b/examples/02-modal-harmonic/00-multi_harmonic.py @@ -1,10 +1,10 @@ """ .. _ref_basic_harmonic: -Multi-Harmonic Response Example +Multi-harmonic response example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to compute a multi-harmonic response -using fft transforms. +using FFT transforms. """ import matplotlib.pyplot as pyplot @@ -14,15 +14,15 @@ from ansys.dpf.core import operators as ops ############################################################################### -# Begin by downloading the example harmonic result. This result is +# Begin by downloading the example harmonic result. This result is # not included in the core module by default to speed up the install. # Download should only take a few seconds. # -# Next, create the model and display the state of the result. Note -# that this harmonic result file contains several rpms, -# each rpm has several frequencies. +# Next, create the model and display the state of the result. +# This harmonic result file contains several RPMs, and +# each RPM has several frequencies. -# this file is 66Mb size, it may take time to download +# The size of this file is 66 MB. Downloading it might take some time. harmonic = examples.download_multi_harmonic_result() model = dpf.Model(harmonic) print(model) @@ -33,31 +33,31 @@ print("Number of solution sets", tf.n_sets) ############################################################################### -# Compute multi harmonic response +# Compute multi-harmonic response # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# In this example we compute the Rz multi harmonic responses based on -# a selected nodes and a set of EOs (multiple engine orders). +# This example computes the Rz multi-harmonic responses based on +# selected nodes and a set of EOs (engine orders).
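Before wiring up scopings like the ones described above, it helps to inspect what the result file actually provides. A minimal sketch, using only calls already shown in this example:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples

    # The download is about 66 MB, as noted above.
    harmonic = examples.download_multi_harmonic_result()
    model = dpf.Model(harmonic)

    # Inspect the time/frequency support before choosing time or RPM scopings.
    tf = model.metadata.time_freq_support
    print("Number of solution sets:", tf.n_sets)
    print(tf.time_frequencies.data)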
# Create a total displacement operator and set its time scoping to -# the entire time freq support and its nodes scoping into a user defined nodes. +# the entire time frequency support and its nodes scoping to user-defined nodes. disp_op = ops.result.raw_displacement(data_sources=model) time_ids = list(range(1, model.metadata.time_freq_support.n_sets + 1)) -# define nodal scoping +# Define nodal scoping nodes = dpf.Scoping() nodes.ids = [2, 18] -# connect the frequencies and the nodes scopings to the result -# provider operator +# Connect the frequencies and the nodes scopings to the result +# provider operator. disp_op.inputs.mesh_scoping.connect(nodes) disp_op.inputs.time_scoping.connect(time_ids) -# extract Rz component using the component selector operator +# Extract the Rz component using the component selector operator. comp = dpf.Operator("component_selector_fc") comp.inputs.connect(disp_op.outputs) comp.inputs.component_number.connect(5) -# Compute the multi-harmonic response based on Rz and a set of RPMs +# Compute the multi-harmonic response based on Rz and a set of RPMs. rpms = dpf.Scoping() rpms.ids = [1, 2, 3] @@ -73,7 +73,7 @@ field2 = fields[1] ############################################################################### -# Plot the minimum and maximum displacements over time +# Plot the minimum and maximum displacements over time. pyplot.plot(field1.data, "r", label="Field 1") pyplot.plot(field2.data, "b", label="Field 2") diff --git a/examples/02-modal-harmonic/01-modal_cyclic.py b/examples/02-modal-harmonic/01-modal_cyclic.py index 98b2fb03bd..83e276efa8 100644 --- a/examples/02-modal-harmonic/01-modal_cyclic.py +++ b/examples/02-modal-harmonic/01-modal_cyclic.py @@ -1,7 +1,7 @@ """ .. _ref_basic_cyclic: -Modal Cyclic symmetry Example +Modal cyclic symmetry example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to expand a cyclic mesh and its results. @@ -18,7 +18,7 @@ ############################################################################### # Expand displacement results # ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# In this example we expand displacement results, by default on all +# This example expands displacement results, by default on all # nodes and the first time step. # Create displacement cyclic operator diff --git a/examples/02-modal-harmonic/02-cyclic_multi_stage.py b/examples/02-modal-harmonic/02-cyclic_multi_stage.py index d45c985b06..35db1805c8 100644 --- a/examples/02-modal-harmonic/02-cyclic_multi_stage.py +++ b/examples/02-modal-harmonic/02-cyclic_multi_stage.py @@ -1,7 +1,7 @@ """ .. _ref_multi_stage_cyclic: -Multi-stage Cyclic Symmetry Example +Multi-stage cyclic symmetry example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to expand the mesh and results from a multi-stage cyclic analysis. @@ -19,7 +19,7 @@ ############################################################################### # Expand displacement results # ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# In this example we expand displacement results, by default on all +# This example expands displacement results, by default on all # nodes and the first time step. # Create displacement cyclic operator diff --git a/examples/02-modal-harmonic/04-modal_superposition.py b/examples/02-modal-harmonic/04-modal_superposition.py index 11463d341d..2dca72dcfd 100644 --- a/examples/02-modal-harmonic/04-modal_superposition.py +++ b/examples/02-modal-harmonic/04-modal_superposition.py @@ -1,8 +1,8 @@ """ .. 
_ref_msup: -Expand Harmonic Modal Superposition with DPF -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Expand harmonic modal superposition with DPF +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Different types of linear dynamics expansions are implemented in DPF. With modal superposition used in harmonic analysis, modal coefficients are multiplied by mode shapes (of a previous modal analysis) to analyse @@ -15,11 +15,11 @@ from ansys.dpf.core import examples ############################################################################### -# Create the data sources -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# First create a data sources with the mode shapes and the modal response -# The expansion is recursive in dpf: first the modal response is read, -# then, "upstreams" mode shapes are found in the data sources, so they +# Create data sources +# ~~~~~~~~~~~~~~~~~~~ +# Create data sources with the mode shapes and the modal response. +# The expansion is recursive in DPF: first the modal response is read. +# Then, "upstream" mode shapes are found in the data sources, where they # are read and expanded (mode shapes x modal response) msup_files = examples.download_msup_files_to_dict() @@ -31,10 +31,10 @@ ############################################################################### # Compute displacements -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# Once the recursivity is put in the data sources (with add_upstream) -# computing displacements with or without expansion, in harmonic, transient -# or modal analysis has the exact same syntax +# ~~~~~~~~~~~~~~~~~~~~~ +# Once the ``add_upstream()`` method sets up the recursion in the data sources, +# in a harmonic, transient, or modal analysis, computing displacements with +# or without expansion has the exact same syntax. model = dpf.Model(data_sources) disp = model.results.displacement.on_all_time_freqs.eval() diff --git a/examples/02-modal-harmonic/05-read_distributed_files.py b/examples/02-modal-harmonic/05-read_distributed_files.py index 52e2316e72..43f41abc85 100644 --- a/examples/02-modal-harmonic/05-read_distributed_files.py +++ b/examples/02-modal-harmonic/05-read_distributed_files.py @@ -3,11 +3,11 @@ Read results from distributed files ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Solvers usually solve analysis with distributed architecture. In that -case one file is written by spatial or temporal domains. The capability of -reading one result in distributed files has been implemented in DPF. This -allows to skip the merging of files solver side which is time consuming and often -duplicates the memory used. +Solvers usually solve analyses with a distributed architecture. In this +case, one file is written per spatial or temporal domain. DPF is capable +of reading one result in distributed files. This allows it to skip the +merging of files on the solver side, which is time-consuming and +often doubles the memory used.
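A sketch of what "one file per domain" looks like on the client side. The ``set_domain_result_file_path`` method and the shape of the value returned by ``download_distributed_files()`` are assumptions here; the ``Model`` and ``eval`` calls are the ones this example uses:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples

    # Assumption: one result file per spatial domain, registered by domain ID.
    files = examples.download_distributed_files()
    data_sources = dpf.DataSources()
    data_sources.set_domain_result_file_path(files[0], 0)
    data_sources.set_domain_result_file_path(files[1], 1)

    # Reading the merged result has the same syntax as for a single file.
    model = dpf.Model(data_sources)
    disp = model.results.displacement.on_all_time_freqs.eval()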
""" from ansys.dpf import core as dpf @@ -15,7 +15,7 @@ ############################################################################### # Create the data sources -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~~~ # First create a data sources with one result file by domain distributed_file_path = examples.download_distributed_files() @@ -25,12 +25,12 @@ ############################################################################### # Compute displacements -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~ # Once the file architecture is put in the data sources, # computing displacements with or without domain has the exact same syntax. -# DPF reads parts of the result on each domain and remerge the results in -# the outputs fields. The output will have no difference when using combined -# or distributed files +# DPF reads parts of the result on each domain and merges these results in +# the outputs fields. The output is no different than when using combined +# or distributed files. model = dpf.Model(data_sources) disp = model.results.displacement.on_all_time_freqs.eval() @@ -40,8 +40,8 @@ model.metadata.meshed_region.plot(disp.get_field_by_time_complex_ids(freq_set, 0)) ############################################################################### -# Compute stress eqv -# ~~~~~~~~~~~~~~~~~~~ +# Compute equivalent stress +# ~~~~~~~~~~~~~~~~~~~~~~~~~ stress_res = model.results.stress stress_res.on_location(dpf.locations.nodal) stress = stress_res.on_all_time_freqs.eval() diff --git a/examples/02-modal-harmonic/README.txt b/examples/02-modal-harmonic/README.txt index 877464ca12..a1c27c4963 100644 --- a/examples/02-modal-harmonic/README.txt +++ b/examples/02-modal-harmonic/README.txt @@ -1,6 +1,6 @@ .. _modal_harmonic_examples: -Harmonic Analysis Examples +Harmonic analysis examples =========================== These examples show how to use DPF to extract and manipulate, results from harmonic or modal analyses. diff --git a/examples/03-advanced/00-multistage_advanced_options.py b/examples/03-advanced/00-multistage_advanced_options.py index a545782b53..6213c8a85b 100644 --- a/examples/03-advanced/00-multistage_advanced_options.py +++ b/examples/03-advanced/00-multistage_advanced_options.py @@ -1,11 +1,11 @@ """ .. _ref_multi_stage_cyclic_advanced: -Multi-stage Cyclic Symmetry Use Advanced Customization -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This example shows how to expand on selected sectors the mesh and results from a -multi-stage cyclic analysis. -It also shows how to use the cyclic support for advanced post processing +Multi-stage cyclic symmetry using advanced customization +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +This example shows how to expand on selected sectors the mesh and results +from a multi-stage cyclic analysis. 
It also shows how to use the cyclic support +for advanced postprocessing. """ from ansys.dpf import core as dpf from ansys.dpf.core import examples @@ -18,7 +18,7 @@ print(model) ############################################################################### -# Check the result info to verify that it's a multistage model +# Check the result info to verify that it's a multi-stage model result_info = model.metadata.result_info print(result_info.has_cyclic) print(result_info.cyclic_symmetry_type) @@ -37,7 +37,7 @@ ############################################################################### # Expand displacement results # ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# In this example we expand displacement results, on chosen sectors +# This example expands displacement results on chosen sectors. # Create displacement cyclic operator @@ -63,7 +63,7 @@ mesh = mesh_provider.outputs.mesh() ############################################################################### -# plot the expanded result on the expanded mesh +# Plot the expanded result on the expanded mesh # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mesh.plot(fields) @@ -82,21 +82,21 @@ ############################################################################### # Check results precisely -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~~~ -# print the time_freq_support to see the harmonic index +# Print the time_freq_support to see the harmonic index print(model.metadata.time_freq_support) print(model.metadata.time_freq_support.get_harmonic_indices(stage_num=1).data) -# harmonic index 0 means that the results are symmetric sectors by sector +# Harmonic index 0 means that the results are symmetric, sector by sector # taking a node in the base sector of the first stage node_id = cyc_support.base_nodes_scoping(0)[18] print(node_id) -# check what are the expanded ids of this node +# Check the expanded IDs of this node expanded_ids = cyc_support.expand_node_id(node_id, [0, 1, 2], 0) print(expanded_ids.ids) -# verify that the displacement values are the same on all those nodes +# Verify that the displacement values are the same on all those nodes for node in expanded_ids.ids: print(fields[0].get_entity_data_by_id(node)) diff --git a/examples/03-advanced/01-solve_harmonic_problem.py b/examples/03-advanced/01-solve_harmonic_problem.py index 90a6f3fbd1..f37d78035f 100644 --- a/examples/03-advanced/01-solve_harmonic_problem.py +++ b/examples/03-advanced/01-solve_harmonic_problem.py @@ -1,11 +1,11 @@ """ .. _ref_solve_modal_problem_advanced: -Solve Harmonic Problem (with damping) Using Matrix Inverse +Solve harmonic problem (with damping) using matrix inverse ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to create an harmonic (over frequencies) fields container for an analysis with damping. This fields container is then used to -solve the problem Ma+Dv+Ku =F by inversing the matrix +solve the problem Ma+Dv+Ku=F by inverting the matrix. """ import math @@ -14,7 +14,7 @@ from ansys.dpf.core import operators as ops ############################################################################### -# Create 2D (x,y) matrix fields for inertia, damping and stiffness +# Create 2D (x,y) matrix fields for inertia, damping, and stiffness. freq = [25, 50, 100, 200, 400] dim = 2 # dimension of matrix @@ -28,7 +28,7 @@ ############################################################################### # Create a fields container for real and imaginary parts -# for each frequency +# for each frequency.
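For reference, the system assembled here is the standard harmonic form of Ma+Dv+Ku=F: with the ansatz :math:`u = U e^{i\omega t}`, each frequency gives one complex linear system,

.. math::

   \left(-\omega^2 M + i\omega D + K\right) U(\omega) = F(\omega),
   \qquad
   U(\omega) = \left(-\omega^2 M + i\omega D + K\right)^{-1} F(\omega),

whose real and imaginary parts are exactly the two fields stored per frequency below.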
reals = {} ims = {} @@ -45,8 +45,8 @@ ) ############################################################################### -# Use dpf's operators to inverse the matrix, compute the amplitude -# and the phase +# Use DPF operators to invert the matrix and then compute the amplitude +# and the phase. inverse = ops.math.matrix_inverse(cplx_fc) component = ops.logic.component_selector_fc(inverse, 0) @@ -54,7 +54,7 @@ phase = ops.math.phase_fc(component) ############################################################################### -# Get the phase and amplitude and plot it over frequencies +# Get the phase and amplitude and then plot them over frequencies. amp_over_frequency = amp.outputs.fields_container() phase_over_frequency = phase.outputs.fields_container() time_freq_support = amp_over_frequency.time_freq_support diff --git a/examples/03-advanced/02-volume_averaged_stress.py b/examples/03-advanced/02-volume_averaged_stress.py index 977e3d55b6..c8c609f908 100644 --- a/examples/03-advanced/02-volume_averaged_stress.py +++ b/examples/03-advanced/02-volume_averaged_stress.py @@ -1,22 +1,22 @@ """ .. _ref_volume_averaged_stress_advanced: -Average Elemental Stress on a given volume -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Average elemental stress on a given volume +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to find the minimum list of surrounding elements for a given node to get a minimum volume. -For each list of elements, the elemental stress eqv are multiplied by the +For each list of elements, the elemental equivalent stress is multiplied by the volume of each element. This result is then accumulated to divide it by the -total volume +total volume. """ from ansys.dpf import core as dpf from ansys.dpf.core import examples from ansys.dpf.core import operators as ops ############################################################################### -# Create a model targeting a given result file. -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# The model will give an easy access to the mesh, time_freq_support ... +# Create a model targeting a given result file +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# The model provides easy access to the mesh and time frequency support. model = dpf.Model(examples.complex_rst) mesh = model.metadata.meshed_region @@ -24,8 +24,8 @@ # Volume size to check volume_check = 4.0e-11 -# get the all the node ids in the model to find the minimum amount of -# surrounding elements to get a minimum volume +# Get all node IDs in the model to find the minimum number of +# surrounding elements to get a minimum volume.
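Stated compactly, the quantity computed for each node's element list :math:`E` is the volume-weighted average of the equivalent stress (a direct restatement of the docstring above):

.. math::

   \bar{\sigma}_{\mathrm{eqv}} = \frac{\sum_{e \in E} \sigma_{\mathrm{eqv},e} \, V_e}{\sum_{e \in E} V_e}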
nodes = mesh.nodes.scoping nodes_ids = nodes.ids nodes_ids_to_compute = [] @@ -36,14 +36,14 @@ ############################################################################### # Read the volume by element -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~ vol_op = ops.result.elemental_volume() vol_op.inputs.streams_container(model.metadata.streams_provider) vol_field = vol_op.outputs.fields_container()[0] ############################################################################### # Find the minimum list of elements by node to get the volume check -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # get the connectivy and inverse connecitivity fields connectivity_field = mesh.elements.connectivities_field @@ -90,10 +90,12 @@ # Create workflow # ~~~~~~~~~~~~~~~~ # For each list of elements surrounding nodes: -# compute stress eqv averaged on elements -# apply dot product seqv.volume -# sum up those on the list of elements -# divide this sum by the total volume on those elements +# +# - Compute equivalent stress averaged on elements. +# - Apply the dot product seqv.volume. +# - Sum up those on the list of elements. +# - Divide this sum by the total volume on these elements. +# s = model.results.stress() to_elemental = ops.averaging.to_elemental_fc(s) @@ -129,13 +131,13 @@ divide.run() ############################################################################### -# Plot elemental seqv and volume averaged elemental seqv -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Plot elemental equivalent stress and volume-averaged elemental equivalent stress +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mesh.plot(values_to_sum_field) mesh.plot(divide.outputs.field()) ############################################################################### -# Use the Operator instead +# Use the operator instead # ~~~~~~~~~~~~~~~~~~~~~~~~~ # An operator with the same algorithm has been implemented s_fc = s.outputs.fields_container() diff --git a/examples/03-advanced/03-exchange_data_between_servers.py b/examples/03-advanced/03-exchange_data_between_servers.py index bd9cfbb7ca..0cb9ce4335 100644 --- a/examples/03-advanced/03-exchange_data_between_servers.py +++ b/examples/03-advanced/03-exchange_data_between_servers.py @@ -1,12 +1,12 @@ """ .. _ref_exchange_data_between_servers.: -Exchange Data Between Servers -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In this example, 2 dpf's servers will be started and a workflow will be -created with a part on both servers. This example opens the possibility for a -user to read data from a given machine and transform this data on another -without any more difficulties than working on a local computer +Exchange data between servers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +In this example, two DPF servers are started, and a workflow is created +with a part on both servers. This example shows how you can read data +from a given machine and transform this data on another machine +without any more difficulty than working on a local computer.
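A note on the mechanism this example relies on: DPF objects created on one server can be duplicated onto another. A minimal sketch, assuming ``deep_copy`` accepts a ``server`` argument; the factory and ``append`` calls appear elsewhere in these examples:

.. code-block:: python

    from ansys.dpf import core as dpf

    server1 = dpf.start_local_server(as_global=False)
    server2 = dpf.start_local_server(as_global=False)

    field = dpf.fields_factory.create_scalar_field(1, server=server1)
    field.append([1.0], 1)

    # Assumption: deep_copy(server=...) recreates the field on the second server.
    field_on_2 = field.deep_copy(server=server2)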
""" from ansys.dpf import core as dpf @@ -14,33 +14,33 @@ from ansys.dpf.core import operators as ops ############################################################################### -# Create 2 servers -# ~~~~~~~~~~~~~~~~~ -# Here the 2 servers are started on the local machine with start_local_server -# but, if the user has another server, he can connect on any dpf's server on -# the network via: connect_to_server - -# the as_global attributes allows to choose whether a server will be stored -# by the module and used by default -# Here, we choose the 1st server to be the default +# Create two servers +# ~~~~~~~~~~~~~~~~~~ +# Use the ``start_local_server()`` method to start two servers on your local +# machine. If you have another server, you can use the ``connect_to_server()`` +# method to connect to any DPF server on your network. + +# The ``as_global`` attributes allows you to choose whether a server is stored +# by the module and used by default. This example sets the first server as the default. server1 = dpf.start_local_server(as_global=True, config=dpf.AvailableServerConfigs.GrpcServer) server2 = dpf.start_local_server(as_global=False, config=dpf.AvailableServerConfigs.GrpcServer) -# Check that the 2 servers are on different ports +# Check that the two servers are listening on different ports. print(server1.port if hasattr(server1, "port") else "", server2.port if hasattr(server2, "port") else "") ############################################################################### # Send the result file -# ~~~~~~~~~~~~~~~~~~~~~ -# Here, the result file is sent in a temporary dir of the first server -# This file upload is useless in our case, since the 2 servers are locals +# ~~~~~~~~~~~~~~~~~~~~ +# The result file is sent to the temporary directory of the first server. +# This file upload is useless in this case because the two servers are local +# machines. 
file = examples.complex_rst file_path_in_tmp = dpf.upload_file_in_tmp_folder(file) ############################################################################### # Create a workflow on the first server -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Create the model model = dpf.Model(file_path_in_tmp) @@ -50,18 +50,18 @@ ############################################################################### # Create a workflow on the second server -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# Change the cartesian coordinates to cylindrical coordinates cs +# Change the Cartesian coordinates to cylindrical coordinates cs coordinates = ops.geo.rotate_in_cylindrical_cs_fc(server=server2) -# Create the cartesian coordinate cs +# Create the Cartesian coordinate cs cs = dpf.fields_factory.create_scalar_field(12, server=server2) cs.data = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0] coordinates.inputs.coordinate_system(cs) -# choose the radial component to plot +# Choose the radial component to plot comp = dpf.operators.logic.component_selector_fc(coordinates, 0, server=server2) ############################################################################### @@ -79,7 +79,7 @@ ############################################################################### # Plot the output -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# ~~~~~~~~~~~~~~~ out = comp.outputs.fields_container() # real part diff --git a/examples/03-advanced/04-extrapolation_stress_3d.py b/examples/03-advanced/04-extrapolation_stress_3d.py index 8b9bc42edc..036e7dc010 100644 --- a/examples/03-advanced/04-extrapolation_stress_3d.py +++ b/examples/03-advanced/04-extrapolation_stress_3d.py @@ -1,13 +1,13 @@ """ .. _extrapolation_test_stress_3Delement: -Extrapolation Method for stress result of 3D-element +Extrapolation method for stress result of a 3D element ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to compute the nodal components stress from -Gaussian points (integration points) for 3D-element by using the method +Gaussian points (integration points) for a 3D element by using the method of extrapolation. -Extrapolating results available at Gauss or quadrature points to nodal +Extrapolate results available at Gaussian or quadrature points to nodal points for a field or fields container. The available elements are: * Linear quadrangle @@ -17,16 +17,16 @@ * Linear tetrahedral * Quadratic tetrahedral -1st step : Get the data source's solution from the integration points (this -result file was generated with the MAPDL option ``EREXS, NO``). +Here are the steps for extrapolation: -2nd step: Use operator of extrapolation to compute the nodal stress. - -3rd step: Get nodal stress result from data source's analysis reference. -The analysis was computed by Ansys Mechanical APDL. - -4th step: Compare the results between nodal stress from data source -reference and nodal stress computed by the extrapolation method. +#. Get the data source's solution from the integration points. (This + result file was generated with the Ansys Mechanical APDL (MAPDL) + option ``ERESX,NO``.) +#. Use the extrapolation operator to compute the nodal stress. +#. Get the result for nodal stress from the data source. + The analysis was computed by MAPDL. +#. Compare the result for nodal stress from the data source + and the nodal stress computed by the extrapolation method.
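The four steps above, condensed into a sketch. The file keys and the ``stress`` operator calls are taken from this example; placing ``gauss_to_node_fc`` in the ``averaging`` category and chaining operators positionally follow the conventions used elsewhere in this gallery:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples

    datafile = examples.download_extrapolation_3d_result()

    # Step 1: stress sampled at the integration points (the ERESX,NO file)
    ds_integ = dpf.DataSources(datafile["file_integrated"])
    stress_integ = dpf.operators.result.stress(data_sources=ds_integ)

    # Step 2: extrapolate the Gaussian-point values to the nodes
    extrapolated = dpf.operators.averaging.gauss_to_node_fc(stress_integ)

    # Step 3: reference nodal stress computed by MAPDL
    ds_ref = dpf.DataSources(datafile["file_ref"])
    stress_ref = dpf.operators.result.stress(data_sources=ds_ref)

    # Step 4: the comparison uses ``identical_fc``, shown later in this example.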
""" @@ -34,32 +34,32 @@ from ansys.dpf.core import examples ############################################################################### -# Get the data source's analyse of integration points and data source's analyse reference +# Get the data source's analysis of integration points and analysis reference datafile = examples.download_extrapolation_3d_result() -# integration points (Gaussian points) +# Get integration points (Gaussian points) data_integration_points = datafile["file_integrated"] data_sources_integration_points = dpf.DataSources(data_integration_points) -# reference +# Get the reference dataSourceref = datafile["file_ref"] data_sources_ref = dpf.DataSources(dataSourceref) -# get the mesh +# Get the mesh model = dpf.Model(data_integration_points) mesh = model.metadata.meshed_region -# operator instantiation scoping +# Operator instantiation scoping op_scoping = dpf.operators.scoping.split_on_property_type() # operator instantiation op_scoping.inputs.mesh.connect(mesh) op_scoping.inputs.requested_location.connect("Elemental") mesh_scoping = op_scoping.outputs.mesh_scoping() ############################################################################### -# Extrapolation from integration points for stress result -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# In this example we compute nodal component stress result from -# integration points stress by using the ``gauss_to_node_fc`` operator. +# Extrapolate from integration points for stress result +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# This example uses the ``gauss_to_node_fc`` operator to compute the nodal +# component stress result from the stress result of integration points. # Create stress operator to get stress result of integration points stressop = dpf.operators.result.stress() @@ -67,17 +67,17 @@ stress = stressop.outputs.fields_container() ############################################################################### -# Nodal stress result of integration points: +# Nodal stress result of integration points ############################################################################### -# The command ``ERESX,NO`` in Mechanical APDL is used to copy directly the -# gaussian (integration) points results to the nodes, instead of the +# The MAPLD command ``ERESX,NO``is used to copy directly the +# Gaussian (integration) points results to the nodes, instead of the # results at nodes or elements (which are interpolation of results at a # few gauss points). -# The following plot shows the nodal values which are the averaged values +# The following plot shows the nodal values, which are the averaged values # of stresses at each node. The value shown at the node is the average of -# the stresses from the gaussian points of each element that it belongs to. +# the stresses from the Gaussian points of each element that it belongs to. 
-# plot +# Plot stress_nodal_op = dpf.operators.averaging.elemental_nodal_to_nodal_fc() stress_nodal_op.inputs.fields_container.connect(stress) mesh.plot(stress_nodal_op.outputs.fields_container()) @@ -95,7 +95,7 @@ fex = ex_stress.outputs.fields_container() ############################################################################### -# Stress result of reference ANSYS Workbench +# Stress result of reference Ansys Workbench # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Stress from file dataSourceref @@ -107,7 +107,7 @@ ############################################################################### # Plot # ~~~~~~~~~~ -# Showing plots of Extrapolation's stress result and reference's stress result +# Show plots of the extrapolation's stress result and the reference's stress result # extrapolation fex_nodal_op = dpf.operators.averaging.elemental_nodal_to_nodal_fc() @@ -119,13 +119,13 @@ mesh.plot(stress_ref_nodal_op.outputs.fields_container()) ############################################################################### -# Comparison -# ~~~~~~~~~~~~ -# Compare the stress result computed by extrapolation and reference's result. -# Check if two fields container are identical. -# Maximum tolerance gap between to compared values: 1e-2. -# Smallest value which will be considered during the comparison -# step : all the ``abs(values)`` in field less than 1e-8 is considered as null +# Compare stress results +# ~~~~~~~~~~~~~~~~~~~~~~ +# Compare the stress result computed by extrapolation and the reference's result. +# Check if the two fields containers are identical. +# The maximum tolerance gap between two compared values is 1e-2. +# The smallest value considered during the comparison step: all the +# ``abs(values)`` in the field less than 1e-8 are considered null. # operator AreFieldsIdentical_fc op = dpf.operators.logic.identical_fc() diff --git a/examples/03-advanced/05-extrapolation_strain_2d.py b/examples/03-advanced/05-extrapolation_strain_2d.py index 09c9aa5287..71fddf10ec 100644 --- a/examples/03-advanced/05-extrapolation_strain_2d.py +++ b/examples/03-advanced/05-extrapolation_strain_2d.py @@ -1,13 +1,13 @@ """ .. _extrapolation_test_strain_2Delement: -Extrapolation Method for strain result of 2D-element -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Extrapolation method for strain result of a 2D element +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to compute the nodal component elastic strain -from Gaussian points (integration points) for 2D-element by using the -method of extrapolation. +from Gaussian points (integration points) for a 2D element by using the +extrapolation method. -Extrapolating results available at Gauss or quadrature points to nodal +Extrapolate results available at Gaussian or quadrature points to nodal points for a field or fields container. The available elements are: * Linear quadrangle @@ -17,16 +17,17 @@ * Linear tetrahedral * Quadratic tetrahedral -1st step : Get the values at the data source's of integration points (this result -file was generated from MAPDL with ``EREXS, NO``). +Here are the steps for extrapolation: -2nd step: using operator of extrapolation to compute the nodal elastic strain. +#. Get the data source's solution from the integration points. (This + result file was generated with the Ansys Mechanical APDL (MAPDL) + option ``ERESX,NO``.) +#. Use the extrapolation operator to compute the nodal elastic strain. +#. Get the result for nodal elastic strain from the data source.
+ The analysis was computed by MAPDL. +#. Compare the result for nodal elastic strain from the data source + and the nodal elastic strain computed by the extrapolation method. -3rd step: Get the nodal elastic strain result from the data source. -The analysis was computed by Ansys Mechanical APDL. - -4th step: Compare the results between nodal elastic strain from the data -source and nodal strain computed by extrapolation method. """ @@ -50,11 +51,10 @@ mesh = model.metadata.meshed_region ############################################################################### -# Extrapolation from integration points for elastic strain result -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -# In this example we compute nodal component elastic strain results from -# the elastic strain at the integration points by using the ``gauss_to_node_fc`` -# operator. +# Extrapolate from integration points for elastic strain result +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# This example uses the ``gauss_to_node_fc`` operator to compute nodal component +# elastic strain results from the elastic strain at the integration points. # Create elastic strain operator to get strain result of integration points strainop = dpf.operators.result.elastic_strain() @@ -64,12 +64,14 @@ ############################################################################### # Nodal elastic strain result of integration points: ############################################################################### -# The command ``ERESX,NO`` in Mechanical APDL is used to copy directly the -# gaussian (integration) points results to the nodes, instead of the results -# at nodes or elements (which are interpolation of results at a few gauss points). -# The following plot shows the nodal values which are the averaged values +# The command ``ERESX,NO`` in MAPDL is used to directly copy the +# Gaussian (integration) points results to the nodes, instead of the results +# at nodes or elements (which are an interpolation of results at a few +# Gaussian points). +# +# The following plot shows the nodal values that are the averaged values # of elastic strain at each node. The value shown at the node is the -# average of the elastic strains from the gaussian points of each element +# average of the elastic strains from the Gaussian points of each element # that it belongs to. # plot @@ -90,8 +92,8 @@ fex = ex_strain.outputs.fields_container() ############################################################################### -# Elastic strain result of reference ANSYS Workbench -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Elastic strain result of reference Ansys Workbench +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Strain from file dataSourceref strainop_ref = dpf.operators.result.elastic_strain() @@ -100,8 +102,8 @@ ############################################################################### # Plot -# ~~~~~~~~~~ -# Showing plots of Extrapolation's elastic strain result and reference's elastic strain result +# ~~~~ +# Show plots of the extrapolation's elastic strain result and the reference's elastic strain result # extrapolation fex_nodal_op = dpf.operators.averaging.elemental_nodal_to_nodal_fc() @@ -116,10 +118,10 @@ # Comparison # ~~~~~~~~~~~~ # Compare the elastic strain result computed by extrapolation and reference's result. -# Check if two fields container are identical. -# Maximum tolerance gap between to compared values: 1e-3.
-# Smallest value which will be considered during the comparison -# step : all the ``abs(values)`` in the field less than 1e-14 are considered null +# Check if the two fields containers are identical. +# The maximum tolerance gap between two compared values is 1e-3. +# The smallest value considered during the comparison +# step: all the ``abs(values)`` in the field less than 1e-14 are considered null. # operator AreFieldsIdentical_fc op = dpf.operators.logic.identical_fc() diff --git a/examples/03-advanced/06-stress_gradient_path.py b/examples/03-advanced/06-stress_gradient_path.py index 3b53eb950f..879663e533 100644 --- a/examples/03-advanced/06-stress_gradient_path.py +++ b/examples/03-advanced/06-stress_gradient_path.py @@ -1,18 +1,18 @@ """ .. _stress_gradient_path: -Stress gradient normal to a defined node. -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Stress gradient normal to a defined node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example shows how to plot a stress gradient normal to a selected node. -As the example is based on creating a path along the normal, the selected node +Because the example is based on creating a path along the normal, the selected node must be on the surface of the geometry. A path is created of a defined length. """ ############################################################################### -# First, import the DPF-Core module as ``dpf`` and import the -# included examples file and ``DpfPlotter`` +# Import the DPF-Core module as ``dpf`` and import the +# included examples file and ``DpfPlotter``. # import matplotlib.pyplot as plt from ansys.dpf import core as dpf @@ -21,7 +21,7 @@ from ansys.dpf.core import examples ############################################################################### -# Next, open an example and print out the ``model`` object. The +# Open an example and print out the ``Model`` object. The # :class:`Model ` class helps to organize access # methods for the result by keeping track of the operators and data sources # used by the result file. @@ -38,16 +38,16 @@ model = dpf.Model(path) print(model) ############################################################################### -# Define the `node_id` normal to which a stress gradient should be plotted. +# Define the node ID normal to which the stress gradient is plotted. # node_id = 1928 ############################################################################### -# The following command prints the mesh unit +# Print the mesh unit # unit = model.metadata.meshed_region.unit print("Unit: %s" % unit) ############################################################################### -# `depth` defines the path length / depth to which the path will penetrate. +# `depth` defines the length/depth to which the path penetrates. # While defining `depth` make sure you use the correct mesh unit. # `delta` defines distance between consecutive points on the path. depth = 10 # in mm @@ -80,7 +80,7 @@ normal.inputs.mesh_scoping.connect(nodal_scoping) normal_vec_out_field = normal.outputs.field.get_data() ############################################################################### -# Normal vector is along the surface normal. We need to invert the vector +# The normal vector is along the surface normal. You need to invert the vector # using `math.scale` operator inwards in the geometry, to get the path # direction.
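To make the path construction concrete before the code resumes: starting from the node's coordinates and stepping inward along the inverted normal gives the list of points. A sketch; ``line_fp``, ``normal_vec_in``, ``depth``, and ``delta`` are the names this example defines:

.. code-block:: python

    # Points spaced ``delta`` apart, from the surface node down to ``depth``.
    coordinates = [
        [
            line_fp[0] + i * delta * normal_vec_in[0],
            line_fp[1] + i * delta * normal_vec_in[1],
            line_fp[2] + i * delta * normal_vec_in[2],
        ]
        for i in range(int(depth / delta))
    ]
    flat_coordinates = [entry for data in coordinates for entry in data]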
# @@ -88,7 +88,7 @@ ponderation=-1.0) normal_vec_in = normal_vec_in_field.outputs.field.get_data().data[0] ############################################################################### -# Get Nodal coordinates, they serve as the first point on the line. +# Get the nodal coordinates. They serve as the first point on the line. # node = mesh.nodes.node_by_id(node_id) line_fp = node.coordinates @@ -105,13 +105,13 @@ range(int(depth / delta))] flat_coordinates = [entry for data in coordinates for entry in data] ############################################################################### -# Create Field for coordinates of the path. +# Create a field for the coordinates of the path. # field_coord = dpf.fields_factory.create_3d_vector_field(len(coordinates)) field_coord.data = flat_coordinates field_coord.scoping.ids = list(range(1, len(coordinates) + 1)) ############################################################################### -# Let's now map results on the path. +# Map results on the path. mapping_operator = ops.mapping.on_coordinates( fields_container=stress_fc, coordinates=field_coord, @@ -119,7 +119,7 @@ mesh=mesh) fields_mapped = mapping_operator.outputs.fields_container() ############################################################################### -# Here, we request the mapped field data and its mesh +# Request the mapped field data and its mesh. field_m = fields_mapped[0] mesh_m = field_m.meshed_region ############################################################################### @@ -132,9 +132,9 @@ plt.ylabel("Stress (%s)" % field_m.unit) plt.show() ############################################################################### -# To create a plot we need to add both the meshes -# `mesh_m` - mapped mesh -# `mesh` - original mesh +# Create a plot to add both meshes to. +# ``mesh_m`` - mapped mesh +# ``mesh`` - original mesh pl = DpfPlotter() pl.add_field(field_m, mesh_m) pl.add_mesh(mesh, style="surface", show_edges=True, diff --git a/examples/03-advanced/10-asme_secviii_divtwo.py b/examples/03-advanced/10-asme_secviii_divtwo.py index aae430d652..c65baf4229 100644 --- a/examples/03-advanced/10-asme_secviii_divtwo.py +++ b/examples/03-advanced/10-asme_secviii_divtwo.py @@ -1,13 +1,11 @@ """ .. _ref_ASME_SecVIII_Div2: -ASME Section VIII Division 2: pressure vessels ----------------------------------------------- -This example demonstrates how PyDPF might be used to postprocess a Mechanical -model according to an international standard. - -The standard chosen for this example is the well-known ASME Section VIII Division -2 used for pressure vessels design. +Pressure vessel analysis according to an ASME standard +------------------------------------------------------ +This example demonstrates how you can use PyDPF to postprocess a Mechanical +model according to the ASME Section VIII Division 2 standard for pressure +vessel designs. This example is taken from Workshop 02.1 from Ansys Mechanical Advanced Topics. Instead of using several user defined results as it is done in the workshop, @@ -18,9 +16,9 @@ that calculation is made according to latest ASME standard. """ -# Here we import rst file from Workshop 02.1 -# Since it is a elastic-plastic analysis, there are several substeps. We focus -# on the latest substep (number 4) +# Import the result file from Workshop 02.1. +# Because it is an elastic-plastic analysis, there are several substeps.
The focus +# here is on the latest substep (number 4) import ansys.dpf.core as dpf from ansys.dpf.core import examples @@ -37,25 +35,29 @@ ############################################################################### # Parameters input # ~~~~~~~~~~~~~~~~ -# User must go to ASME Section III Division 2 and get parameters alfasl & m2 -# Below the code if user is going to introduce these parameters manually -# alfasl = input("Please introduce alfasl parameter from ASME\n") -# alfasl = float(alfasl) -# m2 = input("Please introduce m2 parameter from ASME\n") -# m2 = float(m2) -# Values for this exercise: alfasl = 2.2 & m2 = .288, same as original +# You must go to ASME Section III Division 2 to get values for the parameters +# ``alfasl`` and ``m2``. This is the code for introducing these parameters +# manually: +# +# - ``alfasl`` = input("Introduce ``alfasl`` parameter from ASME\n") +# - ``alfasl`` = float(alfasl) +# -``m2`` = input("Introduce ``m2`` parameter from ASME\n") +# - ``m2`` = float(m2) +# +# For this exercise, ``alfasl`` = 2.2 and ``m2`` = .288, which is the same +# as the original. # alfasl = 2.2 m2 = .288 ############################################################################### -# Stresses & strains -# ~~~~~~~~~~~~~~~~~~ -# Stresses and strains are read. For getting same results as Mechanical, we read -# Elemental Nodal strains and apply Von Mises invariant. Currently this operator -# does not have the option to define effective Poisson's ratio. Due to this, -# a correction factor is applied. +# Stresses and strains +# ~~~~~~~~~~~~~~~~~~~~ +# Stresses and strains are read. To get the same results as Mechanical, read +# elemental nodal strains and apply von Mises invariant. This operator +# does not have an option for defining the effective Poisson's ratio. +# Consequently, a correction factor is applied. seqv_op = dpf.operators.result.stress_von_mises(time_scoping = timeScoping, data_sources = dataSource, diff --git a/examples/03-advanced/README.txt b/examples/03-advanced/README.txt index ab62a5fa99..2fb814604c 100644 --- a/examples/03-advanced/README.txt +++ b/examples/03-advanced/README.txt @@ -1,5 +1,5 @@ .. _advanced_examples: -Advanced and Miscellaneous Examples +Advanced and miscellaneous examples =================================== These demos show advanced use cases demonstrating high level of workflow customization diff --git a/examples/04-specific-requests/00-hdf5_double_float_comparison.py b/examples/04-specific-requests/00-hdf5_double_float_comparison.py index 549db75683..6fb6939d91 100644 --- a/examples/04-specific-requests/00-hdf5_double_float_comparison.py +++ b/examples/04-specific-requests/00-hdf5_double_float_comparison.py @@ -1,15 +1,16 @@ """ .. _ref_basic_hdf5: -Hdf5 export and compare precision +HDF5 export and compare precision ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This example shows how to use hdf5 format to export and -make a comparison between simple/double precision. +This example shows how to use HDF5 format to export and +compare simple precision versus double precision. """ ############################################################################### -# Import dpf module and its examples files, and create a temporary directory +# Import the ``dpf-core`` module and its examples files, and then create a +# temporary directory. import os import tempfile @@ -21,7 +22,7 @@ tmpdir = tempfile.mkdtemp() ############################################################################### -# Create the model and get stresses, displacements and mesh. 
+# Create the model and get the stresses, displacements, and mesh.
transient = examples.download_transient_result()
model = dpf.Model(transient)
@@ -31,7 +32,7 @@
mesh = model.metadata.meshed_region
###############################################################################
-# Create the hdf5 export operator. Hdf5 module should already be loaded.
+# Create the HDF5 export operator. The HDF5 module should already be loaded.
h5op = ops.serialization.serialize_to_hdf5()
print(h5op)
@@ -47,14 +48,14 @@
displacement.inputs.time_scoping.connect(timeIds)
###############################################################################
-# Connect inputs of the hdf5 export operator.
+# Connect inputs of the HDF5 export operator.
h5op.inputs.data1.connect(stress.outputs)
h5op.inputs.data2.connect(displacement.outputs)
h5op.inputs.data3.connect(mesh)
###############################################################################
-# Export with simple precision
+# Export with single precision.
directory = "c:/temp/"
if os.name == "posix":
@@ -64,14 +65,14 @@
h5op.run()
###############################################################################
-# Export with simple precision
+# Export with double precision.
h5op.inputs.export_floats.connect(False)
h5op.inputs.file_path.connect(os.path.join(tmpdir, directory, "dpf_double.h5"))
h5op.run()
###############################################################################
-# Comparison
+# Compare single precision versus double precision.
float_precision = os.stat(os.path.join(tmpdir, directory, "dpf_float.h5")).st_size
double_precision = os.stat(os.path.join(tmpdir, directory, "dpf_double.h5")).st_size
print(
diff --git a/examples/04-specific-requests/01-reduced_matrices_export.py b/examples/04-specific-requests/01-reduced_matrices_export.py
index 14bf5f550c..063b8e40b9 100644
--- a/examples/04-specific-requests/01-reduced_matrices_export.py
+++ b/examples/04-specific-requests/01-reduced_matrices_export.py
@@ -4,12 +4,13 @@
Get reduced matrices and make export
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This example shows how to get reduced matrices and
-export them to hdf5 and csv format.
+export them to HDF5 and CSV files.
"""
###############################################################################
-# Import dpf module and its examples files, and create a temporary directory
+# Import the ``dpf-core`` module and its example files, and then create a
+# temporary directory.
import os
import tempfile
@@ -21,7 +22,7 @@
tmpdir = tempfile.mkdtemp()
###############################################################################
-# Create the operator and connect dataSources
+# Create the operator and connect data sources.
ds = dpf.DataSources(examples.download_sub_file())
@@ -29,7 +30,7 @@
matrices_provider.inputs.data_sources.connect(ds)
###############################################################################
-# Get result fields container that contains the reduced matrices
+# Get the result fields container that contains the reduced matrices.
fields = matrices_provider.outputs.fields_container()
@@ -38,7 +39,7 @@
fields[0].data
###############################################################################
-# Export the result fields container in hdf5 format
+# Export the result fields container to an HDF5 file.
h5_op = ops.serialization.serialize_to_hdf5()
h5_op.inputs.data1.connect(matrices_provider.outputs)
@@ -46,7 +47,7 @@
h5_op.run()
###############################################################################
-# Export the result fields container in csv format
+# Export the result fields container to a CSV file.
csv_op = ops.serialization.field_to_csv()
csv_op.inputs.field_or_fields_container.connect(matrices_provider.outputs)
diff --git a/examples/04-specific-requests/README.txt b/examples/04-specific-requests/README.txt
index 57bf49d586..c2f97b5273 100644
--- a/examples/04-specific-requests/README.txt
+++ b/examples/04-specific-requests/README.txt
@@ -1,5 +1,5 @@
.. _specific_requests:
-Examples that targets specific requests
-=======================================
-These demos show how to solve specific requests.
+Examples for specific requests
+==============================
+These examples show how to solve specific requests.
diff --git a/examples/05-plotting/01-compare_results.py b/examples/05-plotting/01-compare_results.py
index da787f4f2d..bf5e690568 100644
--- a/examples/05-plotting/01-compare_results.py
+++ b/examples/05-plotting/01-compare_results.py
@@ -1,11 +1,10 @@
"""
.. _compare_results:
-Compare Results Using the Plotter
+Compare results using the plotter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how to plot several meshes/results combination
-over the same plotter, in order to compare them. The usecase will be
-to compare results at different time steps.
+This example shows how to plot several mesh/result combinations on the
+same plot so that you can compare results at different time steps.
"""
@@ -16,8 +15,8 @@
###############################################################################
# Compare two results
# ~~~~~~~~~~~~~~~~~~~
-# Now we will use an :class:`ansys.dpf.core.plotter.DpfPlotter` to plot two different
-# results over the same mesh and make a comparison.
+# Use the :class:`ansys.dpf.core.plotter.DpfPlotter` class to plot two different
+# results over the same mesh and compare them.
# Here we create a Model and request its mesh
model = dpf.Model(examples.msup_transient)
@@ -30,12 +29,12 @@
displacement_set15 = displacement_operator.outputs.fields_container()[1]
###############################################################################
-# Now we create an :class:`ansys.dpf.core.plotter.DpfPlotter` and add the
-# first mesh and the first result
+# Use the :class:`ansys.dpf.core.plotter.DpfPlotter` class to add plots for the
+# first mesh and the first result.
pl = DpfPlotter()
pl.add_field(displacement_set2, mesh_set2)
-# Then it is needed to create a new mesh and translate it along x axis
+# Create a new mesh and translate it along the x axis.
mesh_set15 = mesh_set2.deep_copy()
overall_field = dpf.fields_factory.create_3d_vector_field(1, dpf.locations.overall)
overall_field.append([0.2, 0.0, 0.0], 1)
@@ -44,7 +43,7 @@
coordinates_updated = add_operator.outputs.field()
coordinates_to_update.data = coordinates_updated.data
-# Finally we feed the DpfPlotter with the second mesh and the second result
-# and we plot the result
+# Use the :class:`ansys.dpf.core.plotter.DpfPlotter` class to add plots for the
+# second mesh and the second result.
pl.add_field(displacement_set15, mesh_set15)
pl.show_figure(show_axes=True)
diff --git a/examples/05-plotting/02-solution_combination.py b/examples/05-plotting/02-solution_combination.py
index 469714341f..ac6095112d 100644
--- a/examples/05-plotting/02-solution_combination.py
+++ b/examples/05-plotting/02-solution_combination.py
@@ -1,28 +1,27 @@
"""
.. _solution_combination:
-Load Case Combination for Principal Stress
+Load case combination for principal stress
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how to get a principal stress loadcase combination using DPF
-And highlight min/max values in the plot.
+This example shows how to get a principal stress load case combination using DPF
+and highlight min/max values in the plot.
"""
###############################################################################
-# First, import the DPF-Core module as ``dpf_core`` and import the
-# included examples file and ``DpfPlotter``
+# Import the ``dpf_core`` module, the included examples file, and the
+# ``DpfPlotter`` class.
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
from ansys.dpf.core.plotter import DpfPlotter
###############################################################################
-# Next, open an example and print out the ``model`` object. The
-# :class:`Model <ansys.dpf.core.model.Model>` class helps to organize access
+# Open an example and print the ``Model`` object. The
+# :class:`Model <ansys.dpf.core.model.Model>` class helps to organize access
# methods for the result by keeping track of the operators and data sources
-# used by the result
-# file.
+# used by the result file.
#
-# Printing the model displays:
+# Printing the model displays this metadata:
#
# - Analysis type
# - Available results
@@ -33,9 +32,9 @@
print(model)
###############################################################################
-# Get the stress tensor and connect time scoping.
-# Make sure to define ``Nodal`` as the requested location,
-# as the labels are supported only for Nodal results.
+# Get the stress tensor and connect the time scoping.
+# Make sure that you define ``"Nodal"`` as the scoping location because
+# labels are supported only for nodal results.
#
stress_tensor = model.results.stress()
time_scope = dpf.Scoping()
@@ -46,16 +45,17 @@
###############################################################################
# This code performs solution combination on two load cases.
# =>LC1 - LC2
-# You can access individual loadcases as the fields of a fields_container for ``stress_tensor``.
+# You can access individual load cases as the fields of a fields container for
+# the stress tensor.
# LC1: stress_tensor.outputs.fields_container.get_data()[0]
# LC2: stress_tensor.outputs.fields_container.get_data()[1]
#
-# Scale LC2 to -1
+# Scale LC2 to -1.
field_lc2 = stress_tensor.outputs.fields_container.get_data()[1]
stress_tensor_lc2_sc = dpf.operators.math.scale(field=field_lc2, ponderation=-1.0)
###############################################################################
-# Add load cases
+# Add load cases.
#
field_lc1 = stress_tensor.outputs.fields_container.get_data()[0]
stress_tensor_combi = dpf.operators.math.add(
@@ -63,30 +63,31 @@
)
###############################################################################
-# Principal Stresses are the Eigenvalues of the stress tensor.
-# Use ``principal_invariants`` to get S1, S2 and S3
+# Principal stresses are the eigenvalues of the stress tensor.
+# Use principal invariants to get S1, S2, and S3.
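+# For a quick cross-check on a single symmetric stress tensor, the same three
+# values can be obtained with NumPy. This short sketch is illustrative only:
+# the tensor values are made up, and only ``numpy`` (already a PyDPF-Core
+# dependency) is assumed::
+#
+#    import numpy as np
+#    sigma = np.array([[50.0, 10.0, 0.0],
+#                      [10.0, 20.0, 5.0],
+#                      [0.0, 5.0, -30.0]])
+#    # eigvalsh returns the eigenvalues of a symmetric matrix in ascending
+#    # order, so S1 is the last entry and S3 the first.
+#    s3, s2, s1 = np.linalg.eigvalsh(sigma)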
#
p_inv = dpf.operators.invariant.principal_invariants()
p_inv.inputs.field.connect(stress_tensor_combi)
###############################################################################
-# Print S1 - Maximum Principal stress
+# Print S1 (maximum principal stress).
#
print(p_inv.outputs.field_eig_1().data)
###############################################################################
-# Get the meshed region
+# Get the meshed region.
#
mesh_set = model.metadata.meshed_region
###############################################################################
# Plot the results on the mesh.
-# ``label_text_size`` and ``label_point_size`` control font size of the label.
+# The ``label_text_size`` and ``label_point_size`` arguments control the font
+# size of the label.
#
plot = DpfPlotter()
plot.add_field(p_inv.outputs.field_eig_1(), meshed_region=mesh_set)
-# You can set the camera positions using the `cpos` argument
-# The three tuples in the list `cpos` represent camera position-
-# focal point, and view up respectively.
+# You can set the camera positions using the ``cpos`` argument.
+# The three tuples in the list for the ``cpos`` argument represent the camera
+# position, focal point, and view up, respectively.
plot.show_figure(show_axes=True)
diff --git a/examples/05-plotting/03-labels.py b/examples/05-plotting/03-labels.py
index e2fe69387d..b92a9a9ba8 100644
--- a/examples/05-plotting/03-labels.py
+++ b/examples/05-plotting/03-labels.py
@@ -1,27 +1,26 @@
"""
.. _labels:
-Add Nodal Labels on Plots
+Add nodal labels on plots
~~~~~~~~~~~~~~~~~~~~~~~~~
-You can custom labels to specific nodes with specific label properties.
-If label for a node is missing, by default nodal scalar value is shown.
+You can use label properties to add custom labels to specific nodes.
+If a label for a node is missing, the nodal scalar value is shown by default.
"""
###############################################################################
-# First, import the DPF-Core module as ``dpf_core`` and import the
-# included examples file and ``DpfPlotter``
+# Import the ``dpf_core`` module, the included examples files, and the
+# ``DpfPlotter`` class.
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
from ansys.dpf.core.plotter import DpfPlotter
###############################################################################
-# Next, open an example and print out the ``model`` object. The
+# Open an example and print the ``Model`` object. The
# :class:`Model <ansys.dpf.core.model.Model>` class helps to organize access
# methods for the result by keeping track of the operators and data sources
-# used by the result
-# file.
+# used by the result file.
#
-# Printing the model displays:
+# Printing the model displays this metadata:
#
# - Analysis type
# - Available results
@@ -32,9 +31,9 @@
print(model)
###############################################################################
-# Get the stress tensor and connect time scoping.
-# Make sure to define the scoping as ``"Nodal"`` as the requested location,
-# as the labels are supported only for Nodal results.
+# Get the stress tensor and connect the time scoping.
+# Make sure that you define ``"Nodal"`` as the scoping location because
+# labels are supported only for nodal results.
#
stress_tensor = model.results.stress()
time_scope = dpf.Scoping()
@@ -55,12 +54,12 @@
field_norm_disp = norm_op2.outputs.fields_container()[0]
print(field_norm_disp)
###############################################################################
-# Get the meshed region
+# Get the meshed region.
#
mesh_set = model.metadata.meshed_region
###############################################################################
-# Plot the results on the mesh, show the minimum and maximum.
+# Plot the results on the mesh and show the minimum and maximum.
#
plot = DpfPlotter()
plot.add_field(
@@ -73,8 +72,8 @@
)
-# Add custom labels to specific nodes with specific label properties.
-# If label for a node is missing, by default nodal value is shown.
+# Use label properties to add custom labels to specific nodes.
+# If a label for a node is missing, the nodal value is shown by default.
my_nodes_1 = [mesh_set.nodes[0], mesh_set.nodes[10]]
my_labels_1 = ["MyNode1", "MyNode2"]
@@ -106,10 +105,10 @@
point_size=15,
)
-# Show figure
-# You can set the camera positions using the `cpos` argument
-# The three tuples in the list `cpos` represent camera position-
-# focal point, and view up respectively.
+# Show figure.
+# You can set the camera positions using the ``cpos`` argument.
+# The three tuples in the list for the ``cpos`` argument represent the camera
+# position, focal point, and view up, respectively.
plot.show_figure(
show_axes=True,
cpos=[(0.123, 0.095, 1.069), (-0.121, -0.149, 0.825), (0.0, 0.0, 1.0)],
diff --git a/examples/05-plotting/04-plot_on_path.py b/examples/05-plotting/04-plot_on_path.py
index 664d3dba22..af510f1b68 100644
--- a/examples/05-plotting/04-plot_on_path.py
+++ b/examples/05-plotting/04-plot_on_path.py
@@ -3,7 +3,7 @@
Plot results on a specific path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how to get a result mapped over a specific path,
+This example shows how to get a result mapped over a specific path and
how to plot it.
"""
@@ -14,19 +14,18 @@
from ansys.dpf.core.plotter import DpfPlotter
###############################################################################
-# Path plotting
-# ~~~~~~~~~~~~~
-# We will use an :class:`ansys.dpf.core.plotter.DpfPlotter` to plot a mapped result over
-# a defined path of coordinates.
+# Plot path
+# ~~~~~~~~~
+# Use the :class:`ansys.dpf.core.plotter.DpfPlotter` class to plot a mapped
+# result over a defined path of coordinates.
-# First, we need to create the model, request its mesh and its
-# displacement data
+# Create the model and request its mesh and displacement data.
model = dpf.Model(examples.static_rst)
mesh = model.metadata.meshed_region
stress_fc = model.results.stress().eqv().eval()
###############################################################################
-# Then, we create a coordinates field to map on
+# Create a coordinates field to map on.
coordinates = [[0.024, 0.03, 0.003]]
for i in range(1, 51):
coord_copy = coordinates[0].copy()
@@ -37,7 +36,7 @@
field_coord.scoping.ids = list(range(1, len(coordinates) + 1))
###############################################################################
-# Let's now compute the mapped data using the mapping operator
+# Compute the mapped data using the mapping operator.
mapping_operator = ops.mapping.on_coordinates(
fields_container=stress_fc,
coordinates=field_coord,
@@ -46,17 +45,17 @@
fields_mapped = mapping_operator.outputs.fields_container()
###############################################################################
-# Here, we request the mapped field data and its mesh
+# Request the mapped field data and its mesh.
field_m = fields_mapped[0]
mesh_m = field_m.meshed_region
###############################################################################
-# Now we create the plotter and add fields and meshes
+# Create the plotter and add fields and meshes.
pl = DpfPlotter()
pl.add_field(field_m, mesh_m)
pl.add_mesh(mesh, style="surface", show_edges=True,
color="w", opacity=0.3)
-# Finally we plot the result
+# Plot the result.
pl.show_figure(show_axes=True)
diff --git a/examples/05-plotting/README.txt b/examples/05-plotting/README.txt
index 166cb00133..941a88e4d0 100644
--- a/examples/05-plotting/README.txt
+++ b/examples/05-plotting/README.txt
@@ -1,5 +1,5 @@
.. _plotting_examples:
-Plotting Examples
+Plotting examples
=================
-These demos show how to use the DpfPlotter.
\ No newline at end of file
+These examples show how to use the ``DpfPlotter`` class.
\ No newline at end of file
diff --git a/examples/06-distributed-post/00-distributed_total_disp.py b/examples/06-distributed-post/00-distributed_total_disp.py
index 5d753d6545..afbf06ee34 100644
--- a/examples/06-distributed-post/00-distributed_total_disp.py
+++ b/examples/06-distributed-post/00-distributed_total_disp.py
@@ -1,11 +1,11 @@
"""
.. _ref_distributed_total_disp:
-Post processing of displacement on distributed processes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Postprocessing of displacement on distributed processes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To help understand this example the following diagram is provided. It shows
-the operator chain used to compute the final result.
+The following diagram shows the operator chain that is used to compute
+the final result.
.. graphviz::
@@ -48,22 +48,23 @@
"""
###############################################################################
-# Import dpf module and its examples files
+# Import the ``dpf-core`` module and its example files.
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
from ansys.dpf.core import operators as ops
###############################################################################
-# Configure the servers
-# ~~~~~~~~~~~~~~~~~~~~~~
-# Make a list of ip addresses and port numbers on which dpf servers are
-# started. Operator instances will be created on each of those servers to
-# address each a different result file.
-# In this example, we will post process an analysis distributed in 2 files,
-# we will consequently require 2 remote processes.
-# To make this example easier, we will start local servers here,
-# but we could get connected to any existing servers on the network.
+# Configure the servers.
+# Make a list of the IP addresses and port numbers on which the DPF servers are
+# started and listen. Operator instances are created on each of these servers
+# so that each can address a different result file.
+#
+# This example postprocesses an analysis distributed in two files.
+# Consequently, it requires two remote processes.
+#
+# To make it easier, this example starts local servers. However, you can
+# connect to any existing servers on your network.
global_server = dpf.start_local_server(
as_global=True, config=dpf.AvailableServerConfigs.InProcessServer
@@ -79,25 +80,27 @@
ports = [remote_server.port for remote_server in remote_servers]
###############################################################################
-# Print the ips and ports
+# Print the IP addresses and ports.
print("ips:", ips)
print("ports:", ports)
###############################################################################
-# Here we show how we could send files in temporary directory if we were not
-# in shared memory
+# Send files to the temporary directory if they are not in shared memory.
files = examples.download_distributed_files()
server_file_paths = [dpf.upload_file_in_tmp_folder(files[0], server=remote_servers[0]),
dpf.upload_file_in_tmp_folder(files[1], server=remote_servers[1])]
###############################################################################
-# Create the operators on the servers
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# On each server we create two new operators for 'displacement' and 'norm'
-# computations and define their data sources. The displacement operator
-# receives data from the data file in its respective server. And the norm
-# operator, being chained to the displacement operator, receives input from the
-# output of this one.
+# Create operators on each server
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# On each server, create two operators, one for displacement computations
+# and one for norm computations. Define their data sources:
+#
+# - The displacement operator receives data from the data file in its respective
+#   server.
+# - The norm operator, which is chained to the displacement operator, receives
+#   input from the output of the displacement operator.
+#
remote_operators = []
for i, server in enumerate(remote_servers):
displacement = ops.result.displacement(server=server)
@@ -107,8 +110,9 @@
displacement.inputs.data_sources(ds)
###############################################################################
-# Create a merge_fields_containers operator able to merge the results
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# Create an operator to merge results
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# Create the ``merge_fields_containers`` operator to merge the results.
merge = ops.utility.merge_fields_containers()
diff --git a/examples/06-distributed-post/01-distributed_workflows_on_remote.py b/examples/06-distributed-post/01-distributed_workflows_on_remote.py
index 65c0182d80..5c768c85f1 100644
--- a/examples/06-distributed-post/01-distributed_workflows_on_remote.py
+++ b/examples/06-distributed-post/01-distributed_workflows_on_remote.py
@@ -1,13 +1,14 @@
"""
.. _ref_distributed_workflows_on_remote:
-Create custom workflow on distributed processes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how distributed files can be read and post processed
-on distributed processes. After remote post processing,
-results are merged on the local process. In this example, different operator
-sequences are directly created on different servers. These operators are then
-connected together without having to care that they are on remote processes.
+Create a custom workflow on distributed processes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This example shows how to read and postprocess distributed files on
+distributed processes. After remote postprocessing, results are merged
+on the local process. This example creates different operator
+sequences directly on different servers. These operators are then
+connected together, without you having to worry that they are on
+remote processes.
.. graphviz::
@@ -44,17 +45,16 @@
"""
###############################################################################
-# Import dpf module and its examples files
+# Import the ``dpf-core`` module and its example files.
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
from ansys.dpf.core import operators as ops
###############################################################################
-# Configure the servers
-# ~~~~~~~~~~~~~~~~~~~~~
-# To make this example easier, we will start local servers here,
-# but we could get connected to any existing servers on the network.
+# Configure the servers.
+# To make it easier, this example starts local servers. However, you can
+# connect to any existing servers on your network.
global_server = dpf.start_local_server(
as_global=True, config=dpf.AvailableServerConfigs.InProcessServer
@@ -68,15 +68,14 @@
]
###############################################################################
-# Here we show how we could send files in temporary directory if we were not
-# in shared memory
+# Send files to the temporary directory if they are not in shared memory.
files = examples.download_distributed_files()
server_file_paths = [dpf.upload_file_in_tmp_folder(files[0], server=remote_servers[0]),
dpf.upload_file_in_tmp_folder(files[1], server=remote_servers[1])]
###############################################################################
-# First operator chain.
+# Create the first operator chain.
remote_operators = []
@@ -86,7 +85,7 @@
stress1.inputs.data_sources(ds)
###############################################################################
-# Second operator chain.
+# Create the second operator chain.
stress2 = ops.result.stress(server=remote_servers[1])
mul = stress2 * 2.0
@@ -95,13 +94,12 @@
stress2.inputs.data_sources(ds)
###############################################################################
-# Local merge operator.
+# Create the local merge operator.
merge = ops.utility.merge_fields_containers()
###############################################################################
-# Connect the operator chains together and get the output
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# Connect the operator chains together and get the output.
nodal = ops.averaging.to_nodal_fc(merge)
diff --git a/examples/06-distributed-post/02-distributed-msup_expansion.py b/examples/06-distributed-post/02-distributed-msup_expansion.py
index 1c0846c508..57128cab63 100644
--- a/examples/06-distributed-post/02-distributed-msup_expansion.py
+++ b/examples/06-distributed-post/02-distributed-msup_expansion.py
@@ -1,15 +1,15 @@
"""
.. _ref_distributed_msup:
-Distributed modal superposition
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how distributed files can be read and expanded
-on distributed processes. The modal basis (2 distributed files) is read
-on 2 remote servers and the modal response reading and the expansion is
-done on a third server.
+Distributed mode superposition (MSUP)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This example shows how to read and expand distributed files
+on distributed processes. The modal basis (two distributed files) is read
+on two remote servers. The modal response is then read and expanded on a
+third server.
-To help understand this example the following diagram is provided. It shows
-the operator chain used to compute the final result.
+The following diagram shows the operator chain that is used to compute
+the final result.
.. graphviz::
@@ -68,7 +68,7 @@
"""
###############################################################################
-# Import dpf module and its examples files.
+# Import the ``dpf-core`` module and its example files.
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
@@ -77,13 +77,15 @@
###############################################################################
# Configure the servers
# ~~~~~~~~~~~~~~~~~~~~~
-# Make a list of ip addresses and port numbers on which dpf servers are
-# started. Operator instances will be created on each of those servers to
-# address each a different result file.
-# In this example, we will post process an analysis distributed in 2 files,
-# we will consequently require 2 remote processes.
-# To make this example easier, we will start local servers here,
-# but we could get connected to any existing servers on the network.
+# Make a list of the IP addresses and port numbers on which the DPF servers are
+# started and listen. Operator instances are created on each of these servers
+# so that each server can address a different result file.
+#
+# This example postprocesses an analysis distributed in two files.
+# Consequently, it requires two remote processes.
+#
+# To make it easier, this example starts local servers. However, you can
+# connect to any existing servers on your network.
global_server = dpf.start_local_server(
as_global=True, config=dpf.AvailableServerConfigs.InProcessServer
@@ -99,23 +101,25 @@
ports = [remote_server.port for remote_server in remote_servers]
###############################################################################
-# Print the ips and ports.
+# Print the IP addresses and ports.
print("ips:", ips)
print("ports:", ports)
###############################################################################
-# Choose the file path.
+# Specify the file path.
base_path = examples.distributed_msup_folder
files = [base_path + r'/file0.mode', base_path + r'/file1.mode']
files_aux = [base_path + r'/file0.rst', base_path + r'/file1.rst']
###############################################################################
-# Create the operators on the servers
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# On each server we create two new operators, one for 'displacement' computations
-# and a 'mesh_provider' operator and then define their data sources. The displacement
-# and mesh_provider operators receive data from their respective data files on each server.
+# Create operators on each server
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# On each server, create two operators, one for displacement computations
+# and one for providing the mesh. Then, define their data sources. Both the
+# displacement operator and mesh provider operator receive data from their
+# respective data files on each server.
+
remote_displacement_operators = []
remote_mesh_operators = []
for i, server in enumerate(remote_servers):
@@ -129,10 +133,10 @@
mesh.inputs.data_sources(ds)
###############################################################################
-# Create a local operators chain for expansion
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# In the following series of operators we merge the modal basis, the meshes, read
-# the modal response and expand the modal response with the modal basis.
+# Create a local operator chain for expansion
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# The following series of operators merge the modal basis and the meshes, read
+# the modal response, and expand the modal response with the modal basis.
merge_fields = ops.utility.merge_fields_containers()
merge_mesh = ops.utility.merge_meshes()
diff --git a/examples/06-distributed-post/03-distributed-msup_expansion_steps.py b/examples/06-distributed-post/03-distributed-msup_expansion_steps.py
index 487dda9c82..c66d3b1120 100644
--- a/examples/06-distributed-post/03-distributed-msup_expansion_steps.py
+++ b/examples/06-distributed-post/03-distributed-msup_expansion_steps.py
@@ -1,15 +1,15 @@
"""
.. _ref_distributed_msup_steps:
-Distributed msup distributed modal response
+Distributed MSUP distributed modal response
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how distributed files can be read and expanded
-on distributed processes. The modal basis (2 distributed files) is read
-on 2 remote servers and the modal response (2 distributed files) reading and the expansion is
-done on a third server.
+This example shows how to read and expand distributed files on distributed
+processes. The modal basis (two distributed files) is read on two remote
+servers. The modal response (two distributed files) is then read and expanded
+on a third server.
-To help understand this example the following diagram is provided. It shows
-the operator chain used to compute the final result.
+The following diagram shows the operator chain that is used to compute
+the final result.
.. graphviz::
@@ -71,7 +71,7 @@
"""
###############################################################################
-# Import dpf module and its examples files.
+# Import the ``dpf-core`` module and its example files.
import os.path
from ansys.dpf import core as dpf
@@ -81,13 +81,15 @@
###############################################################################
# Configure the servers
# ~~~~~~~~~~~~~~~~~~~~~
-# Make a list of ip addresses and port numbers on which dpf servers are
-# started. Operator instances will be created on each of those servers to
-# address each a different result file.
-# In this example, we will post process an analysis distributed in 2 files,
-# we will consequently require 2 remote processes
-# To make this example easier, we will start local servers here,
-# but we could get connected to any existing servers on the network.
+# Make a list of the IP addresses and port numbers on which the DPF servers are
+# started and listen. Operator instances are created on each of these servers
+# so that each server can address a different result file.
+#
+# This example postprocesses an analysis distributed in two files.
+# Consequently, it requires two remote processes.
+#
+# To make it easier, this example starts local servers. However, you can
+# connect to any existing servers on your network.
global_server = dpf.start_local_server(
as_global=True, config=dpf.AvailableServerConfigs.InProcessServer
@@ -104,23 +106,25 @@
ports = [remote_server.port for remote_server in remote_servers]
###############################################################################
-# Print the ips and ports.
+# Print the IP addresses and ports.
print("ips:", ips)
print("ports:", ports)
###############################################################################
-# Choose the file path.
+# Specify the file path.
base_path = examples.distributed_msup_folder
files = [os.path.join(base_path, "file0.mode"), os.path.join(base_path, "file1.mode")]
files_aux = [os.path.join(base_path, "file0.rst"), os.path.join(base_path, "file1.rst")]
###############################################################################
-# Create the operators on the servers
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# On each server we create two new operators, one for 'displacement' computations
-# and a 'mesh_provider' operator, and then define their data sources. The displacement
-# and mesh_provider operators receive data from their respective data files on each server.
+# Create operators on each server
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# On each server, create two operators, one for displacement computations
+# and one for providing the mesh. Then, define their data sources. Both the
+# displacement operator and mesh provider operator receive data from their
+# respective data files on each server.
+
remote_displacement_operators = []
remote_mesh_operators = []
for i, server in enumerate(remote_servers):
@@ -134,10 +138,10 @@
mesh.inputs.data_sources(ds)
###############################################################################
-# Create a local operators chain for expansion
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# In the following series of operators we merge the modal basis, the meshes, read
-# the modal response and expand the modal response with the modal basis.
+# Create a local operator chain for expansion
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# The following series of operators merge the modal basis and the meshes, read
+# the modal response, and expand the modal response with the modal basis.
merge_fields = ops.utility.merge_fields_containers()
merge_mesh = ops.utility.merge_meshes()
diff --git a/examples/06-distributed-post/README.txt b/examples/06-distributed-post/README.txt
index f62374238b..f1cb2933a3 100644
--- a/examples/06-distributed-post/README.txt
+++ b/examples/06-distributed-post/README.txt
@@ -1,5 +1,6 @@
.. _distributed_post:
-Examples for post processing on distributed process
-===================================================
-These demos show how to create worflows on different processes (possibly on different machine) and how to connect those together.
+Examples for postprocessing on distributed processes
+====================================================
+These examples show how to create workflows on different processes (possibly on
+different machines) and connect them.
diff --git a/examples/07-python-operators/00-wrapping_numpy_capabilities.py b/examples/07-python-operators/00-wrapping_numpy_capabilities.py
index e5c8a47e24..3f272f8b79 100644
--- a/examples/07-python-operators/00-wrapping_numpy_capabilities.py
+++ b/examples/07-python-operators/00-wrapping_numpy_capabilities.py
@@ -1,27 +1,32 @@
"""
.. _ref_wrapping_numpy_capabilities:
-Write user defined Operator
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how to create a simple DPF python plugin holding a single Operator.
-This Operator called "easy_statistics" computes simple statistics quantities on a scalar Field with
-the help of numpy.
-It's a simple example displaying how routines can be wrapped in DPF python plugins.
+Create a basic operator plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This example shows how to create a basic operator plugin that wraps
+a single custom operator. This custom operator, ``easy_statistics``,
+computes simple statistics quantities on a scalar field with the help of
+the ``numpy`` package.
+
+The objective of this simple example is to show how routines for DPF can
+be wrapped in Python plugins.
"""
###############################################################################
-# Write Operator
-# --------------
-# To write the simplest DPF python plugins, a single python script is necessary.
-# An Operator implementation deriving from
-# :class:`ansys.dpf.core.custom_operator.CustomOperatorBase`
-# and a call to :py:func:`ansys.dpf.core.custom_operator.record_operator`
-# are the 2 necessary steps to create a plugin.
-# The "easy_statistics" Operator will take a Field in input and return
-# the first quartile, the median,
-# the third quartile and the variance. The python Operator and its recording seat in the
-# file plugins/easy_statistics.py. This file `easy_statistics.py` is downloaded
-# and displayed here:
+# Create the operator
+# -------------------
+# Creating a basic operator plugin consists of writing a single Python script
+# that defines an operator implementation deriving from the
+# :class:`ansys.dpf.core.custom_operator.CustomOperatorBase` class and a call
+# to the :py:func:`ansys.dpf.core.custom_operator.record_operator` method.
+#
+# The ``easy_statistics`` operator takes a field as an input and returns
+# the first quartile, the median, the third quartile, and the variance.
+# The Python operator and its recording are available in the
+# ``easy_statistics.py`` file.
+#
+# Download and display the Python script.
from ansys.dpf.core import examples
@@ -37,34 +42,39 @@
print('\t\t\t' + line)
###############################################################################
-# Load Plugin
-# -----------
-# Once a python plugin is written, it can be loaded with the function
-# :py:func:`ansys.dpf.core.core.load_library`
-# taking as first argument the path to the directory of the plugin, as second argument
-# ``py_`` + the name of
-# the python script, and as last argument the function's name used to record operators.
+# Load the plugin
+# ---------------
+# You use the :py:func:`ansys.dpf.core.core.load_library` function to load the
+# plugin.
+#
+# - The first argument is the path to the directory where the plugin
+#   is located.
+# - The second argument is ``py_`` plus the name of the Python script.
+# - The third argument is the name of the function used to record operators.
+#
import os
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
-# python plugins are not supported in process
+# Python plugins are not supported in process.
dpf.start_local_server(config=dpf.AvailableServerConfigs.GrpcServer)
operator_server_file_path = dpf.upload_file_in_tmp_folder(operator_file_path)
dpf.load_library(os.path.dirname(operator_server_file_path),
"py_easy_statistics",
"load_operators")
###############################################################################
-# Once the Operator loaded, it can be instantiated with:
+# Instantiate the operator.
new_operator = dpf.Operator("easy_statistics")
###############################################################################
-# To use this new Operator, a workflow computing the norm of the displacement
-# is connected to the "easy_statistics" Operator.
-# Methods of the class ``easy_statistics`` are dynamically added thanks to the Operator's
-# specification defined in the plugin.
+# Connect a workflow
+# ------------------
+# Connect a workflow that computes the norm of the displacement to the
+# ``easy_statistics`` operator. Methods of the ``easy_statistics`` class
+# are dynamically added because specifications for the operator are
+# defined in the plugin.
# %%
# .. graphviz::
@@ -81,8 +91,8 @@
# }
###############################################################################
-# Use the Custom Operator
-# -----------------------
+# Use the operator
+# ----------------
ds = dpf.DataSources(dpf.upload_file_in_tmp_folder(examples.static_rst))
displacement = dpf.operators.result.displacement(data_sources=ds)
diff --git a/examples/07-python-operators/01-package_python_operators.py b/examples/07-python-operators/01-package_python_operators.py
index e304750f2e..5d145e3795 100644
--- a/examples/07-python-operators/01-package_python_operators.py
+++ b/examples/07-python-operators/01-package_python_operators.py
@@ -1,31 +1,32 @@
"""
.. _ref_python_plugin_package:
-Write user defined Operators as a package
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how more complex DPF python plugins of Operators can be
-created as standard python packages.
-The benefits of writing packages instead of simple scripts are:
-componentization (split the code in several
-python modules or files), distribution (with packages,
-standard python tools can be used to upload and
-download packages) and documentation (READMEs, docs, tests and
-examples can be added to the package).
-
-This plugin will hold 2 different Operators:
- - One returning all the scoping ids having data higher than the average
 - One returning all the scoping ids having data lower than the average
+Create a plug-in package with multiple operators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This example shows how to create a plug-in package with multiple operators.
+The benefits of writing packages rather than simple scripts are:
+
+- **Componentization:** You can split the code into several Python modules or files.
+- **Distribution:** You can use standard Python tools to upload and download packages.
+- **Documentation:** You can add README files, documentation, tests, and examples to the package.
+
+For this example, the plug-in package contains two different operators:
+
+- One that returns all scoping IDs having data higher than the average
+- One that returns all scoping IDs having data lower than the average
+
"""
###############################################################################
-# Write Operator
-# --------------
-# For this more advanced use case, a python package is created.
-# Each Operator implementation derives from
-# :class:`ansys.dpf.core.custom_operator.CustomOperatorBase`
-# and a call to :py:func:`ansys.dpf.core.custom_operator.record_operator`
-# records the Operators of the plugin.
-# The python package `average_filter_plugin` is downloaded and displayed here:
+# Create the plug-in package
+# --------------------------
+# Each operator implementation derives from the
+# :class:`ansys.dpf.core.custom_operator.CustomOperatorBase` class.
+# A call to the :py:func:`ansys.dpf.core.custom_operator.record_operator`
+# method records the operators of the plug-in package.
+#
+# Download the ``average_filter_plugin`` plug-in package that has already been
+# created for you.
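+#
+# A minimal sketch of this pattern follows. The class name, pin numbers, and
+# filtering logic here are illustrative only, and the mandatory
+# ``specification`` property is omitted for brevity; see the downloaded
+# package for the complete implementation::
+#
+#    from ansys.dpf import core as dpf
+#    from ansys.dpf.core.custom_operator import CustomOperatorBase, record_operator
+#
+#    class IdsLowerThanAverage(CustomOperatorBase):
+#        @property
+#        def name(self):
+#            return "ids_with_data_lower_than_average"
+#
+#        def run(self):
+#            field = self.get_input(0, dpf.Field)  # read pin 0 as a field
+#            average = float(field.data.mean())
+#            ids = [i for i, d in zip(field.scoping.ids, field.data) if d < average]
+#            self.set_output(0, dpf.Scoping(ids=ids))  # write pin 0
+#            self.set_succeeded()
+#
+#    def load_operators(*args):
+#        record_operator(IdsLowerThanAverage, *args)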
import os
from ansys.dpf.core import examples
@@ -50,20 +51,23 @@
###############################################################################
-# Load Plugin
-# -----------
-# Once a python plugin is written as a package, it can be loaded with the function
-# :py:func:`ansys.dpf.core.core.load_library` taking as first argument the
-# path to the directory of the plugin,
-# as second argument ``py_`` + any name identifying the plugin,
-# and as last argument the function's name exposed in the __init__ file
-# and used to record operators.
+# Load the plug-in package
+# ------------------------
+# You use the function :py:func:`ansys.dpf.core.core.load_library` to load the
+# plug-in package.
+#
+# - The first argument is the path to the directory where the plug-in package
+#   is located.
+# - The second argument is ``py_`` plus any name identifying the plug-in package.
+# - The third argument is the name of the function exposed in the ``__init__``
+#   file for the plug-in package that is used to record operators.
+#
import os
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
-# python plugins are not supported in process
+# Python plugins are not supported in process.
dpf.start_local_server(config=dpf.AvailableServerConfigs.GrpcServer)
tmp = dpf.make_tmp_dir_server()
@@ -77,15 +81,18 @@
"load_operators")
###############################################################################
-# Once the Plugin loaded, Operators recorded in the plugin can be used with:
+# Instantiate the operator.
new_operator = dpf.Operator("ids_with_data_lower_than_average")
###############################################################################
-# To use this new Operator, a workflow computing the norm of the displacement
-# is connected to the "ids_with_data_lower_than_average" Operator.
-# Methods of the class ``ids_with_data_lower_than_average`` are dynamically
-# added thanks to the Operator's specification.
+# Connect a workflow
+# ------------------
+# Connect a workflow that computes the norm of the displacement
+# to the ``ids_with_data_lower_than_average`` operator.
+# Methods of the ``ids_with_data_lower_than_average`` class are dynamically
+# added because specifications for the operator are defined in the plug-in
+# package.
# %%
# .. graphviz::
@@ -102,8 +109,8 @@
# }
###############################################################################
-# Use the Custom Operator
-# -----------------------
+# Use the operator
+# ----------------
ds = dpf.DataSources(dpf.upload_file_in_tmp_folder(examples.static_rst))
displacement = dpf.operators.result.displacement(data_sources=ds)
diff --git a/examples/07-python-operators/02-python_operators_with_dependencies.py b/examples/07-python-operators/02-python_operators_with_dependencies.py
index 5e7f929e69..09a2f62df8 100644
--- a/examples/07-python-operators/02-python_operators_with_dependencies.py
+++ b/examples/07-python-operators/02-python_operators_with_dependencies.py
@@ -1,33 +1,32 @@
"""
.. _ref_python_operators_with_deps:
-Write user defined Operators having third party dependencies
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This example shows how advanced DPF python plugins of Operators
-can be created as standard python packages
-and how third party python modules dependencies can be added to the package.
-For a first introduction on user defined python Operators see example
-:ref:`ref_wrapping_numpy_capabilities`
-and for a simpler example on user defined python Operators as a package
-see :ref:`ref_python_plugin_package`.
-
-This plugin will hold an Operator which implementation depends on a
-third party python module named
-`gltf <https://pypi.org/project/gltf/>`_. This Operator takes a path,
-a mesh and 3D vector field in input and exports the mesh and the norm of the input
-field in a gltf file located at the given path.
-
+Create a plug-in package that has third-party dependencies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This example shows how to create a Python plug-in package with
+third-party dependencies. You should be familiar with these
+examples before proceeding with this more advanced one:
+
+- :ref:`ref_wrapping_numpy_capabilities`
+- :ref:`ref_python_plugin_package`
+
+This plug-in contains an operator whose implementation depends on a
+third-party Python module named `gltf <https://pypi.org/project/gltf/>`_.
+This operator takes a path, a mesh, and a 3D vector field as inputs
+and then exports the mesh and the norm of the 3D vector field to a GLTF
+file at the given path.
"""
###############################################################################
-# Write Operator
-# --------------
-# For this more advanced use case, a python package is created.
-# Each Operator implementation derives from
-# :class:`ansys.dpf.core.custom_operator.CustomOperatorBase`
-# and a call to :py:func:`ansys.dpf.core.custom_operator.record_operator`
-# records the Operators of the plugin.
-# The python package `gltf_plugin` is downloaded and displayed here:
+# Create the plug-in package
+# --------------------------
+# Each operator implementation derives from the
+# :class:`ansys.dpf.core.custom_operator.CustomOperatorBase` class.
+# A call to the :py:func:`ansys.dpf.core.custom_operator.record_operator`
+# method records the operators of the plug-in package.
+#
+# Download the ``gltf_plugin`` plug-in package that has already been
+# created for you.
import os
from ansys.dpf.core import examples
@@ -58,19 +57,22 @@
plugin_path = os.path.dirname(operator_file_path)
# %%
-# To add third party modules as dependencies to a custom DPF python plugin,
-# a folder or zip file with the sites of the dependencies needs to be created
-# and referenced in an xml located next to the plugin's folder
-# and having the same name as the plugin plus the ``.xml`` extension. The ``site``
-# python module is used by DPF when
-# calling :py:func:`ansys.dpf.core.core.load_library` function to add these custom
-# sites to the python interpreter path.
-# To create these custom sites, the requirements of the custom plugin should be
-# installed in a python virtual environment, the site-packages
-# (with unnecessary folders removed) should be zipped and put with the plugin. The
-# path to this zip should be referenced in the xml as done above.
-#
-# To simplify this step, a requirements file can be added in the plugin, like:
+# To add third-party modules as dependencies to a plug-in package, you must
+# create and reference a folder or ZIP file with the sites of the dependencies
+# in an XML file located next to the folder for the plug-in package. The XML
+# file must have the same name as the plug-in package plus an ``.xml`` extension.
+#
+# When the :py:func:`ansys.dpf.core.core.load_library` function is called,
+# DPF-Core uses the ``site`` Python module to add these custom sites to the path
+# for the Python interpreter.
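+# For reference, the standard-library ``site`` module exposes
+# ``site.addsitedir``, which appends a directory to ``sys.path`` and processes
+# any ``.pth`` files found there. For example, with a hypothetical path,
+# ``site.addsitedir("/path/to/plugin/assets/sites")`` would make the packaged
+# dependencies importable.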
+#
+# To create these custom sites, the requirements of the plug-in package should
+# be installed in a Python virtual environment. The site-packages
+# (with unnecessary folders removed) should then be compressed to a ZIP file
+# and placed with the plug-in package. The path to this ZIP file should be
+# referenced in the XML file as shown in the preceding code.
+#
+# To simplify this step, you can add a requirements file in the plug-in package:
#
print(f'\033[1m gltf_plugin/requirements.txt: \n \033[0m')
with open(os.path.join(plugin_path, "requirements.txt"), "r") as f:
@@ -79,24 +81,31 @@
# %%
-# And this :download:`powershell script <generated_by_sphinx/create_sites_for_python_operators.ps1>`
-# for windows or this :download:`shell script <generated_by_sphinx/create_sites_for_python_operators.sh>`
-# can be ran with the mandatory arguments:
+# Download the script for your operating system.
+#
+# - For Windows, download this
+#   :download:`PowerShell script <generated_by_sphinx/create_sites_for_python_operators.ps1>`.
+# - For Linux, download this
+#   :download:`Shell script <generated_by_sphinx/create_sites_for_python_operators.sh>`.
#
-# - -pluginpath : path to the folder of the plugin.
-# - -zippath : output zip file name.
+# Run the downloaded script with the mandatory arguments:
#
-# optional arguments are:
+# - ``-pluginpath``: Path to the folder with the plug-in package.
+# - ``-zippath``: Path and name for the ZIP file.
#
-# - -pythonexe : path to a python executable of your choice.
-# - -tempfolder : path to a temporary folder to work on, default is the environment variable
-# ``TEMP`` on Windows and /tmp/ on Linux.
+# Optional arguments are:
#
-# For windows powershell, call::
+# - ``-pythonexe``: Path to a Python executable of your choice.
+# - ``-tempfolder``: Path to a temporary folder to work in. The default is the environment variable
+#   ``TEMP`` on Windows and ``/tmp/`` on Linux.
+#
+# Run the command for your operating system.
+#
+# - From Windows PowerShell, run::
#
# create_sites_for_python_operators.ps1 -pluginpath /path/to/plugin -zippath /path/to/plugin/assets/winx64.zip # noqa: E501
#
-# For linux shell, call::
+# - From a Linux shell, run::
#
# create_sites_for_python_operators.sh -pluginpath /path/to/plugin -zippath /path/to/plugin/assets/linx64.zip # noqa: E501
@@ -133,19 +142,21 @@
print("\nInstalling pygltf in a virtual environment succeeded")
###############################################################################
-# Load Plugin
-# -----------
-# Once a python plugin is written as a package, it can be loaded with the function
-# :py:func:`ansys.dpf.core.core.load_library` taking as first argument
-# the path to the directory of the plugin,
-# as second argument ``py_`` + any name identifying the plugin,
-# and as last argument the function's name exposed in the ``__init__.py``
-# file and used to record operators.
+# Load the plug-in package
+# ------------------------
+# You use the function :py:func:`ansys.dpf.core.core.load_library` to load the
+# plug-in package.
+#
+# - The first argument is the path to the directory where the plug-in package
+#   is located.
+# - The second argument is ``py_`` plus any name identifying the plug-in package.
+# - The third argument is the name of the function exposed in the ``__init__``
+#   file for the plug-in package that is used to record operators.
from ansys.dpf import core as dpf
from ansys.dpf.core import examples
-# python plugins are not supported in process
+# Python plugins are not supported in process.
dpf.start_local_server(config=dpf.AvailableServerConfigs.GrpcServer)
tmp = dpf.make_tmp_dir_server()
@@ -164,17 +175,19 @@
"load_operators")
###############################################################################
-# Once the Plugin loaded, Operators recorded in the plugin can be used with:
+# Instantiate the operator.
new_operator = dpf.Operator("gltf_export")
-###############################################################################ser
-# This new Operator ``gltf_export`` requires a triangle surface mesh,
-# a displacement Field on this surface mesh
-# as well as an export path as inputs.
-# To demo this new Operator, a :class:`ansys.dpf.core.model.Model` on a simple file is created,
-# :class:`ansys.dpf.core.operators.mesh.tri_mesh_skin` Operator is used
-# to extract the surface of the mesh in triangles elements.
+###############################################################################
+# This new ``gltf_export`` operator requires the following as inputs: a triangle
+# surface mesh, a displacement field on this surface mesh, and a path to export
+# the GLTF file to.
+#
+# To demonstrate this new operator, an instance of the
+# :class:`ansys.dpf.core.model.Model` class is created from a simple file, and
+# the :class:`ansys.dpf.core.operators.mesh.tri_mesh_skin` operator is used
+# to extract the surface of the mesh in triangle elements.
# %%
# .. graphviz::
@@ -194,7 +207,7 @@
# }
###############################################################################
-# Use the Custom Operator
+# Use the custom operator
# -----------------------
import os
@@ -217,4 +230,4 @@
dpf.download_file(os.path.join(tmp, "out.glb"), os.path.join(os.getcwd(), "out.glb"))
# %%
-# The gltf Operator output can be downloaded :download:`here <images/thumb/out02.glb>`.
+# You can download the :download:`output <images/thumb/out02.glb>` from the
+# ``gltf_export`` operator.
diff --git a/examples/07-python-operators/README.txt b/examples/07-python-operators/README.txt
index 75f238a1d6..1e71b62687 100644
--- a/examples/07-python-operators/README.txt
+++ b/examples/07-python-operators/README.txt
@@ -1,6 +1,7 @@
.. _python_operators:
-Examples of custom python plugins of Operators
-==============================================
-These demos show how to create your own DPF plugin using pyDPF API. These plugins hold custom
-Operators enabling to wrap your custom capabilities and to use them as any native DPF's Operator.
+Examples of creating custom operator plugins
+============================================
+These examples show how to create a basic operator plugin or a plug-in
+package with multiple operators. Plugins wrap your custom operators
+so that you can use them like native DPF operators.
diff --git a/examples/08-averaging/00-compute_and_average.py b/examples/08-averaging/00-compute_and_average.py
index 1ce2933128..712433f947 100644
--- a/examples/08-averaging/00-compute_and_average.py
+++ b/examples/08-averaging/00-compute_and_average.py
@@ -3,14 +3,16 @@
Averaging order
~~~~~~~~~~~~~~~
-In this example, we compare two different workflows that accomplish the same task to see
-how the order of the operators can change the end result. In the first case, we will extract the
-stress field of a crankshaft under load from a results file, compute the equivalent (Von Mises)
-stresses and then apply an averaging operator to transpose them from an ElementalNodal
-to a Nodal position. In the second case, however, we will firstly transpose the
-stresses that come from the results file to a Nodal position and only then calculate
-the Von Mises stresses.
-These workflows can be better visualized in the images below:
+This example compares two different workflows that accomplish the same task to show
+how the order of the operators can change the end result.
+
+- The first workflow extracts the stress field of a crankshaft under load from a
+  result file, computes the equivalent (von Mises) stresses, and then applies an
+  averaging operator to transpose them from ``ElementalNodal`` to ``Nodal`` positions.
+- The second workflow first transposes the stresses that come from the result file
+  to a ``Nodal`` position and then calculates the von Mises stresses.
+
+The following images show these workflows:

 .. graphviz::

@@ -52,45 +54,45 @@
 }
 """
 ###############################################################################
-# Let's start by importing the necessary modules.
+# Import the necessary modules.

 from ansys.dpf import core as dpf
 from ansys.dpf.core import examples

 ###############################################################################
-# Then we can load the simulation results from a .rst file.
+# Load the simulation results from an RST file.

 analysis = examples.download_crankshaft()

 ###############################################################################
-# First case: applying the averaging operator after computing the equivalent stresses
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# Here we are going to define a function that computes the Von
-# Mises stresses in the crankshaft and then applies the desired averaging operator.
-
+# Create the first workflow
+# ~~~~~~~~~~~~~~~~~~~~~~~~~
+# The first workflow applies the averaging operator after computing the equivalent
+# stresses. To create it, define a function that computes the von Mises stresses
+# in the crankshaft and then applies the averaging operator.
def compute_von_mises_then_average(analysis):
-    # First we create a model from the results of the simulation and retrieve its mesh
+    # Create a model from the results of the simulation and retrieve its mesh
     model = dpf.Model(analysis)
     mesh = model.metadata.meshed_region

-    # Then we apply the stress operator to obtain the stresses in the body
+    # Apply the stress operator to obtain the stresses in the body
     stress_op = dpf.operators.result.stress()
     stress_op.inputs.connect(model)
     stresses = stress_op.outputs.fields_container()

-    # Here we compute the Von Mises stresses
+    # Compute the von Mises stresses
     vm_op = dpf.operators.invariant.von_mises_eqv()
     vm_op.inputs.field.connect(stresses)
     von_mises = vm_op.outputs.field()

-    # Finally, we apply the averaging operator to the Von Mises stresses
+    # Apply the averaging operator to the von Mises stresses
     avg_op = dpf.operators.averaging.elemental_nodal_to_nodal()
     avg_op.inputs.connect(von_mises)
     avg_von_mises = avg_op.outputs.field()

-    # Aditionally we find the maximum value of the Von Mises stress field
+    # Find the maximum value of the von Mises stress field
     min_max = dpf.operators.min_max.min_max()
     min_max.inputs.field.connect(avg_von_mises)
     max_val = min_max.outputs.field_max()
@@ -101,11 +103,12 @@ def compute_von_mises_then_average(analysis):


 ###############################################################################
-# Second case: computing the equivalent stresses after applying the averaging operator
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# This time, the function we are going to create will firstly apply the averaging operator
-# to the stress field in the crankshaft and only then calculate the Von Mises stresses, that
-# will be already located on a Nodal position.
+# Create the second workflow
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~
+# The second workflow computes the equivalent stresses after applying the averaging
+# operator. To create this workflow, first apply the averaging operator to the
+# stress field in the crankshaft and then calculate the von Mises stresses, which
+# are already located at a ``Nodal`` position.


 def average_then_compute_von_mises(analysis):
@@ -140,13 +143,13 @@ def average_then_compute_von_mises(analysis):


 ###############################################################################
-# Plotting the results
-# ~~~~~~~~~~~~~~~~~~~~
-# Finally, we can plot both Von Mises stress fields side
-# by side to see how they compare to each other. The first image
-# displays the results when the equivalent stresses are calculated
-# first, while the second one shows the case when the averaging is
-# done first.
+# Plot the results
+# ~~~~~~~~~~~~~~~~
+# Plot both von Mises stress fields side by side to compare them.
+#
+# - The first plot displays the results when the equivalent stresses are calculated
+#   first.
+# - The second plot shows the results when the averaging is done first.
+#

 max1 = compute_von_mises_then_average(analysis)
 max2 = average_then_compute_von_mises(analysis)
@@ -160,5 +163,6 @@ def average_then_compute_von_mises(analysis):
 the averaging is done after the calculations.".format(diff))

 ###############################################################################
-# As we can see, even though both workflows apply the same steps to the same initial data,
-# their final results are different because of the order in which the operators are applied.
\ No newline at end of file
+# Even though both workflows apply the same steps to the same initial data,
+# their final results are different because of the order in which the operators
+# are applied.
\ No newline at end of file
diff --git a/examples/08-averaging/01-average_across_bodies.py b/examples/08-averaging/01-average_across_bodies.py
index 83e9bac890..aac3749ee6 100644
--- a/examples/08-averaging/01-average_across_bodies.py
+++ b/examples/08-averaging/01-average_across_bodies.py
@@ -3,32 +3,31 @@
 Average across bodies
 ~~~~~~~~~~~~~~~~~~~~~
-This example is aimed towards explaining how to activate or deactivate the averaging
-across bodies option in DPF. When we have a multibody simulation that involves the
-calculation of ElementalNodal fields, like stresses or strains, we can either
-activate or deactivate the option of averaging theses fields across the different
-bodies when they share common nodes. This will likely change the end results that are
-displayed after the post processing of the simulation, as we will see below.
+This example shows how to activate and deactivate the DPF option for averaging
+across bodies. When a multi-body simulation calculates ``ElementalNodal`` fields,
+like stresses or strains, you can either activate or deactivate the averaging
+of these fields across the different bodies when they share common nodes. This
+likely changes the end results that are shown after postprocessing of the simulation.
 """
 ###############################################################################
-# Let's start by importing the necessary modules.
+# Perform the required imports.

 from ansys.dpf import core as dpf
 from ansys.dpf.core import operators as ops
 from ansys.dpf.core import examples

 ###############################################################################
-# Then we can load the simulation results from a .rst file and create a model of it.
+# Load the simulation results from an RST file and create a model of it.

 analysis = examples.download_piston_rod()
 model = dpf.Model(analysis)
 print(model)

 ###############################################################################
-# Now, let's take a look at our system to see how our bodies are connected to
-# each other. First, we extract the mesh of our model and then we divide it into
-# different meshes using the split_mesh operator.
+# To take a look at the system to see how bodies are connected to each other,
+# extract the mesh of the model and then divide it into different meshes
+# using the ``split_mesh`` operator.

 mesh = model.metadata.meshed_region
 split_mesh_op = ops.mesh.split_mesh(mesh=mesh, property="mat")
@@ -37,20 +36,17 @@
 meshes.plot(text="Body meshes")

 ###############################################################################
-# As we can see in the image above, even though the piston rod is one single part,
-# it's composed of two different bodies. Additionally, we can observe that the region
+# As you can see in the preceding image, even though the piston rod is a single part,
+# it is composed of two different bodies. Additionally, you can see that the region
 # where the two bodies are bonded together contains nodes that are common between them.

-###############################################################################
-# Now, let's take a look into how the averaging across bodies option alters the
-# results of a simulation.
-
 ###############################################################################
 # Averaging across bodies with DPF
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# Let's define two workflows. The first one does averaging across bodies, while the
-# second one doesn't. The variable of interest here is the stress in the Z direction,
-# which will be obtained using the "stress_Z" operator.
+# To take a look at how the option for averaging across bodies alters the results
+# of the simulation, define two workflows. The first workflow does averaging across
+# bodies, while the second workflow does not. The variable of interest is the stress
+# in the Z direction, which is obtained using the ``stress_Z`` operator.

 # %%
 # .. graphviz::
@@ -87,23 +83,22 @@

 ###############################################################################
 # Averaging across bodies activated
 # ---------------------------------
-# The extraction of the stresses in the Z direction in DPF applies by default averaging
-# across bodies. Therefore, a simple workflow like the one shown below can be used
-# in this case.
+# The extraction of the stresses in the Z direction applies averaging
+# across bodies by default. Thus, you can use a simple workflow.


 def average_across_bodies(analysis):
-    # This function will extract the stresses in the Z direction (with the average
+    # Extract the stresses in the Z direction (with the average
     # across bodies property activated) and plot them.

     # Create a model from the simulation results.
     model = dpf.Model(analysis)
     mesh = model.metadata.meshed_region

-    # We're interested in the last time set, so:
+    # Set the time set of interest to the last time set.
     time_set = 3

-    # Extracting the stresses in the Z direction. By default, DPF already applies
+    # Extract the stresses in the Z direction. By default, DPF already applies
     # averaging across bodies when extracting the stresses.
     stress_op = ops.result.stress_Z()
     stress_op.inputs.connect(model)
@@ -111,7 +106,7 @@ def average_across_bodies(analysis):
     stress_op.inputs.requested_location.connect(dpf.locations.nodal)
     stresses = stress_op.outputs.fields_container()

-    # Finding the maximum stress value
+    # Find the maximum stress value.
     min_max = dpf.operators.min_max.min_max_fc()
     min_max.inputs.fields_container.connect(stresses)
     max_val = min_max.outputs.field_max()
@@ -124,28 +119,29 @@ def average_across_bodies(analysis):

 ###############################################################################
 # Averaging across bodies deactivated
 # -----------------------------------
-# To extract the stresses without averaging across the bodies of the simulated
-# part, the workflow is a bit more complicated. So, instead of being presented
-# as a function, it will be broken into various parts with explanations of what
-# is being done.
+# The workflow is more complicated for extracting the stresses without
+# averaging across the bodies of the simulated part. Instead of presenting
+# the workflow as a function, it is broken into various parts with explanations
+# of what is being done.

 ###############################################################################
-# First, we create a model from the simulation results and extract its mesh and
-# step informations.
+# Create a model from the simulation results and extract its mesh and
+# step information.
 model = dpf.Model(analysis)
 mesh = model.metadata.meshed_region
 time_freq = model.metadata.time_freq_support
 time_sets = time_freq.time_frequencies.data.tolist()

 ###############################################################################
-# We need to split the meshes of the two bodies so we can then create separate
-# scopings for each one of them. The 'mat' label is used to split the mesh by bodies.
+# Split the meshes of the two bodies so that separate scopings can be
+# created for each one of them. The ``mat`` label is used to split the mesh
+# by bodies.

 mesh_scop_op = ops.scoping.split_on_property_type(mesh=mesh, label1="mat")
 mesh_scop_cont = mesh_scop_op.outputs.mesh_scoping()

 ###############################################################################
-# Then, as we have 3 different time steps, we need to create a ScopingsContainer
-# that contains the meshes of each one of these steps. We do so as follows:
+# Given that there are three different time steps, create a scopings container
+# that holds the scopings of each body for each of these time steps.

 scop_cont = dpf.ScopingsContainer()
 scop_cont.add_label("body")
@@ -160,15 +156,16 @@ def average_across_bodies(analysis):
 print(scop_cont)

 ###############################################################################
-# As we can see, we've got 6 different Scopings inside our ScopingsContainer, one for
-# each body over each one of the three time steps. Let's now focus our analysis on the
-# last time set:
+# The scopings container has six different scopings, one for each body over
+# each of the three time steps.
+#
+# Set the time set of interest to the last time set.

 time_set = 3

 ###############################################################################
-# Then, to retrieve the Z stresses without averaging across the two bodies, we can pass
-# a ScopingsContainer that contains their respective meshes as a parameter to the
-# stress_Z operator. To be able to do that, we need a new ScopingsContainer that contains
+# To retrieve the Z stresses without averaging across the two bodies, you must
+# pass a scopings container that contains their respective meshes as a parameter
+# to the ``stress_Z`` operator. To do this, create a scopings container that contains
 # the meshes of the two bodies in the desired time step.

 scop_list = scop_cont.get_scopings(label_space={"time": time_set})
@@ -180,11 +177,10 @@ def average_across_bodies(analysis):
     body += 1

 print(scopings)

 ###############################################################################
-# We can see that, in this container, we only have two Scopings, one for each body
-# in the last time step, as desired.
+# This container has only two scopings, one for each body in the last time step.

 ###############################################################################
-# Finally, we can extract the stresses in the Z direction.
+# Extract the stresses in the Z direction.

 stress_op = ops.result.stress_Z()
 stress_op.inputs.connect(model)
@@ -196,17 +192,17 @@ def average_across_bodies(analysis):
 stresses = stress_op.outputs.fields_container()

 print(stresses)

 ###############################################################################
-# Additionally, we can find the maximum value of the stress field for comparison purposes.
+# Find the maximum value of the stress field for comparison purposes.
 min_max = dpf.operators.min_max.min_max_fc()
 min_max.inputs.fields_container.connect(stresses)
 max_val = min_max.outputs.field_max()

 ###############################################################################
-# We can also define the workflow presented above as a function:
+# Define the preceding workflow as a function:


 def not_average_across_bodies(analysis):
-    # This function will extract the stresses in the Z direction (with the average
-    # across bodies option deactivated) and plot them.
+    # This function extracts the stresses in the Z direction (with the average
+    # across bodies option deactivated) and plots them.

     model = dpf.Model(analysis)
@@ -256,11 +252,11 @@ def not_average_across_bodies(analysis):


 ###############################################################################
-# Plotting the results
-# ~~~~~~~~~~~~~~~~~~~~
-# Finally, let's plot the results to see how they compare. In the first image, we have
-# the stress distribution when the averaging across bodies options is activated, while
-# in the second one it's deactivated.
+# Plot and compare the results
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# Plot and compare the results. The first plot shows the stress distribution
+# when averaging across bodies is activated. The second plot shows the stress
+# distribution when averaging across bodies is deactivated.

 max_avg_on = average_across_bodies(analysis)
 max_avg_off = not_average_across_bodies(analysis)
diff --git a/examples/08-averaging/README.txt b/examples/08-averaging/README.txt
index a5c7e69f69..60290c4205 100644
--- a/examples/08-averaging/README.txt
+++ b/examples/08-averaging/README.txt
@@ -1,5 +1,5 @@
-.. _averaging_examples
+.. _averaging_examples:

-Averaging Examples
+Averaging examples
 ==================
-These demos showcase the use of some of the averaging operators of DPF.
\ No newline at end of file
+These examples show how to use some of DPF's averaging operators.
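+
+These operators share a common calling pattern; as a sketch, assuming ``field``
+holds an ``ElementalNodal`` field read from a result file:
+
+.. code-block:: python
+
+    from ansys.dpf import core as dpf
+
+    # Transpose the ElementalNodal field to a Nodal location by averaging.
+    avg_op = dpf.operators.averaging.elemental_nodal_to_nodal()
+    avg_op.inputs.field.connect(field)
+    nodal_field = avg_op.outputs.field()
\ No newline at end of file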