From d92e766481b189349de45162745fa1994ba1f937 Mon Sep 17 00:00:00 2001
From: Kathy Pippert <84872299+PipKat@users.noreply.github.com>
Date: Tue, 10 Jan 2023 11:56:55 -0500
Subject: [PATCH] Edits to DPF-Core RST and TXT files (#729)
* Edits to DPF-Core RST and TXT files
* Fix typo
* Apply suggestions from code review
Incorporate Paul's comments
Co-authored-by: PProfizi <100710998+PProfizi@users.noreply.github.com>
---
docs/source/concepts/concepts.rst | 20 +--
docs/source/concepts/index.rst | 6 +-
docs/source/concepts/stepbystep.rst | 27 ++--
docs/source/concepts/waysofusing.rst | 2 +-
docs/source/contributing.rst | 8 +-
docs/source/getting_started/compatibility.rst | 12 +-
docs/source/getting_started/dependencies.rst | 8 +-
docs/source/getting_started/index.rst | 56 ++++---
docs/source/getting_started/install.rst | 14 +-
docs/source/index.rst | 9 +-
docs/source/user_guide/custom_operators.rst | 19 +--
.../user_guide/custom_operators_deps.rst | 2 +-
docs/source/user_guide/fields_container.rst | 29 ++--
.../getting_started_with_dpf_server.rst | 150 ++++++++++--------
docs/source/user_guide/how_to.rst | 2 +-
docs/source/user_guide/index.rst | 10 +-
docs/source/user_guide/main_entities.rst | 4 +-
docs/source/user_guide/model.rst | 4 +-
docs/source/user_guide/operators.rst | 23 +--
docs/source/user_guide/server_context.rst | 64 ++++----
docs/source/user_guide/server_types.rst | 70 ++++----
docs/source/user_guide/troubleshooting.rst | 8 +-
docs/source/user_guide/xmlfiles.rst | 20 +--
examples/02-modal-harmonic/README.txt | 2 +-
examples/03-advanced/README.txt | 2 +-
examples/04-file-IO/README.txt | 2 +-
examples/05-plotting/README.txt | 2 +-
27 files changed, 289 insertions(+), 286 deletions(-)
diff --git a/docs/source/concepts/concepts.rst b/docs/source/concepts/concepts.rst
index e27ee7dfca..ccfd2f9914 100644
--- a/docs/source/concepts/concepts.rst
+++ b/docs/source/concepts/concepts.rst
@@ -3,7 +3,7 @@
==================
Terms and concepts
==================
-DPF sees *fields of data*, not physical results. This makes DPF a
+DPF sees **fields of data**, not physical results. This makes DPF a
very versatile tool that can be used across teams, projects, and
simulations.
@@ -20,7 +20,7 @@ Here are descriptions for key DPF terms:
uses three different spatial locations for finite element data: ``Nodal``,
``Elemental``, and ``ElementalNodal``.
- **Operators:** Objects that are used to create and transform the data.
- An operator is composed of a *core* and *pins*. The core handles the
+ An operator is composed of a **core** and **pins**. The core handles the
calculation, and the pins provide input data to and output data from
the operator.
- **Scoping:** Spatial and/or temporal subset of a model's support.
@@ -32,8 +32,8 @@ Here are descriptions for key DPF terms:
Scoping
-------
In most cases, you do not want to work with an entire set of data
-but rather with a subset of this data. To achieve this, you define
-a *scoping*, which is a subset of the model's support.
+but rather with a subset. To achieve this, you define
+a **scoping**, which is a subset of the model's support.
Typically, scoping can represent node IDs, element IDs, time steps,
frequencies, and joints. Scoping describes a spatial and/or temporal
subset that the field is scoped on.
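To make the scoping idea concrete, here is a minimal pure-Python sketch (illustrative only, not the PyDPF-Core API): a scoping is essentially a list of entity IDs that selects a subset of a model's data.

```python
# Pure-Python sketch of the scoping concept (not the PyDPF-Core API):
# a scoping is a set of entity IDs that selects a subset of a model's data.
nodal_values = {1: 0.0, 2: 1.5, 3: 2.1, 4: 0.7}  # hypothetical values keyed by node ID
scoping_ids = [2, 4]                              # the scoping: node IDs of interest
subset = {node_id: nodal_values[node_id] for node_id in scoping_ids}
print(subset)  # {2: 1.5, 4: 0.7}
```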
@@ -41,10 +41,10 @@ subset that the field is scoped on.
Field data
----------
In DPF, field data is always associated with its scoping and support, making
-the *field* a self-describing piece of data. For example, in a field of nodal
-displacement, the *displacement* is the simulation data, and the associated
-*nodes* are the scoping. A field can also be defined by its dimensionality,
-unit of data, and *location*.
+the **field** a self-describing piece of data. For example, in a field of nodal
+displacement, the **displacement** is the simulation data, and the associated
+**nodes** are the scoping. A field can also be defined by its dimensionality,
+unit of data, and **location**.
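As an illustrative sketch only (not the actual :class:`Field` class), the self-describing nature of a field can be pictured as data bundled with its scoping, location, unit, and dimensionality; all names and values below are hypothetical.

```python
# Illustrative sketch (not the PyDPF-Core Field class): a field is
# self-describing because it bundles the data with its metadata.
field = {
    "data": [[0.1, 0.0, 0.2], [0.0, 0.3, 0.1]],  # hypothetical displacement vectors
    "scoping": [11, 12],                          # node IDs the values belong to
    "location": "Nodal",
    "unit": "m",
    "dimensionality": 3,
}
assert len(field["data"]) == len(field["scoping"])  # one value per scoped entity
```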
Location
--------
@@ -58,7 +58,7 @@ finite element data, the location is one of three spatial locations: ``Nodal``,
is identified by an ID, which is typically an element number.
- An ``ElementalNodal`` location describes data defined on the nodes of the elements.
To retrieve an elemental node, you must use the ID for the element. To achieve
- this, you define an *elemental scoping* or *nodal scoping*.
+ this, you define an elemental scoping or nodal scoping.
Concept summary
---------------
@@ -80,7 +80,7 @@ You use :ref:`ref_dpf_operators_reference` to create and transform the data. An
Workflows
---------
-You can chain operators together to create a *workflow*, which is a global entity
+You can chain operators together to create a **workflow**, which is a global entity
that you use to evaluate data produced by operators. A workflow requires inputs
to operators, which compute the requested outputs.
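The chaining idea can be sketched in plain Python (illustrative only, not the DPF operator API): each "operator" transforms data, and a workflow pipes one operator's output into the next operator's input, as in the dot-product-then-norm example above.

```python
# Sketch of the workflow idea (not the DPF API): chaining operators pipes
# one operator's output into the next operator's input.
def dot(u, v):
    # "operator" 1: component-wise product of two vector fields (hypothetical)
    return [a * b for a, b in zip(u, v)]

def norm(field):
    # "operator" 2: Euclidean norm of the resulting field
    return sum(x * x for x in field) ** 0.5

def workflow(u, v):
    # the workflow: output of dot feeds the input of norm
    return norm(dot(u, v))

print(workflow([1.0, 2.0], [3.0, 4.0]))  # sqrt(3*3 + 8*8) = sqrt(73)
```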
diff --git a/docs/source/concepts/index.rst b/docs/source/concepts/index.rst
index 8fcd7568b8..ebaaf9257a 100644
--- a/docs/source/concepts/index.rst
+++ b/docs/source/concepts/index.rst
@@ -4,11 +4,7 @@
Concepts
========
-This section gives in depth descriptions and explanations of DPF concepts, including terminology.
-
-Other sections of this guide include :ref:`ref_user_guide`, :ref:`ref_api_section`,
-:ref:`ref_dpf_operators_reference`, and :ref:`gallery`.
-
+This section provides in-depth descriptions and explanations of DPF concepts, including terminology.
DPF concepts
~~~~~~~~~~~~
diff --git a/docs/source/concepts/stepbystep.rst b/docs/source/concepts/stepbystep.rst
index 5d395897c0..1c72c53db8 100644
--- a/docs/source/concepts/stepbystep.rst
+++ b/docs/source/concepts/stepbystep.rst
@@ -22,7 +22,7 @@ Data can come from two sources:
defining where the result files are located.
- **Manual input in DPF:** You can create fields of data in DPF.
-Once you have specify data sources or manually create fields in PDF,
+Once you specify data sources or manually create fields in DPF,
you can create field containers (if applicable) and define scopings to
identify the subset of data that you want to evaluate.
@@ -31,7 +31,7 @@ Specify the data source
To evaluate the data in simulation result files, you specify the data source by defining
where the results files are located.
-This example shows how to define the data source:
+This code shows how to define the data source:
.. code-block:: python
@@ -42,7 +42,7 @@ This example shows how to define the data source:
['/tmp/file.rst']
To evaluate data files, they must be opened. To open data files, you
-define *streams*. A stream is an entity that contains the data sources.
+define **streams**. A stream is an entity that contains the data sources.
Streams keep the data files open and keep some data cached to make the next
evaluation faster. Streams are particularly convenient when using large
data files. They save time when opening and closing data files. When a stream
@@ -50,7 +50,7 @@ is released, the data files are closed.
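The stream behavior described above can be sketched with a small, hypothetical Python class (not the DPF streams API): the data file stays open while the stream lives, repeated reads hit a cache, and releasing the stream closes the file.

```python
import os
import tempfile

class Stream:
    """Hedged sketch of the stream idea (not the DPF API)."""

    def __init__(self, path):
        self._file = open(path)   # the data file stays open while the stream lives
        self._cache = None        # filled on the first evaluation

    def read_lines(self):
        if self._cache is None:                        # later calls hit the cache
            self._cache = self._file.read().splitlines()
        return self._cache

    def release(self):
        self._file.close()        # releasing the stream closes the data file

# Demo with a throwaway data file:
fd, path = tempfile.mkstemp(text=True)
with os.fdopen(fd, "w") as f:
    f.write("node 1 0.5\nnode 2 1.2\n")
s = Stream(path)
print(s.read_lines())   # read from the open file
print(s.read_lines())   # served from the cache, no re-read
s.release()
os.remove(path)
```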
Define fields
~~~~~~~~~~~~~
-A *field* is a container of simulation data. In numerical simulations,
+A **field** is a container of simulation data. In numerical simulations,
result data is defined by values associated with entities:
.. image:: ../images/drawings/values-entities.png
@@ -59,7 +59,7 @@ Therefore, a field of data might look something like this:
.. image:: ../images/drawings/field.png
-This example shows how to define a field from scratch:
+This code shows how to define a field from scratch:
.. code-block:: python
@@ -87,7 +87,7 @@ You specify the set of entities by defining a range of IDs:
You must define a scoping prior to its use in the transformation data workflow.
-This example shows how to define a mesh scoping:
+This code shows how to define a mesh scoping:
.. code-block:: python
@@ -105,7 +105,7 @@ This example shows how to define a mesh scoping:
Define field containers
~~~~~~~~~~~~~~~~~~~~~~~
-A *field container* holds a set of fields. It is used mainly for
+A **field container** holds a set of fields. It is used mainly for
transient, harmonic, modal, or multi-step analyses. This image
explains its structure:
@@ -123,7 +123,7 @@ You can define a field container in multiple ways:
- Create a field container from a CSV file.
- Convert existing fields to a field container.
-This example shows how to define a field container from scratch:
+This code shows how to define a field container from scratch:
.. code-block:: python
@@ -165,7 +165,8 @@ an output that it passes to a field or field container using an output pin.
.. image:: ../images/drawings/circuit.png
Comprehensive information on operators is available in :ref:`ref_dpf_operators_reference`.
-In the **Available Operators** area, you can either type a keyword in the **Search** option
+In the **Available Operators** area for either the **Entry** or **Premium** operators,
+you can either type a keyword in the **Search** option
or browse by operator categories:
.. image:: ../images/drawings/help-operators.png
@@ -186,7 +187,7 @@ language (IronPython, CPython, and C++).
.. image:: ../images/drawings/operator-def.png
-This example shows how to define an operator from a model:
+This code shows how to define an operator from a model:
.. code-block:: python
@@ -203,15 +204,15 @@ data transformation workflow, enabling you to perform all operations necessary
to get the result that you want.
In a workflow, the output pins of one operator can be connected to the input pins
-of another operator, allowing output data from one operator to be passed as
-input to another operator.
+of another operator, allowing the output from one operator to be passed as
+the input to another operator.
This image shows how you would get the norm of a resulting vector from the
dot product of two vectors:
.. image:: ../images/drawings/connect-operators.png
-This example shows how to define a generic workflow that computes the minimum
+This code shows how to define a generic workflow that computes the minimum
of displacement by chaining the ``U`` and ``min_max_fc`` operators:
.. code-block:: python
diff --git a/docs/source/concepts/waysofusing.rst b/docs/source/concepts/waysofusing.rst
index 22fe934094..2f6e476555 100644
--- a/docs/source/concepts/waysofusing.rst
+++ b/docs/source/concepts/waysofusing.rst
@@ -13,7 +13,7 @@ CPython
Standalone DPF uses CPython and can be accessed with any Python console.
Data can be exported to universal file formats, such as VTK, HDF5, and TXT
files. You can use it to generate TH-plots, screenshots, and animations or
-to create custom result plots using `numpy `_
+to create custom result plots using the `numpy `_
and `matplotlib `_ packages.
.. image:: ../images/drawings/dpf-reports.png
diff --git a/docs/source/contributing.rst b/docs/source/contributing.rst
index 8e521e73f7..11845ca748 100644
--- a/docs/source/contributing.rst
+++ b/docs/source/contributing.rst
@@ -7,16 +7,14 @@ Contribute
Overall guidance on contributing to a PyAnsys repository appears in
`Contribute `_
in the *PyAnsys Developer's Guide*. Ensure that you are thoroughly familiar
-with this guide, paying particular attention to `Guidelines and Best Practices
-`_, before attempting
-to contribute to PyDPF-Core.
+with this guide before attempting to contribute to PyDPF-Core.
The following contribution information is specific to PyDPF-Core.
Clone the repository
--------------------
-To clone and install the latest version of PyDPF-Core in
-development mode, run:
+Clone and install the latest version of PyDPF-Core in
+development mode by running this code:
.. code::
diff --git a/docs/source/getting_started/compatibility.rst b/docs/source/getting_started/compatibility.rst
index c63b330e5d..a05fbe8c72 100644
--- a/docs/source/getting_started/compatibility.rst
+++ b/docs/source/getting_started/compatibility.rst
@@ -65,19 +65,19 @@ should also be synchronized with the server version.
- 0.2.2
- 0.2.*
-(** compatibility of DPF 2.0 with ansys-dpf-core 0.5.0 and later is assumed but no longer certified)
+(** Compatibility of DPF 2.0 with ansys-dpf-core 0.5.0 and later is assumed but no longer certified.)
-Updating Python environment
----------------------------
+Update Python environment
+-------------------------
When moving from one Ansys release to another, you must update the ``ansys-dpf-core`` package and its dependencies.
-To get the latest version of the ``ansys-dpf-core`` package, use this code:
+To get the latest version of the ``ansys-dpf-core`` package, use this command:
.. code::
pip install --upgrade --force-reinstall ansys-dpf-core
-To get a specific version of the ``ansys-dpf-core`` package, such as 0.7.0, use this code:
+To get a specific version of the ``ansys-dpf-core`` package, such as 0.7.0, use this command:
.. code::
@@ -88,7 +88,7 @@ To get a specific version of the ``ansys-dpf-core`` package, such as 0.7.0, use
Environment variable
--------------------
-The ``start_local_server`` method uses the ``Ans.Dpf.Grpc.bat`` file or
+The ``start_local_server()`` method uses the ``Ans.Dpf.Grpc.bat`` file or
``Ans.Dpf.Grpc.sh`` file to start the server. Ensure that the ``AWP_ROOT{VER}``
environment variable is set to your installed Ansys version. For example, if Ansys
2022 R2 is installed, ensure that the ``AWP_ROOT222`` environment
diff --git a/docs/source/getting_started/dependencies.rst b/docs/source/getting_started/dependencies.rst
index f40706fa52..1e6f8f53de 100644
--- a/docs/source/getting_started/dependencies.rst
+++ b/docs/source/getting_started/dependencies.rst
@@ -7,8 +7,8 @@ Dependencies
Package dependencies
--------------------
-PyDPF-Core dependencies are automatically checked when packages are
-installed. Package dependencies follow:
+Dependencies for the ``ansys-dpf-core`` package are automatically checked when the
+package is installed. Package dependencies follow:
- `ansys.dpf.gate `_, which is the gate
to the DPF C API or Python gRPC API. The gate depends on the server configuration:
@@ -28,5 +28,5 @@ Optional dependencies
For plotting, you can install these optional Python packages:
-- `matplotlib `_ for chart plotting
-- `pyvista `_ for 3D plotting
+- `matplotlib `_ package for chart plotting
+- `pyvista `_ package for 3D plotting
diff --git a/docs/source/getting_started/index.rst b/docs/source/getting_started/index.rst
index b232327115..98443d912b 100755
--- a/docs/source/getting_started/index.rst
+++ b/docs/source/getting_started/index.rst
@@ -14,10 +14,10 @@ PyDPF-Core is a Python client API communicating with a **DPF Server**, either
through the network using gRPC or directly in the same process.
-Installing PyDPF-Core
----------------------
+Install PyDPF-Core
+------------------
-In a Python environment, run the following command to install PyDPF-Core:
+To install PyDPF-Core, in a Python environment, run this command:
.. code::
@@ -26,54 +26,52 @@ In a Python environment, run the following command to install PyDPF-Core:
For more installation options, see :ref:`Installation section `.
-Installing DPF Server
----------------------
+Install DPF Server
+------------------
-#. DPF Server is packaged within the **Ansys Unified Installer** starting with Ansys 2021 R1.
- To use it, download the standard installation using your preferred distribution channel,
- and install Ansys following the installer instructions. If you experience problems,
- see :ref:`Environment variable section `. For information on getting
- a licensed copy of Ansys, visit the `Ansys website `_.
+* DPF Server is packaged within the **Ansys installer** in Ansys 2021 R1 and later.
+ To use it, download the standard installation using your preferred distribution channel,
+ and install Ansys following the installer instructions. If you experience problems,
+ see :ref:`Environment variable `. For information on getting
+ a licensed copy of Ansys, visit the `Ansys website `_.
-#. DPF Server is available as a **standalone** package (independent of the Ansys installer) on the
- `DPF Pre-Release page of the Ansys Customer Portal `_.
- As explained in :ref:`Ansys licensing section `,
- DPF Server is protected by an Ansys license mechanism. Once you have access to an
- Ansys license, install DPF Server:
+* DPF Server is available as a **standalone** package (independent of the Ansys installer) on the
+ `DPF Pre-Release page `_ of the Ansys Customer Portal.
+ As explained in :ref:`Ansys licensing `,
+ DPF Server is protected by an Ansys license mechanism. Once you have access to an
+ Ansys license, install DPF Server:
.. card::
- * Download the ansys_dpf_server_win_v2023.2.pre0.zip or ansys_dpf_server_lin_v2023.2.pre0.zip
+ * Download the ``ansys_dpf_server_win_v2023.2.pre0.zip`` or ``ansys_dpf_server_lin_v2023.2.pre0.zip``
file as appropriate.
- * Unzip the package and go to the root folder of the unzipped package
- (ansys_dpf_server_win_v2023.2.pre0 or ansys_dpf_server_lin_v2023.2.pre0).
- * In a Python environment, run the following command:
+ * Unzip the package and go to its root folder (``ansys_dpf_server_win_v2023.2.pre0`` or
+ ``ansys_dpf_server_lin_v2023.2.pre0``).
+ * In a Python environment, run this command:
.. code::
pip install -e .
* DPF Server is protected using the license terms specified in the
- `DPFPreviewLicenseAgreement `_ file, which is available on the
- `DPF Pre-Release page of the Ansys Customer Portal `_.
- To accept these terms, you must set the
- following environment variable:
+ `DPFPreviewLicenseAgreement `_
+ file, which is available on the `DPF Pre-Release page `_
+ of the Ansys Customer Portal. To accept these terms, you must set this
+ environment variable:
.. code::
ANSYS_DPF_ACCEPT_LA=Y
-For more information about the license terms, see the :ref:`DPF Preview License Agreement`
-section.
-
-For installation methods that do not use pip, such as using **Docker containers**, see
-:ref:`ref_getting_started_with_dpf_server`.
+For more information about the license terms, see :ref:`DPF Preview License Agreement`.
+For installation methods that do not use `pip `_,
+such as using **Docker containers**, see :ref:`ref_getting_started_with_dpf_server`.
Use PyDPF-Core
--------------
-In the same Python environment, run the following command to use PyDPF-Core:
+To use PyDPF-Core, in the same Python environment, run this command:
.. code:: python
diff --git a/docs/source/getting_started/install.rst b/docs/source/getting_started/install.rst
index 2aa8fc617f..8095fd6633 100644
--- a/docs/source/getting_started/install.rst
+++ b/docs/source/getting_started/install.rst
@@ -7,10 +7,10 @@ Installation
Install using ``pip``
---------------------
-`pip `_ is the package installer for Python.
+The standard package installer for Python is `pip `_.
To use PyDPF-Core with Ansys 2021 R2 or later, install the latest version
-with:
+with this command:
.. code::
@@ -18,7 +18,7 @@ with:
To use PyDPF-Core with Ansys 2021 R1, install the latest version
-with:
+with this command:
.. code::
@@ -36,7 +36,7 @@ GitHub `_ or
Install for a quick tryout
--------------------------
-For a quick tryout, use:
+For a quick tryout, install PyDPF-Core with this code:
.. code::
@@ -61,11 +61,11 @@ development flag:
Install with plotting capabilities
----------------------------------
-PyDPF-Core plotting capabilities are based on PyVista. That means that PyVista must be installed with PyDPF-Core.
-To proceed, use:
+PyDPF-Core plotting capabilities are based on `PyVista `_.
+This means that PyVista must be installed with PyDPF-Core. To proceed, use this command:
.. code::
pip install ansys-dpf-core[plotting]
-For more information about PyDPF-Core plotting capabilities, see :ref:`ref_plotter`.
+For more information about PyDPF-Core plotting capabilities, see :ref:`user_guide_plotting`.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index f65035dfc4..f8d01f3d7f 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -18,7 +18,7 @@ or complex data-processing workflows that you can reuse for repeated or
future evaluations.
The data in DPF is defined based on physics-agnostic mathematical quantities
-described in self-sufficient entities called *fields*. This allows DPF to be
+described in self-sufficient entities called **fields**. This allows DPF to be
a modular and easy-to-use tool with a large range of capabilities.
.. image:: images/drawings/dpf-flow.png
@@ -50,7 +50,7 @@ Here is how you plot displacement results:
>>> disp = model.results.displacement().X()
>>> model.metadata.meshed_region.plot(disp.outputs.fields_container())
-For comprehensive demos, see :ref:`gallery`.
+For comprehensive examples of how you use PyDPF-Core, see :ref:`gallery`.
Key features
@@ -67,10 +67,11 @@ DPF is physics-agnostic, which means that its use is not limited to a particular
field, physics solution, or file format.
**Extensibility and customization**
+
DPF is developed around two core entities:
-- Data represented as a *field*
-- An *operator* to act upon this data
+- Data represented as a **field**
+- An **operator** to act upon this data
Each DPF capability is developed through operators that allow for componentization
of the framework. Because DPF is plugin-based, new features or formats can be easily added.
diff --git a/docs/source/user_guide/custom_operators.rst b/docs/source/user_guide/custom_operators.rst
index 6da5eeb04a..8dd4478102 100644
--- a/docs/source/user_guide/custom_operators.rst
+++ b/docs/source/user_guide/custom_operators.rst
@@ -5,7 +5,7 @@ Custom operators
================
In Ansys 2022 R2 and later, you can create custom operators in CPython. Creating custom operators
-consists of wrapping Python routines in a DPF-compliant way so that you can them in the same way
+consists of wrapping Python routines in a DPF-compliant way so that you can access them in the same way
as you access the native operators in the :class:`ansys.dpf.core.dpf_operator.Operator` class in
PyDPF-Core or in any supported client API.
@@ -24,7 +24,7 @@ With support for custom operators, PyDPF-Core becomes a development tool offerin
- **Remotable and parallel computing:** Native DPF capabilities are inherited by custom operators.
The only prerequisite for creating custom operators is to be familiar with native operators.
-For more information, see (:ref:`ref_user_guide_operators`).
+For more information, see :ref:`ref_user_guide_operators`.
Install module
--------------
@@ -34,8 +34,8 @@ installer's Python interpreter.
#. Download the script for your operating system:
- - For Windows, download this :download:`powershell script `.
- - For Linux, download this :download:`shell script `
+ - For Windows, download this :download:`PowerShell script `.
+ - For Linux, download this :download:`Shell script `.
#. Run the downloaded script for installing with optional arguments:
@@ -48,8 +48,8 @@ If you ever want to uninstall the ``ansys-dpf-core`` module from the Ansys insta
#. Download the script for your operating system:
- - For Windows, download this :download:`powershell script `.
- - For Linux, download this :download:`shell script `.
+ - For Windows, download this :download:`PowerShell script `.
+ - For Linux, download this :download:`Shell script `.
3. Run the downloaded script for uninstalling with the optional argument:
@@ -202,8 +202,8 @@ The ``requirements.txt`` file contains code like this:
The ZIP files for Windows and Linux are included as assets:
-- winx64.zip
-- linx64.zip
+- ``winx64.zip``
+- ``linx64.zip``
**custom_plugin.xml file**
@@ -247,4 +247,5 @@ Once the plugin is loaded, you can instantiate the custom operator:
References
----------
-See the API reference at :ref:`ref_custom_operator` and examples of Custom Operators implementations in :ref:`python_operators`.
+For more information, see :ref:`ref_custom_operator` in the **API reference**
+and :ref:`python_operators` in **Examples**.
diff --git a/docs/source/user_guide/custom_operators_deps.rst b/docs/source/user_guide/custom_operators_deps.rst
index 91befdc52c..37a89060fa 100644
--- a/docs/source/user_guide/custom_operators_deps.rst
+++ b/docs/source/user_guide/custom_operators_deps.rst
@@ -29,7 +29,7 @@ For this approach, do the following:
3. Run the downloaded script with the mandatory arguments:
- ``-pluginpath``: Path to the folder with the plug-in package.
- - ``-zippath``: Path and name for ZIP file.
+ - ``-zippath``: Path and name for the ZIP file.
Optional arguments are:
diff --git a/docs/source/user_guide/fields_container.rst b/docs/source/user_guide/fields_container.rst
index 7203bef123..02523fbfeb 100644
--- a/docs/source/user_guide/fields_container.rst
+++ b/docs/source/user_guide/fields_container.rst
@@ -11,7 +11,7 @@ that hold data.
Access a fields container or field
-----------------------------------
The outputs from operators can be either a
-:class:`ansys.dpf.core.fields_container.FieldsContainer` class or a
+:class:`ansys.dpf.core.fields_container.FieldsContainer` or
:class:`ansys.dpf.core.field.Field` class.
A fields container is the DPF equivalent of a list of fields. It holds a
@@ -59,8 +59,8 @@ This example uses the ``elastic_strain`` operator to access a fields container:
- field 19 {time: 20} with ElementalNodal location, 6 components and 40 entities.
-Accessing fields within a fields container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Access fields within a fields container
+---------------------------------------
Many methods are available for accessing a field in a fields
container. The preceding results contain a transient
result, which means that the fields container has one field
@@ -92,7 +92,7 @@ Access the field based on its time set ID:
field = fields.get_field_by_time_id(1)
To access fields for more complex requests, you can use the
-``get_field`` method with the ID of the requested field:
+``get_field()`` method with the ID of the requested field:
.. code-block::
@@ -183,8 +183,8 @@ indexing with ``fields[0]``, you can use zero-based indexing. When using
the ``get_fields()`` method to access results, you should base the request on
time-scoping set IDs.
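The two access styles can be pictured with a pure-Python sketch (illustrative only, not the DPF API): one field per time step, retrievable either by zero-based index or by time-scoping set ID.

```python
# Pure-Python sketch of the fields-container idea (not the DPF API):
# one field per time step, with two ways to retrieve a field.
fields = [
    {"time_set": 1, "data": [0.0, 0.1]},
    {"time_set": 2, "data": [0.2, 0.3]},
]
by_index = fields[0]                                       # zero-based indexing
by_set_id = next(f for f in fields if f["time_set"] == 2)  # time set ID lookup
print(by_index["time_set"], by_set_id["time_set"])  # 1 2
```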
-Field
------
+Field data
+----------
The :class:`ansys.dpf.core.field.Field` class is the fundamental unit of data within DPF.
It contains the actual data and its metadata, which is results data defined by values
associated with entities (scoping). These entities are a subset of a model (support).
@@ -219,7 +219,7 @@ The next section provides an overview of the metadata associated with the field
Field metadata
-~~~~~~~~~~~~~~
+--------------
A field contains the metadata for the result it is associated with. The metadata
includes the location (such as ``Elemental``, ``Nodal``, or
``ElementalNodal``) and the IDs associated with the location.
@@ -295,11 +295,8 @@ units of the data:
6
-Field data
-----------
-
Access field data
-~~~~~~~~~~~~~~~~~
+-----------------
When DPF-Core returns the :class:`ansys.dpf.core.field.Field` class,
what Python actually has is a client-side representation of the field,
not the entirety of the field itself. This means that all the data of
@@ -394,7 +391,7 @@ a field's data can be recovered locally before sending a large number of request
Operate on field data
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
Oftentimes, you do not need to directly act on the data of an array within
Python. For example, if you want to know the maximum of the data, you can
use the ``array.max()`` method to compute the maximum of the array with the
@@ -422,7 +419,7 @@ the field while returning the field:
[369, 1073, 1031, 1040, 2909, 2909]
-Here is an example of using the ``elemental_mean`` operator to compute the
+This example uses the ``elemental_mean`` operator to compute the
average of a field:
.. code-block::
@@ -445,6 +442,6 @@ average of a field:
For comprehensive information on chaining operators, see :ref:`ref_user_guide_operators`.
API reference
-~~~~~~~~~~~~~
-See the API reference at :ref:`ref_fields_container` and
-:ref:`ref_field`.
+-------------
+For more information, see :ref:`ref_fields_container` and
+:ref:`ref_field` in the **API reference**.
diff --git a/docs/source/user_guide/getting_started_with_dpf_server.rst b/docs/source/user_guide/getting_started_with_dpf_server.rst
index 0e8bd15b11..e560143f42 100644
--- a/docs/source/user_guide/getting_started_with_dpf_server.rst
+++ b/docs/source/user_guide/getting_started_with_dpf_server.rst
@@ -7,46 +7,52 @@ Getting started with DPF Server
What is DPF Server
------------------
-The Data Processing Framework (DPF) provides numerical simulation users and engineers with a toolbox for accessing and transforming
+DPF provides numerical simulation users and engineers with a toolbox for accessing and transforming
simulation data. With DPF, you can perform complex preprocessing or postprocessing of large amounts of simulation data within a
simulation workflow.
DPF Server is a package that contains all the necessary files to run the DPF Server, enabling DPF capabilities. It is available
-on the `DPF Pre-Release page of the Ansys Customer Portal `_. DPF Server first available version is 6.0 (2023 R2).
+on the `DPF Pre-Release page `_ of the Ansys Customer Portal.
+The first version of DPF Server is 6.0 (2023 R2).
-For more information about DPF and its use, see :ref:`ref_user_guide`.
+The sections on this page describe how to use DPF Server.
-The following section details how to use DPF Server package. For a quick start with DPF Server, see :ref:`ref_getting_started`.
+* For a quick start on DPF Server, see :ref:`ref_getting_started`.
+* For more information on DPF and its use, see :ref:`ref_user_guide`.
-Installing DPF Server
----------------------
+
+Install DPF Server
+------------------
.. _target_installing_server:
-#. Download the ansys_dpf_server_win_v2023.2.pre0.zip or ansys_dpf_server_lin_v2023.2.pre0.zip file as appropriate.
+#. Download the ``ansys_dpf_server_win_v2023.2.pre0.zip`` or ``ansys_dpf_server_lin_v2023.2.pre0.zip`` file as appropriate.
#. Unzip the package.
-#. Change to the root folder (ansys_dpf_server_win_v2023.2.pre0) of the unzipped package.
-#. In a Python environment, run the following command:
+#. Change to the root folder (``ansys_dpf_server_win_v2023.2.pre0``) of the unzipped package.
+#. In a Python environment, run this command:
.. code::
pip install -e .
-Using DPF Server
-----------------
+Use DPF Server
+--------------
-DPF Server use is protected using license terms. For more information, see the :ref:`DPF Preview License Agreement` section.
+DPF Server is protected using the license terms specified in the
+`DPFPreviewLicenseAgreement `_
+file, which is available on the `DPF Pre-Release page `_
+of the Ansys Customer Portal.
-Running the DPF Server with PyDPF
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Run DPF Server with PyDPF
+~~~~~~~~~~~~~~~~~~~~~~~~~
PyDPF-Core is a Python client API communicating with a **DPF Server**, either
through the network using gRPC or directly in the same process. PyDPF-Post is a Python
module for postprocessing based on PyDPF-Core.
-Both PyDPF-Core and PyDPF-Post python modules can be used with the DPF Server. The instructions to install and get started with PyDPF-Core
-can be found at `PyDPF-Core, Getting Started section `_. The instructions to install and get
-started with PyDPF-Post can be found at `PyDPF-Post, Getting Started section `_.
+Both PyDPF-Core and PyDPF-Post can be used with DPF Server. Installation instructions
+for PyDPF-Core are available in the PyDPF-Core `Getting started `_.
+Installation instructions for PyDPF-Post are available in the PyDPF-Post `Getting started `_.
With PyDPF-Core and PyDPF-Post, the first creation of most DPF entities starts a DPF Server with the current default configuration and context.
For example, the following code automatically starts a DPF Server behind the scenes:
@@ -56,7 +62,7 @@ For example, the following code automatically starts a DPF Server behind the sce
from ansys.dpf import core as dpf
data_sources = dpf.DataSources()
-With PyDPF-Core, you can also explicitly start a DPF Server using:
+With PyDPF-Core, you can also explicitly start a DPF Server using this code:
.. code::
@@ -67,18 +73,21 @@ To start a DPF Server from outside a Python environment, you can also use the ex
On Windows, start the DPF Server by running the ``Ans.Dpf.Grpc.bat`` file in the unzipped package.
On Linux, start the DPF Server by running the ``Ans.Dpf.Grpc.sh`` file in the unzipped package.
-Running DPF Server in a Docker container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Run DPF Server in a Docker container
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+DPF Server can be run in a Docker container.
-1. Along with the ansys_dpf_server_lin_v2023.2.pre0.zip archive mentioned in :ref:`Installing DPF Server `, download the ``Dockerfile``.
-2. Copy both the archive and ``Dockerfile`` in a folder and navigate into that folder.
-3. To build the DPF Docker container, run the following commands:
+#. Along with the ``ansys_dpf_server_lin_v2023.2.pre0.zip`` file mentioned earlier
+ in :ref:`Install DPF Server `, download the ``Dockerfile`` file.
+#. Copy both the ZIP file and ``Dockerfile`` file to a folder and navigate into that folder.
+#. To build the DPF Docker container, run the following command:
.. code::
docker build . -t dpf-core:v2023_2_pre0 --build-arg DPF_VERSION=232 --build-arg DPF_SERVER_FILE=ansys_dpf_server_lin_v2023.2.pre0.zip
-4. To run the DPF Docker container, see the :ref:`DPF Preview License Agreement` section.
+#. To run the DPF Docker container, license it. For more information, see
+   :ref:`DPF Preview License Agreement`.
License terms
-------------
@@ -88,29 +97,31 @@ License terms
DPF Preview License Agreement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-DPF Server use is protected using license terms specified in the `DPFPreviewLicenseAgreement `_ file that
-can be found on the `DPF Pre-Release page of the Ansys Customer Portal `_.
-``DPFPreviewLicenseAgreement`` is a text file and can be opened with a text editor, such as notepad.
+DPF Server is protected using license terms specified in the `DPFPreviewLicenseAgreement `_
+file that can be found on the `DPF Pre-Release page `_
+of the Ansys Customer Portal. The ``DPFPreviewLicenseAgreement`` file is a text file, which means that you can
+open it with a text editor, such as Notepad.
-To accept the DPF User Licensing Agreement terms, the following environment variable must be set:
+To accept the terms of this license agreement, you must set the following environment variable:
.. code::
ANSYS_DPF_ACCEPT_LA=Y
-``ANSYS_DPF_ACCEPT_LA`` confirms your acceptance of the DPF User Licensing Agreement. By passing the value ``Y`` to the environment variable
-``ANSYS_DPF_ACCEPT_LA``, you are indicating that you have a valid and existing license for the edition and version of DPF Server you intend to use.
+The ``ANSYS_DPF_ACCEPT_LA`` environment variable confirms your acceptance of the DPF License Agreement.
+By passing the value ``Y`` to this environment variable, you are indicating that you have a valid and
+existing license for the edition and version of DPF Server that you intend to use.
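If you start the server from Python, you can also set this variable programmatically before any DPF entity is created. A minimal sketch (setting the variable here is equivalent to exporting it in the shell):

```python
import os

# Confirm acceptance of the DPF Preview License Agreement for this process.
# This must happen before a DPF Server is started.
os.environ["ANSYS_DPF_ACCEPT_LA"] = "Y"

print(os.environ["ANSYS_DPF_ACCEPT_LA"])  # → Y
```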
-For a DPF Docker container usage, it can be set using:
+For DPF Docker container usage only, you can use the following code to set both the ``ANSYS_DPF_ACCEPT_LA``
+and ``ANSYSLMD_LICENSE_FILE`` environment variables. For the ``ANSYSLMD_LICENSE_FILE`` environment variable,
+ensure that you replace ```` so that it points to the Ansys license server.
.. code::
docker run -e "ANSYS_DPF_ACCEPT_LA=Y" -e ANSYSLMD_LICENSE_FILE=1055@ -p 50052:50052 -e DOCKER_SERVER_PORT=50052 --expose=50052 dpf-core:v2023_2_pre0
-For any other case, set "ANSYS_DPF_ACCEPT_LA" as an environment variable with "Y" value.
-
-Replace "" mention that ANSYSLMD_LICENSE_FILE environment variable points to the Ansys license server.
-For more information about Ansys license mechanism use with DPF Server, see :ref:`Ansys licensing` section.
+The next section, :ref:`Ansys licensing`, provides information on
+the Ansys license mechanism that is used with DPF Server.
.. _target_to_ansys_license_mechanism:
@@ -118,14 +129,16 @@ For more information about Ansys license mechanism use with DPF Server, see :ref
Ansys licensing
~~~~~~~~~~~~~~~
-DPF Server is protected by Ansys licensing mechanism.
+DPF Server is protected by an Ansys licensing mechanism.
DPF capabilities are available through the following main contexts:
-- Entry: Loads the minimum number of plugins for basic use. It is the default. Checks if at least one increment exists
- from the following :ref:`Ansys licensing increments list`. This increment won't be blocked.
-- Premium: Loads the Entry and the Premium capabilities that require a license checkout. Blocks an increment from the
- following :ref:`Ansys licensing increments list`.
+- **Entry:** Loads the minimum number of plugins for basic use. This context, which is the default,
+ checks if at least one increment exists from the :ref:`Ansys licensing increments list`,
+ which follows. This increment won't be blocked.
+- **Premium:** Loads the **Entry** and the **Premium** capabilities that require a license checkout.
+ This context blocks an increment from the :ref:`Ansys licensing increments list`,
+ which follows.
To update the context, apply a new server context:
@@ -135,31 +148,32 @@ To update the context, apply a new server context:
.. _target_to_ansys_license_increments_list:
-The following Ansys licensing increments currently provide rights to use DPF Server:
-
-- ``preppost`` available in ``Ansys Mechanical Enterprise PrepPost`` product
-- ``meba`` available in ``ANSYS Mechanical Enterprise Solver`` product
-- ``mech_2`` available in ``ANSYS Mechanical Premium`` product
-- ``mech_1`` available in ``ANSYS Mechanical Pro`` product
-- ``ansys`` available in ``ANSYS Mechanical Enterprise`` product
-- ``dynapp`` available in ``ANSYS LS-DYNA PrepPost`` product
-- ``vmotion`` available in ``Ansys Motion`` product
-- ``acpreppost`` available in ``Ansys Mechanical Enterprise`` product
-- ``acdi_adprepost`` available in ``Ansys AUTODYN`` and ``Ansys AUTODYN PrepPost`` products
-- ``cfd_preppost`` available in ``Ansys CFD Enterprise`` product
-- ``cfd_preppost_pro`` available in ``Ansys CFD Enterprise`` product
-- ``vmotion_post`` available in ``Ansys Motion Post`` product
-- ``vmotion_pre`` available in ``Ansys Motion Pre`` product
-- ``advanced_meshing`` available in ``Ansys CFD Enterprise`` product
-- ``fluent_meshing_pro`` available in ``Ansys CFD Enterprise`` product
-- ``fluent_setup_post`` available in ``Ansys CFD Enterprise`` product
-- ``fluent_setup_post_pro`` available in ``Ansys CFD Enterprise`` product
-- ``acfx_pre`` available in ``Ansys CFD Enterprise`` product
-- ``cfd_base`` available in ``Ansys CFD Enterprise`` product
-- ``cfd_solve_level1`` available in ``Ansys CFD Enterprise`` product
-- ``cfd_solve_level2`` available in ``Ansys CFD Enterprise`` product
-- ``cfd_solve_level3`` available in ``Ansys CFD Enterprise`` product
-- ``fluent_meshing`` available in ``Ansys CFD Enterprise`` product
-
-Each increment may be available in other products. The product/increment mapping can be found in the
-`Licensing section of the Ansys Customer Portal `_.
\ No newline at end of file
+The following Ansys licensing increments provide rights to use DPF Server:
+
+- ``preppost`` available in the ``Ansys Mechanical Enterprise PrepPost`` product
+- ``meba`` available in the ``ANSYS Mechanical Enterprise Solver`` product
+- ``mech_2`` available in the ``ANSYS Mechanical Premium`` product
+- ``mech_1`` available in the ``ANSYS Mechanical Pro`` product
+- ``ansys`` available in the ``ANSYS Mechanical Enterprise`` product
+- ``dynapp`` available in the ``ANSYS LS-DYNA PrepPost`` product
+- ``vmotion`` available in the ``Ansys Motion`` product
+- ``acpreppost`` available in the ``Ansys Mechanical Enterprise`` product
+- ``acdi_adprepost`` available in the ``Ansys AUTODYN`` and ``Ansys AUTODYN PrepPost`` products
+- ``cfd_preppost`` available in the ``Ansys CFD Enterprise`` product
+- ``cfd_preppost_pro`` available in the ``Ansys CFD Enterprise`` product
+- ``vmotion_post`` available in the ``Ansys Motion Post`` product
+- ``vmotion_pre`` available in the ``Ansys Motion Pre`` product
+- ``advanced_meshing`` available in the ``Ansys CFD Enterprise`` product
+- ``fluent_meshing_pro`` available in the ``Ansys CFD Enterprise`` product
+- ``fluent_setup_post`` available in the ``Ansys CFD Enterprise`` product
+- ``fluent_setup_post_pro`` available in the ``Ansys CFD Enterprise`` product
+- ``acfx_pre`` available in the ``Ansys CFD Enterprise`` product
+- ``cfd_base`` available in the ``Ansys CFD Enterprise`` product
+- ``cfd_solve_level1`` available in the ``Ansys CFD Enterprise`` product
+- ``cfd_solve_level2`` available in the ``Ansys CFD Enterprise`` product
+- ``cfd_solve_level3`` available in the ``Ansys CFD Enterprise`` product
+- ``fluent_meshing`` available in the ``Ansys CFD Enterprise`` product
+
+Each increment may be available in other products. On the Ansys Customer Portal,
+the `Licensing section `_
+provides product/increment mapping.
\ No newline at end of file
diff --git a/docs/source/user_guide/how_to.rst b/docs/source/user_guide/how_to.rst
index 2b981d0d22..c121b8bafe 100644
--- a/docs/source/user_guide/how_to.rst
+++ b/docs/source/user_guide/how_to.rst
@@ -26,7 +26,7 @@ How-tos
.. image:: ../images/plotting/pontoon_strain.png
- .. card:: Create Custom Operators
+ .. card:: Create custom operators
:link: user_guide_custom_operators
:link-type: ref
:width: 25%
diff --git a/docs/source/user_guide/index.rst b/docs/source/user_guide/index.rst
index 57795aaab3..65b8cf0a28 100644
--- a/docs/source/user_guide/index.rst
+++ b/docs/source/user_guide/index.rst
@@ -4,17 +4,15 @@
User guide
==========
-PyDPF-Core is a Python client API for accessing DPF (Data Processing Framework)
-postprocessing capabilities. The ``ansys.dpf.core`` package makes highly efficient
+PyDPF-Core is a Python client API for accessing DPF postprocessing
+capabilities. The ``ansys.dpf.core`` package makes highly efficient
computation, customization, and remote postprocessing accessible in Python.
-This section has the following goals:
+The goals of this section are to:
- Describe the most-used DPF entities and how they can help you to access and modify solver data.
- - Provide simple how-tos for tackling most common use cases.
+ - Provide simple how-tos for tackling the most common use cases.
-Other sections of this guide include :ref:`ref_concepts`, :ref:`ref_api_section`,
-:ref:`ref_dpf_operators_reference`, and :ref:`gallery`.
.. include::
main_entities.rst
diff --git a/docs/source/user_guide/main_entities.rst b/docs/source/user_guide/main_entities.rst
index bb3b6f0f4d..b98991d0f0 100644
--- a/docs/source/user_guide/main_entities.rst
+++ b/docs/source/user_guide/main_entities.rst
@@ -12,7 +12,7 @@ DPF most-used entities
.. card-carousel:: 2
- .. card:: DPF Model
+ .. card:: DPF model
:link: user_guide_model
:link-type: ref
:width: 25%
@@ -20,7 +20,7 @@ DPF most-used entities
.. image:: ../images/drawings/model.png
- .. card:: Fields Container and Fields
+ .. card:: Fields container and fields
:link: ref_user_guide_fields_container
:link-type: ref
:width: 25%
diff --git a/docs/source/user_guide/model.rst b/docs/source/user_guide/model.rst
index 5049880f88..69f2928028 100644
--- a/docs/source/user_guide/model.rst
+++ b/docs/source/user_guide/model.rst
@@ -77,7 +77,7 @@ To access all information about an analysis, you can use model metadata:
- Mesh
- Available results
-This example shows you get the analysis type:
+This example shows how you get the analysis type:
.. code-block:: default
@@ -127,7 +127,7 @@ This example shows how you get time sets:
[1.]
-For a description of the ```Metadata``` object, see :ref:`ref_model`.
+For a description of the ``Metadata`` object, see :ref:`ref_model`.
Model results
-------------
diff --git a/docs/source/user_guide/operators.rst b/docs/source/user_guide/operators.rst
index 795cbcd104..7c7e14dc2a 100644
--- a/docs/source/user_guide/operators.rst
+++ b/docs/source/user_guide/operators.rst
@@ -57,8 +57,8 @@ Create operators
~~~~~~~~~~~~~~~~
Each operator is of type :ref:`ref_operator`. You can create an instance
in Python with any of the derived classes available in the
-package :ref:`ref_operators_package` or directly with the class :ref:`ref_operator`
-using the internal name string that indicates the operator type.
+package :ref:`ref_operators_package` or directly with the :ref:`ref_operator`
+class using the internal name string that indicates the operator type.
For more information, see :ref:`ref_dpf_operators_reference`.
This example shows how to create the displacement operator:
@@ -97,7 +97,7 @@ operator by printing it:
Alternatively, you can instantiate result providers using the ``Model`` object.
For more information, see :ref:`user_guide_model`.
-When using this model's result usage, file paths for the results are directly
+When using this model's results, file paths for the results are directly
connected to the operator, which means that you can only instantiate
available results for your result files:
@@ -113,7 +113,7 @@ available results for your result files:
Connect operators
~~~~~~~~~~~~~~~~~
-The only required input for the displacement operator is ``data_sources`` (see above).
+The only required input for the displacement operator is the ``data_sources`` input (see above).
To compute an output in the ``fields_container`` object, which contains the displacement
results, you must provide paths for the result files.
@@ -200,7 +200,7 @@ like this one:
DPFServerException: U<-Data sources are not defined.
-For more information on using the fields container, see :ref:`ref_user_guide_fields_container`.
+For more information on using a fields container, see :ref:`ref_user_guide_fields_container`.
Chain operators
@@ -260,8 +260,8 @@ On an industrial model, however, you should use code like this:
In the preceding example, only the maximum displacements in the X, Y, and Z
components are transferred and returned as a numpy array.
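To make the comparison concrete, here is what the client-side NumPy computation looks like on a small array (the values are illustrative, not from a real result file):

```python
import numpy as np

# A small displacement-like array: one row per node, columns are X, Y, Z.
disp = np.array([[0.1, -0.2, 0.05],
                 [0.3,  0.1, -0.4]])

# Maximum of each component, computed client-side after transferring the data.
max_per_component = disp.max(axis=0)
print(max_per_component)  # maxima of the X, Y, and Z columns
```

For large results, the same reduction is better done server-side with a DPF min/max operator so that only a few values cross the network.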
-For small data sets, you can compute the maximum of the array in NumpPy.
-While there might be times where having the entire data array for a given
+For small data sets, you can compute the maximum of the array in `NumPy `_.
+While there may be times where having the entire data array for a given
result type is necessary, many times it is not necessary. In these
cases, it is faster not to transfer the array to Python but rather to
compute the maximum of the fields container within DPF and then return
@@ -311,10 +311,10 @@ These operators provide for reading data from solver files or from standard file
- For Abaqus, ODB files are supported.
To read these files, different readers are implemented as plugins.
-Plugins can be loaded on demand in any DPF scripting language with the "load library" methods.
+Plugins can be loaded on demand in any DPF scripting language with "load library" methods.
File readers can be used generically thanks to the DPF result providers, which means that the same operators can be used for any file types.
-This example shows how to read a displacement or a stress for any file:
+This example shows how to read a displacement and a stress for any file:
.. code-block:: python
@@ -395,7 +395,8 @@ to export the results in a given format to either use them in another
environment or save them for future use with DPF. Supported file formats
for export include VTK, H5, CSV, and TXT (serializer operator). Export
operators often match with import operators, allowing you to reuse data.
-In :ref:`ref_dpf_operators_reference`, the **Serialization** category
+In :ref:`ref_dpf_operators_reference`, both the **Entry**
+and **Premium** sections have a **Serialization** category that
displays available import and export operators.
@@ -441,5 +442,5 @@ Python client is not on the same machine as the server:
API reference
~~~~~~~~~~~~~
For a list of all operators in DPF, see :ref:`ref_dpf_operators_reference`
-or :ref:`ref_operators_package`. For more information about the
+or the package :ref:`ref_operators_package`. For more information about the
class itself, see :ref:`ref_operator`.
diff --git a/docs/source/user_guide/server_context.rst b/docs/source/user_guide/server_context.rst
index e7c0471e22..72193da932 100644
--- a/docs/source/user_guide/server_context.rst
+++ b/docs/source/user_guide/server_context.rst
@@ -4,36 +4,31 @@
Server context
==============
-What is server context
-----------------------
-
The :class:`ServerContext ` class drives the
default capabilities a server starts with.
The server context is composed of the following information:
- ``context_type``, a :class:`LicensingContextType `
- class object that defines if a License checkout is required or not.
-- the ``xml_path`` that sets DPF default operators capabilities.
+ class object that defines whether a license checkout is required.
+- ``xml_path``, which sets DPF default operator capabilities.
-For more information,
-see :class:`AvailableServerContexts `
-and :ref:`user_guide_xmlfiles`.
+For more information, see the :class:`AvailableServerContexts `
+class and :ref:`user_guide_xmlfiles`.
Two main licensing context type capabilities are available:
-- Entry (default): Loads the minimum capabilities without requiring any license checkout.
-- Premium: Enables the Entry capabilities and the capabilities that require a license checkout.
- More operators are available.
+- **Entry:** This context, which is the default, loads the minimum capabilities without requiring any license checkout.
+- **Premium:** This context enables **Entry** capabilities and the capabilities that require a license checkout, making
+ more operators available.
-The operators list for each licensing context type is available at
-:ref:`ref_dpf_operators_reference`.
+For the operator list for each licensing context type, see :ref:`ref_dpf_operators_reference`.
-Getting started with Entry capabilities
----------------------------------------
+Entry capabilities
+------------------
-Find the list of operators available when the context is Entry at :ref:`ref_dpf_operators_reference`.
-This won't check out any license.
+The following code finds the list of operators available when the :ref:`ref_dpf_operators_reference` context
+is **Entry**. This context won't check out any license.
.. code-block::
@@ -47,11 +42,11 @@ This won't check out any license.
Server Context of type LicensingContextType.entry with no xml path
-Getting started with Premium capabilities
------------------------------------------
+Premium capabilities
+--------------------
-Find the list of operators available when the context is Premium at :ref:`ref_dpf_operators_reference`.
-This checks out a license.
+The following code finds the list of operators available when the :ref:`ref_dpf_operators_reference`
+context is **Premium**. This context checks out a license.
.. code-block::
@@ -68,10 +63,11 @@ This checks out a license.
Server Context of type LicensingContextType.premium with no xml path
-Changing server context from Entry to Premium
----------------------------------------------
+Change server context from Entry to Premium
+-------------------------------------------
-Once an Entry server is started, it can be upgraded to Premium:
+Once a DPF Server is started in **Entry** context, it can be upgraded to the
+**Premium** context:
.. code-block::
@@ -99,11 +95,13 @@ Once an Entry server is started, it can be upgraded to Premium:
Server Context of type LicensingContextType.premium with no xml path
-Changing the default server context
------------------------------------
+Change the default server context
+---------------------------------
-Entry is the default server context. This can be changed either using the ANSYS_DPF_SERVER_CONTEXT
-environment variable (see ``) or writing:
+The default context for the server is **Entry**. You can change the context using
+the ``ANSYS_DPF_SERVER_CONTEXT`` environment variable (see ``). You can also
+change the server context with this code:
.. code-block::
@@ -121,9 +119,9 @@ environment variable (see ``) or writing:
Release history
---------------
-The Entry server context is available starting with server version 6.0
-(Ansys 2023 R2).
+The **Entry** server context is available in server version 6.0
+(Ansys 2023 R2) and later.
-With a server version lower than 6.0, Premium is the default server
-context and all the Premium operators at :ref:`ref_dpf_operators_reference`
-are available (depending only on their release date).
\ No newline at end of file
+With a server version earlier than 6.0, **Premium** is the default server
+context and all **Premium** operators at :ref:`ref_dpf_operators_reference`
+are available, depending only on their release date.
\ No newline at end of file
diff --git a/docs/source/user_guide/server_types.rst b/docs/source/user_guide/server_types.rst
index 3d4ab04ab2..8ee493a7bd 100644
--- a/docs/source/user_guide/server_types.rst
+++ b/docs/source/user_guide/server_types.rst
@@ -9,17 +9,17 @@ Terminology
DPF is based on a **client-server** architecture.
-The DPF Server is a set of files that enables DPF capabilities.
+A DPF Server is a set of files that enables DPF capabilities.
-PyDPF-Core is a Python client API communicating with a DPF Server, either through
-the network using **gRPC** **or** directly **in** the same **process**.
+PyDPF-Core is a Python client API communicating with a DPF Server, either
+directly **in the same process** or through the network using **gRPC**.
-Getting started with DPF in process server
-------------------------------------------
+DPF Server in the same process
+------------------------------
Default use of a PyDPF-Core client and a DPF Server is in the same process,
-using :class:`InProcess ` class.
+using the :class:`InProcess ` class.
.. code-block::
@@ -33,7 +33,7 @@ using :class:`InProcess ` class.
-This server can now be used to instantiate Models, Operators, and so on.
+This DPF Server can now be used to instantiate models, operators, and more.
.. code-block::
@@ -45,10 +45,11 @@ This server can now be used to instantiate Models, Operators, and so on.
local_model = dpf.Model(examples.find_simple_bar(), server=local_server)
-Getting started with DPF GRPC server
-------------------------------------
+DPF Server through the network using gRPC
+-----------------------------------------
-GRPC communication is enabled using :class:`GrpcServer `.
+The :class:`GrpcServer ` class is used
+to enable gRPC communication:
.. code-block::
@@ -75,7 +76,7 @@ You can obtain the server port and IP address:
DPF Server: {'server_ip': '127.0.0.1', 'server_port': 50052, 'server_process_id': 9999, 'server_version': '6.0', 'os': 'nt'}
-From a another machine, you can connect remotely to this server and instantiate Models, Operators, and so on:
+From another machine, you can connect remotely to this DPF Server and instantiate models, operators, and more:
.. code-block::
@@ -89,15 +90,16 @@ From a another machine, you can connect remotely to this server and instantiate
from ansys.dpf.core import examples
remote_model = dpf.Model(examples.find_simple_bar(), server=grpc_remote_server)
-GRPC server use also enables distributed computation capabilities. To learn more about
-distributed computation with DPF, see :ref:`distributed_post`.
+Through the network using gRPC, a DPF Server enables distributed computation capabilities.
+For more information, see :ref:`distributed_post`.
-Starting a server using a configuration
----------------------------------------
+DPF Server startup using a configuration
+----------------------------------------
The different DPF server types can be started using one of the
-:class:`AvailableServerConfigs ` configurations.
+:class:`AvailableServerConfigs `
+configurations.
.. code-block::
@@ -117,26 +119,24 @@ Advanced concepts and release history
The communication logic with a DPF server is defined when starting it using
an instance of the :class:`ServerConfig ` class.
Different predefined server configurations are available in DPF,
-each answering a different use-case
-(See the :class:`AvailableServerConfigs ` class).
-
-- The :class:`GrpcServer ` configuration is available starting
- with server version 4.0 (Ansys 2022 R2).
- It allows you to remotely connect to a DPF server across a network by telling the client
- to communicate with this server via the gRPC communication protocol.
- Although it can be used to communicate with a DPF server running on the same local machine, the next configuration is better for this option.
-- The :class:`InProcessServer ` configuration is available starting
- with server version 4.0 (Ansys 2022 R2).
- It indicates to the client that a DPF server is installed on the local machine, enabling direct
- calls to the server binaries from within the client's own Python process.
- This removes the need to copy and send data between the client and server, and makes calls
- to the server functionalities much faster as well as using less memory.
-- The :class:`LegacyGrpcServer ` configuration is the only one
- available for server versions below 4.0
- (Ansys 2022 R1, Ansys 2021 R2 and Ansys 2021 R1).
+each answering a different use case. For more information, see the
+:class:`AvailableServerConfigs ` class.
+
+- The :class:`GrpcServer ` configuration is available in
+ server version 4.0 (Ansys 2022 R2) and later. It allows you to remotely connect to a DPF server
+ across a network by telling the client to communicate with this server via the gRPC communication protocol.
+ Although it can be used to communicate with a DPF server running on the same local machine, the next
+ configuration is better for this option.
+- The :class:`InProcessServer ` configuration is available
+ in server version 4.0 (Ansys 2022 R2) and later. It indicates to the client that a DPF server is
+ installed on the local machine, enabling direct calls to the server binaries from within the client's
+ own Python process. This removes the need to copy and send data between the client and server, and it
+ makes calls to the server functionalities much faster and uses less memory.
+- The :class:`LegacyGrpcServer ` configuration is
+ the only one available for server versions earlier than 4.0 (Ansys 2022 R1, 2021 R2, and 2021 R1).
The client communicates with a local or remote DPF server via the gRPC communication protocol.
-For DPF with Ansys 2023 R1 and newer, the default configuration is set to :class:`InProcess `,
-meaning that servers are launched on the local machine.
+For DPF with Ansys 2023 R1 and later, :class:`InProcessServer `
+is the default configuration, which means that servers are launched on the local machine.
To launch a DPF server on a remote machine and communicate with it using gRPC, use
the :class:`GrpcServer ` configuration as shown in :ref:`ref_server_types_example`.
diff --git a/docs/source/user_guide/troubleshooting.rst b/docs/source/user_guide/troubleshooting.rst
index 94d1fccb1e..576d5922ee 100644
--- a/docs/source/user_guide/troubleshooting.rst
+++ b/docs/source/user_guide/troubleshooting.rst
@@ -13,10 +13,10 @@ Start the DPF server
~~~~~~~~~~~~~~~~~~~~~
When using PyDPF-Core to start the server with the
:py:meth:`start_local_server() ` method
-or when starting the server manually with the ``Ans.Dpf.Grpc.sh``or ``Ans.Dpf.Grpc.bat``
+or when starting the server manually with the ``Ans.Dpf.Grpc.sh`` or ``Ans.Dpf.Grpc.bat``
file, a Python error might occur: ``TimeoutError: Server did not start in 10 seconds``.
This kind of error might mean that the server or its dependencies were not found. Ensure that
-the environment variable ``AWP_ROOT{VER}`` is set, where ``VER``is the three-digit numeric
+the ``AWP_ROOT{VER}`` environment variable is set, where ``VER`` is the three-digit numeric
format for the version, such as ``221`` or ``222``.
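A quick way to check this from Python is a small helper like the following sketch (the helper name and version strings are illustrative):

```python
import os

def awp_root(version: str):
    """Return the value of the AWP_ROOT{version} environment variable, or None."""
    return os.environ.get(f"AWP_ROOT{version}")

# Report which of the example versions are configured on this machine.
for ver in ("221", "222"):
    root = awp_root(ver)
    print(f"AWP_ROOT{ver}: {root if root else 'not set'}")
```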
Connect to the DPF server
@@ -75,13 +75,13 @@ When trying to plot a result with DPF, the following error might be raised:
ModuleNotFoundError: No module named 'pyvista'
-In that case, simply install PyVista with:
+In that case, simply install `PyVista `_ with this command:
.. code-block:: default
pip install pyvista
-Another option is to install PyVista along PyDPF-Core. For more information, see
+Another option is to install PyVista along with PyDPF-Core. For more information, see
:ref:`Install with plotting capabilities`
Performance issues
diff --git a/docs/source/user_guide/xmlfiles.rst b/docs/source/user_guide/xmlfiles.rst
index 9d7bbe402c..d9a3028d58 100644
--- a/docs/source/user_guide/xmlfiles.rst
+++ b/docs/source/user_guide/xmlfiles.rst
@@ -3,8 +3,8 @@
=============
DPF XML files
=============
-This page describes the ``DataProcessingCore.xml``and ``Plugin.xml`` XML files
-provided with the DPF software. These XML files work on both Linux and Windows
+This page describes the ``DataProcessingCore.xml`` and ``Plugin.xml`` XML files
+provided with DPF. These XML files work on both Linux and Windows
because they contain content for both of these operating systems.
These XML files must be located alongside the plugin DLL files on Windows or
@@ -63,7 +63,7 @@ In this XML file, some of the elements are optional, and many of the
elements have Linux-specific versus Windows-specific child elements.
.. caution::
- To ensure that the DPF software operates correctly, modify this XML file
+ To ensure that DPF operates correctly, modify this XML file
carefully. All paths specified in this XML file must adhere to the path
conventions of the respective operating system. For Linux paths, use
forward slashes (/). For Windows paths, use backward slashes (\\).
@@ -77,8 +77,8 @@ define the root folders for Ansys software installed on Linux and on Windows.
The path for the root folder ends with Ansys version information, ``v###``,
where ``###`` is the three-digit format for the installed version. For example,
-on Windows, the path for the root folder for Ansys 2022 R2likely looks something
-like ``C:\Program Files\ANSYS Inc\v222``.
+on Windows, the path for the root folder for Ansys 2022 R2 likely ends with
+``\ANSYS Inc\v222``.
The ``ANSYS_ROOT_FOLDER`` element is like an environment variable. You can use
this element in other XML files. For example, you might use it to find required
@@ -97,16 +97,16 @@ location.
```` element
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ```` element defines the plugins to load. The ```` or
-``Windows`` child element contains the operating system for plugins defined
-in child elements.
+```` child element contains the operating system for the plugins defined
+in the child elements.
-The ``native`` element defines DPF native operators. The further subdividing of
+The ```` element defines DPF native operators. The further subdividing of
plugins into ```` or ```` elements is optional. The ````
element, for example, would only be used with a debug version of the
``DataProcessingCore DLL/SO`` file.
The element names for plugins, such as ```` and ````, are used as
-*keys* when loading plugins. Each plugin must have a unique key.
+**keys** when loading plugins. Each plugin must have a unique key.
The element for each plug-in has child elements:
@@ -132,7 +132,7 @@ plugins in a folder defined by a ``MY_PLUGINS`` environment variable, you could
it in the XML file.
You specify environment variables in the same way as the ``ANSYS_ROOT_FOLDER``
-or ``THIS_XML_FOLDER`` variable. They are defined as $(…).
+or ``THIS_XML_FOLDER`` variable. They are defined as ``$(…)``.
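For illustration, a path that uses such a variable might look like the following fragment. The element name and folder layout here are hypothetical; only the ``$(…)`` expansion syntax is taken from this page:

```xml
<!-- MY_PLUGINS is a user-defined environment variable, expanded with the
     same $(...) syntax as ANSYS_ROOT_FOLDER or THIS_XML_FOLDER. -->
<Path>$(MY_PLUGINS)/my_plugin/Plugin.xml</Path>
```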
In the Ansys installation, the default ``DataProcessingCore.xml`` file is located
next to the ``DataProcessingCore`` DLL or SO file. If you want to use a different
diff --git a/examples/02-modal-harmonic/README.txt b/examples/02-modal-harmonic/README.txt
index a1c27c4963..e2b8ab520d 100644
--- a/examples/02-modal-harmonic/README.txt
+++ b/examples/02-modal-harmonic/README.txt
@@ -2,5 +2,5 @@
Harmonic analysis examples
===========================
-These examples show how to use DPF to extract and manipulate,
+These examples show how to use DPF to extract and manipulate
results from harmonic or modal analyses.
diff --git a/examples/03-advanced/README.txt b/examples/03-advanced/README.txt
index 2fb814604c..b6426329d4 100644
--- a/examples/03-advanced/README.txt
+++ b/examples/03-advanced/README.txt
@@ -2,4 +2,4 @@
Advanced and miscellaneous examples
===================================
-These demos show advanced use cases demonstrating high level of workflow customization
+These examples show advanced use cases that demonstrate a high level of workflow customization.
diff --git a/examples/04-file-IO/README.txt b/examples/04-file-IO/README.txt
index 7b3f51fc2b..304a41827a 100644
--- a/examples/04-file-IO/README.txt
+++ b/examples/04-file-IO/README.txt
@@ -3,4 +3,4 @@
File manipulation and input-output examples
===========================================
These examples show how to manipulate files,
-import or export from or to specific formats.
+as well as how to import from or export to specific formats.
diff --git a/examples/05-plotting/README.txt b/examples/05-plotting/README.txt
index 941a88e4d0..687c142bcb 100644
--- a/examples/05-plotting/README.txt
+++ b/examples/05-plotting/README.txt
@@ -2,4 +2,4 @@
Plotting examples
=================
-These examples show how to use the ``DpfPlotter`` module.
\ No newline at end of file
+These examples show how to use the :class:`ansys.dpf.core.plotter.DpfPlotter` class.
\ No newline at end of file