MAINT: improve examples (#609)
* Improve 00-basic_example.py

* Improve 01-basic_operators.py

* Improve 02-basic_field_containers.py

* Improve 03-create_entities.py

* Improve 04-basic-load-file.py

* Improve 07-use_result_helpers

* Improve 08-results_over_time_subset

* Improve 09-results_over_space_subset

* Improve 11-server_types

* Improve 13-nodes_in_local_coordinate_system

* Improve 00-basic_transient

* Improve 01-transient_easy_time_scoping

* Improve plots in 04-basic-load-file

* Improve 01-modal_cyclic

* Improve 02-cyclic_multi_stage

* Improve 03-compare_modes

* Improve 03-exchange_data_between_servers

* Improve 04-extrapolation_stress_3d

* 05-extrapolation_strain_2d

* Improve 06-stress_gradient_path

* Improve 00-hdf5_double_float_comparison

* Improve 02-solution_combination

* Improve 04-plot_on_path

* Improve 01-package_python_operators

* Improve 02-python_operators_with_dependencies

* Fix style check

* Apply suggestions from code review

Co-authored-by: Maxime Rey <87315832+MaxJPRey@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Maxime Rey <87315832+MaxJPRey@users.noreply.github.com>

* Fix codacy by removing f-string

* Fix codacy adding shell=True when running subprocess

* Fix codacy, change shell=False in subprocess run

* Fix codacy, try to ignore subprocess run line

* Fix codacy, ignore subprocess run

* Resolve Codacy security warning for subprocess.run

Co-authored-by: Maxime Rey <87315832+MaxJPRey@users.noreply.github.com>
Co-authored-by: paul.profizi <paul.profizi@ansys.com>
3 people authored Nov 17, 2022
1 parent e313898 commit fbe7103
Showing 24 changed files with 371 additions and 224 deletions.
11 changes: 6 additions & 5 deletions examples/00-basic/00-basic_example.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_basic_example:
@@ -6,11 +7,11 @@
This example shows how to open a result file and do some
basic postprocessing.
If you have Ansys 2021 R1 installed, starting DPF is quite easy
If you have Ansys 2021 R1 or higher installed, starting DPF is quite easy
as DPF-Core takes care of launching all the services that
are required for postprocessing Ansys files.
First, import the DPF-Core module as ``dpf_core`` and import the
First, import the DPF-Core module as ``dpf`` and import the
included examples file.
@@ -21,8 +22,8 @@

###############################################################################
# Next, open an example and print out the ``model`` object. The
# ``Model`` class helps to organize access methods for the result by
# keeping track of the operators and data sources used by the result
# :class:`Model <ansys.dpf.core.model.Model>` class helps to organize access methods
# for the result by keeping track of the operators and data sources used by the result
# file.
#
# Printing the model displays:
@@ -35,7 +36,7 @@
# Also, note that the first time you create a DPF object, Python
# automatically attempts to start the server in the background. If you
# want to connect to an existing server (either local or remote), use
# :func:`dpf.connect_to_server`.
# :func:`ansys.dpf.core.connect_to_server`.

model = dpf.Model(examples.find_simple_bar())
print(model)
5 changes: 3 additions & 2 deletions examples/00-basic/01-basic_operators.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_basic_operators_example:
@@ -48,12 +49,12 @@
# Connect to the data sources of the model.
disp_op.inputs.data_sources.connect(model.metadata.data_sources)

# Create a field container norm operator and connect it to the
# Create a fields container norm operator and connect it to the
# displacement operator to chain the operators.
norm_op = dpf.Operator("norm_fc")
norm_op.inputs.connect(disp_op.outputs)

# Create a field container min/max operator and connect it to the
# Create a fields container min/max operator and connect it to the
# output of the norm operator.
mm_op = dpf.Operator("min_max_fc")
mm_op.inputs.connect(norm_op.outputs)
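Outside DPF, the computation this operator chain performs — the Euclidean norm of each displacement vector (``norm_fc``), then the minimum and maximum over the resulting field (``min_max_fc``) — can be sketched with plain NumPy. The displacement values below are made up for illustration:

```python
import numpy as np

# Hypothetical displacement data: one 3-component vector per node.
disp = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 2.0, 2.0],
    [3.0, 0.0, 4.0],
])

# Equivalent of the "norm_fc" operator: Euclidean norm per entity.
norm = np.linalg.norm(disp, axis=1)

# Equivalent of the "min_max_fc" operator: min and max over the field.
print(norm.min(), norm.max())  # 0.0 5.0
```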
5 changes: 3 additions & 2 deletions examples/00-basic/02-basic_field_containers.py
@@ -1,13 +1,14 @@
# noqa: D400
"""
.. _ref_basic_field_example:
Field and field containers overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In DPF, the field is the main simulation data container. During a numerical
simulation, result data is defined by values associated to entities
simulation, the result data is defined by values associated to entities
(scoping). These entities are a subset of a model (support).
Because field data is always associated to its scoping and support,
Because the field data is always associated to its scoping and support,
the field is a self-describing piece of data. A field is also
defined by its parameters, such as dimensionality, unit, and location.
For example, a field can describe any of the following:
64 changes: 33 additions & 31 deletions examples/00-basic/03-create_entities.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_create_entities_example:
@@ -27,11 +28,12 @@


def search_sequence_numpy(arr, seq):
"""Find a sequence in an array and return its index."""
indexes = np.where(np.isclose(arr, seq[0]))
for index in np.nditer(indexes[0]):
if index % 3 == 0:
if np.allclose(arr[index + 1], seq[1]) and np.allclose(
arr[index + 2], seq[2]
arr[index + 2], seq[2]
):
return index
return -1
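As a self-contained check of the helper above, it can be exercised on a small flattened x-y-z array; the coordinate values here are made up. The function only accepts matches that start at a node boundary (``index % 3 == 0``), which is why a sequence that straddles two nodes returns ``-1``:

```python
import numpy as np

def search_sequence_numpy(arr, seq):
    """Find a 3-value sequence starting at a node boundary (index % 3 == 0)."""
    indexes = np.where(np.isclose(arr, seq[0]))
    for index in np.nditer(indexes[0]):
        if index % 3 == 0:
            if np.allclose(arr[index + 1], seq[1]) and np.allclose(
                arr[index + 2], seq[2]
            ):
                return int(index)
    return -1

# Flattened coordinates of three nodes: (0,0,0), (1,0,0), (1,1,0).
coords = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
print(search_sequence_numpy(coords, [1.0, 0.0, 0.0]))  # 3 (start of node 2)
print(search_sequence_numpy(coords, [0.0, 1.0, 0.0]))  # -1 (not node-aligned)
```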
@@ -41,22 +43,22 @@ def search_sequence_numpy(arr, seq):
# Add nodes:
n_id = 1
for i, x in enumerate(
[
float(i) * length / float(num_nodes_in_length)
for i in range(0, num_nodes_in_length)
]
[
float(i) * length / float(num_nodes_in_length)
for i in range(0, num_nodes_in_length)
]
):
for j, y in enumerate(
[
float(i) * width / float(num_nodes_in_width)
for i in range(0, num_nodes_in_width)
]
[
float(i) * width / float(num_nodes_in_width)
for i in range(0, num_nodes_in_width)
]
):
for k, z in enumerate(
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(0, num_nodes_in_depth)
]
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(0, num_nodes_in_depth)
]
):
mesh.nodes.add_node(n_id, [x, y, z])
n_id += 1
@@ -77,22 +79,22 @@ def search_sequence_numpy(arr, seq):
# Add solid elements (linear hexa with eight nodes):
e_id = 1
for i, x in enumerate(
[
float(i) * length / float(num_nodes_in_length)
for i in range(0, num_nodes_in_length - 1)
]
[
float(i) * length / float(num_nodes_in_length)
for i in range(num_nodes_in_length - 1)
]
):
for j, y in enumerate(
[
float(i) * width / float(num_nodes_in_width)
for i in range(0, num_nodes_in_width - 1)
]
[
float(i) * width / float(num_nodes_in_width)
for i in range(num_nodes_in_width - 1)
]
):
for k, z in enumerate(
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(0, num_nodes_in_depth - 1)
]
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(num_nodes_in_depth - 1)
]
):
coord1 = np.array([x, y, z])
connectivity = []
@@ -117,12 +119,12 @@ def search_sequence_numpy(arr, seq):

###############################################################################
# Create displacement fields over time with three time sets.
# Here the displacement on each node is the value of its x, y, and
# z coordinates for time 1.
# The displacement on each node is two times the value of its x, y,
# and z coordinates for time 2.
# The displacement on each node is three times the value of its x,
# y, and z coordinates for time 3.
# For the first time set, the displacement on each node is the
# value of its x, y, and z coordinates.
# For the second time set, the displacement on each node is two
# times the value of its x, y, and z coordinates.
# For the third time set, the displacement on each node is three
# times the value of its x, y, and z coordinates.
num_nodes = mesh.nodes.n_nodes
time1_array = coordinates_data
time2_array = 2.0 * coordinates_data
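With plain NumPy and a made-up coordinates array, the three time sets described above reduce to scalar multiples of the node coordinates — a minimal sketch, not the DPF field-building code itself:

```python
import numpy as np

# Hypothetical node coordinates (one x-y-z triplet per node).
coordinates_data = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.25]])

# One displacement array per time set: 1x, 2x, and 3x the coordinates.
time_arrays = [float(t) * coordinates_data for t in (1, 2, 3)]

# At time set 3, node 2 is displaced by three times its coordinates.
print(time_arrays[2][1])
```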
29 changes: 28 additions & 1 deletion examples/00-basic/04-basic-load-file.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_basic_load_file_example:
@@ -87,9 +88,35 @@
###############################################################################
# Make operations over the imported fields container
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Use this fields container:
# Use this fields container to get the minimum displacement:

min_max_op = dpf.operators.min_max.min_max_fc()
min_max_op.inputs.fields_container.connect(downloaded_fc_out)
min_field = min_max_op.outputs.field_min()
min_field.data

###############################################################################
# Compare the original and the downloaded fields container
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Subtract the two fields and plot an error map:
abs_error = (fc_out - downloaded_fc_out).eval()

divide = dpf.operators.math.component_wise_divide()
divide.inputs.fieldA.connect(fc_out - downloaded_fc_out)
divide.inputs.fieldB.connect(fc_out)
scale = dpf.operators.math.scale()
scale.inputs.field.connect(divide)
scale.inputs.ponderation.connect(100.)
rel_error = scale.eval()

###############################################################################
# Plot both absolute and relative error fields
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Note that the absolute error is larger where the displacements are
# larger, at the tip of the geometry.
# In contrast, the relative error is similar across the geometry because
# it is normalized by the displacements ``fc_out``.
# Both plots show errors that are effectively zero given machine precision
# (1e-12 mm for the absolute error and 1e-5% for the relative error).
mesh.plot(abs_error, scalar_bar_args={'title': "Absolute error [mm]"})
mesh.plot(rel_error, scalar_bar_args={'title': "Relative error [%]"})
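The error measures computed by the operator chain above reduce to elementary array arithmetic: a subtraction for the absolute error, and a component-wise divide scaled by 100 (mirroring ``component_wise_divide`` plus ``scale``) for the relative error in percent. A NumPy sketch with made-up data:

```python
import numpy as np

# Hypothetical original and round-tripped displacement data.
fc_out = np.array([1.0, 2.0, 4.0])
downloaded_fc_out = np.array([1.0, 2.0, 4.0]) + 1e-12  # tiny round-trip error

# Absolute error: plain subtraction.
abs_error = fc_out - downloaded_fc_out

# Relative error in percent: component-wise divide, then scale by 100.
rel_error = (fc_out - downloaded_fc_out) / fc_out * 100.0

# Both stay at machine-precision level for a lossless round trip.
print(np.abs(abs_error).max(), np.abs(rel_error).max())
```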
4 changes: 3 additions & 1 deletion examples/00-basic/07-use_result_helpers.py
@@ -1,9 +1,11 @@
# noqa: D400
"""
.. _ref_use_result_helpers:
Use result helpers to load custom data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``Result`` class, which is an instance created by the ``Model``, gives
The :class:`Result <ansys.dpf.core.results.Result>` class, instances of which
are created by the :class:`Model <ansys.dpf.core.model.Model>`, gives
access to helpers for requesting results on specific mesh and time scopings.
With these helpers, working on a custom spatial and temporal subset of the
model is straightforward.
4 changes: 3 additions & 1 deletion examples/00-basic/08-results_over_time_subset.py
@@ -1,9 +1,11 @@
# noqa: D400
"""
.. _ref_results_over_time:
Scope results over custom time domains
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``Result`` class, which are instances created by the ``Model``, give
Instances of the :class:`Result <ansys.dpf.core.results.Result>` class,
created by the :class:`Model <ansys.dpf.core.model.Model>`, give
access to helpers for requesting results on specific mesh and time scopings.
With these helpers, working on a temporal subset of the
model is straightforward. In this example, different ways to choose the temporal subset to
14 changes: 8 additions & 6 deletions examples/00-basic/09-results_over_space_subset.py
@@ -1,9 +1,11 @@
# noqa: D400
"""
.. _ref_results_over_space:
Scope results over custom space domains
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``Result`` class, which are instances created by the ``Model``, give
Instances of the :class:`Result <ansys.dpf.core.results.Result>` class,
created by the :class:`Model <ansys.dpf.core.model.Model>`, give
access to helpers for requesting results on specific mesh and time scopings.
With these helpers, working on a spatial subset of the model is straightforward.
In this example, different ways to choose the spatial subset to
@@ -90,7 +92,7 @@
###############################################################################
# Get the ``mesh_scoping`` of a named selection:

mesh_scoping = model.metadata.named_selection('_CM82')
mesh_scoping = model.metadata.named_selection("_CM82")
print(mesh_scoping)

###############################################################################
@@ -100,13 +102,13 @@

###############################################################################
# Equivalent to:
volume = model.results.elemental_volume.on_named_selection('_CM82')
volume = model.results.elemental_volume.on_named_selection("_CM82")

###############################################################################
# Equivalent to:
ns_provider = dpf.operators.scoping.on_named_selection(
requested_location=dpf.locations.elemental,
named_selection_name='_CM82',
named_selection_name="_CM82",
data_sources=model,
)
volume = model.results.elemental_volume(mesh_scoping=ns_provider).eval()
@@ -161,8 +163,8 @@

###############################################################################
elemental_stress = model.results.stress.on_location(dpf.locations.elemental)(
mesh_scoping=scopings_container) \
.eval()
mesh_scoping=scopings_container
).eval()
print(elemental_stress)

for field in elemental_stress:
25 changes: 12 additions & 13 deletions examples/00-basic/11-server_types.py
@@ -1,14 +1,15 @@
# noqa: D400
"""
.. _ref_server_types_example:
Communicate in process or via gRPC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Starting with Ansys 2022 R2, PyDPF can communication either In Process or via gRPC
Starting with Ansys 2022 R2, PyDPF can communicate either In Process or via gRPC
with DPF C++ core server (``Ans.Dpf.Grpc.exe``). To choose which type of
:class:`ansys.dpf.core.server_types.BaseServer` (object defining the type of communication
and the server instance to communicate with) to use, a
:class:`ansys.dpf.core.server_factory.ServerConfig` class should be used.
Until Ansys 2022R1, only gRPC communication using python module ansys.grpc.dpf is supported
Until Ansys 2022 R1, only gRPC communication using the Python module ``ansys.grpc.dpf`` is supported
(now called :class:`ansys.dpf.core.server_types.LegacyGrpcServer`), starting with Ansys 2022 R2,
three types of servers are supported:
@@ -45,9 +46,7 @@
###############################################################################
# Equivalent to:

in_process_config = dpf.ServerConfig(
protocol=None, legacy=False
)
in_process_config = dpf.ServerConfig(protocol=None, legacy=False)
grpc_config = dpf.ServerConfig(
protocol=dpf.server_factory.CommunicationProtocols.gRPC, legacy=False
)
@@ -64,14 +63,14 @@
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

in_process_field = dpf.fields_factory.create_scalar_field(2, server=in_process_server)
in_process_field.append([1.], 1)
in_process_field.append([2.], 2)
in_process_field.append([1.0], 1)
in_process_field.append([2.0], 2)
grpc_field = dpf.fields_factory.create_scalar_field(2, server=grpc_server)
grpc_field.append([1.], 1)
grpc_field.append([2.], 2)
grpc_field.append([1.0], 1)
grpc_field.append([2.0], 2)
legacy_grpc_field = dpf.fields_factory.create_scalar_field(2, server=legacy_grpc_server)
legacy_grpc_field.append([1.], 1)
legacy_grpc_field.append([2.], 2)
legacy_grpc_field.append([1.0], 1)
legacy_grpc_field.append([2.0], 2)

print(in_process_field, type(in_process_field._server), in_process_field._server)
print(grpc_field, type(grpc_field._server), grpc_field._server)
@@ -87,8 +86,8 @@

dpf.SERVER_CONFIGURATION = dpf.AvailableServerConfigs.GrpcServer
grpc_field = dpf.fields_factory.create_scalar_field(2)
grpc_field.append([1.], 1)
grpc_field.append([2.], 2)
grpc_field.append([1.0], 1)
grpc_field.append([2.0], 2)
print(grpc_field, type(grpc_field._server), grpc_field._server)

# Go back to default config:
