MAINT: improve examples #609

Merged Nov 17, 2022
Changes from all commits (34 commits)
a032135
Improve 00-basic_example.py
GuillemBarroso Nov 7, 2022
5aab8a8
Improve 01-basic_operators.py
GuillemBarroso Nov 7, 2022
711f5fc
Improve 02-basic_field_containers.py
GuillemBarroso Nov 7, 2022
66c2f7d
Improve 03-create_entities.py
GuillemBarroso Nov 8, 2022
238ce6a
Improve 04-basic-load-file.py
GuillemBarroso Nov 8, 2022
e662172
Improve 07-use_result_helpers
GuillemBarroso Nov 8, 2022
50ce5fb
Improve 08-results_over_time_subset
GuillemBarroso Nov 8, 2022
b6aab4f
Improve 09-results_over_space_subset
GuillemBarroso Nov 8, 2022
8e9e969
Improve 11-server_types
GuillemBarroso Nov 8, 2022
7eb4239
Improve 13-nodes_in_local_coordinate_system
GuillemBarroso Nov 8, 2022
8ff7971
Improve 00-basic_transient
GuillemBarroso Nov 8, 2022
2fe9df9
Improve 01-transient_easy_time_scoping
GuillemBarroso Nov 8, 2022
fc9d260
Improve plots in 04-basic-load-file
GuillemBarroso Nov 8, 2022
ef2acd9
Improve 01-modal_cyclic
GuillemBarroso Nov 9, 2022
cec0ee9
Improve 02-cyclic_multi_stage
GuillemBarroso Nov 9, 2022
295dc2c
Improve 03-compare_modes
GuillemBarroso Nov 9, 2022
4960301
Improve 03-exchange_data_between_servers
GuillemBarroso Nov 9, 2022
f2ee538
Improve 04-extrapolation_stress_3d
GuillemBarroso Nov 9, 2022
0308db8
05-extrapolation_strain_2d
GuillemBarroso Nov 9, 2022
dd329f1
Improve 06-stress_gradient_path
GuillemBarroso Nov 9, 2022
f4c80ab
Improve 00-hdf5_double_float_comparison
GuillemBarroso Nov 9, 2022
479e413
Improve 02-solution_combination
GuillemBarroso Nov 9, 2022
d5f5ee4
Improve 04-plot_on_path
GuillemBarroso Nov 9, 2022
d71df8f
Improve 01-package_python_operators
GuillemBarroso Nov 9, 2022
2c8e963
Improve 02-python_operators_with_dependencies
GuillemBarroso Nov 9, 2022
5cb27e5
Fix style check
GuillemBarroso Nov 9, 2022
583c462
Apply suggestions from code review
GuillemBarroso Nov 10, 2022
26e22b9
Apply suggestions from code review
GuillemBarroso Nov 11, 2022
bc09f2b
Fix codacy by removing f-string
GuillemBarroso Nov 16, 2022
cef3bed
Fix codacy adding shell=True when running subprocess
GuillemBarroso Nov 16, 2022
14a624b
Fix codacy, change shell=False in subprocess run
GuillemBarroso Nov 16, 2022
0bd5e45
Fix codacy, try to ignore subprocess run line
GuillemBarroso Nov 16, 2022
560bd83
Fix codacy, ignore subprocess run
GuillemBarroso Nov 16, 2022
7fac551
Resolve Codacy security warning for subprocess.run
PProfizi Nov 16, 2022
11 changes: 6 additions & 5 deletions examples/00-basic/00-basic_example.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_basic_example:

@@ -6,11 +7,11 @@
This example shows how to open a result file and do some
basic postprocessing.

If you have Ansys 2021 R1 installed, starting DPF is quite easy
If you have Ansys 2021 R1 or higher installed, starting DPF is quite easy
as DPF-Core takes care of launching all the services that
are required for postprocessing Ansys files.

First, import the DPF-Core module as ``dpf_core`` and import the
First, import the DPF-Core module as ``dpf`` and import the
included examples file.


@@ -21,8 +22,8 @@

###############################################################################
# Next, open an example and print out the ``model`` object. The
# ``Model`` class helps to organize access methods for the result by
# keeping track of the operators and data sources used by the result
# :class:`Model <ansys.dpf.core.model.Model>` class helps to organize access methods
# for the result by keeping track of the operators and data sources used by the result
# file.
#
# Printing the model displays:
@@ -35,7 +36,7 @@
# Also, note that the first time you create a DPF object, Python
# automatically attempts to start the server in the background. If you
# want to connect to an existing server (either local or remote), use
# :func:`dpf.connect_to_server`.
# :func:`ansys.dpf.core.connect_to_server`.

model = dpf.Model(examples.find_simple_bar())
print(model)
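For reference, a minimal sketch of the connection pattern mentioned above, assuming a DPF server is already running locally (the IP and port below are illustrative defaults):

from ansys.dpf import core as dpf
from ansys.dpf.core import examples

# Connect to an existing server instead of letting DPF auto-start one.
server = dpf.connect_to_server(ip="127.0.0.1", port=50054)

# Open the bundled result file on that server and inspect the model.
model = dpf.Model(examples.find_simple_bar(), server=server)
print(model)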
5 changes: 3 additions & 2 deletions examples/00-basic/01-basic_operators.py
@@ -1,3 +1,4 @@
# noqa: D400
Review comment (Contributor):
@GuillemBarroso What is that skipping for?

Reply (Contributor Author):
D400 indicates that the first line should end with a period, see doc.

@PProfizi, as we discussed, I will add the rule of ignoring D400 for the examples in the next PR, when I update the layout of the repo and I add the pre-commit.

"""
.. _ref_basic_operators_example:

@@ -48,12 +49,12 @@
# Connect to the data sources of the model.
disp_op.inputs.data_sources.connect(model.metadata.data_sources)

# Create a field container norm operator and connect it to the
# Create a fields container norm operator and connect it to the
# displacement operator to chain the operators.
norm_op = dpf.Operator("norm_fc")
norm_op.inputs.connect(disp_op.outputs)

# Create a field container min/max operator and connect it to the
# Create a fields container min/max operator and connect it to the
# output of the norm operator.
mm_op = dpf.Operator("min_max_fc")
mm_op.inputs.connect(norm_op.outputs)
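Closing the chain described above, a minimal sketch (reusing the ``mm_op`` operator defined in this example) of evaluating the workflow and reading the extreme values of the displacement norm:

# The min_max_fc operator exposes one field per extreme.
min_field = mm_op.outputs.field_min()
max_field = mm_op.outputs.field_max()
print(min_field.data)
print(max_field.data)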
5 changes: 3 additions & 2 deletions examples/00-basic/02-basic_field_containers.py
@@ -1,13 +1,14 @@
# noqa: D400
"""
.. _ref_basic_field_example:

Field and field containers overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In DPF, the field is the main simulation data container. During a numerical
simulation, result data is defined by values associated to entities
simulation, the result data is defined by values associated to entities
(scoping). These entities are a subset of a model (support).

Because field data is always associated to its scoping and support,
Because the field data is always associated to its scoping and support,
the field is a self-describing piece of data. A field is also
defined by its parameters, such as dimensionality, unit, and location.
For example, a field can describe any of the following:
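As a quick illustration of this self-describing behavior, a minimal sketch (assuming the simple bar example file) of inspecting a field's parameters:

from ansys.dpf import core as dpf
from ansys.dpf.core import examples

model = dpf.Model(examples.find_simple_bar())
disp_fc = model.results.displacement().eval()  # fields container with one field
field = disp_fc[0]

# The field carries its own location, unit, scoping, and data.
print(field.location)         # for example "Nodal"
print(field.unit)             # for example "m"
print(field.scoping.ids[:5])  # entity IDs the values are associated with
print(field.data.shape)       # (number of nodes, 3) for a 3D vector field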
64 changes: 33 additions & 31 deletions examples/00-basic/03-create_entities.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_create_entities_example:

@@ -27,11 +28,12 @@


def search_sequence_numpy(arr, seq):
"""Find a sequence in an array and return its index."""
indexes = np.where(np.isclose(arr, seq[0]))
for index in np.nditer(indexes[0]):
if index % 3 == 0:
if np.allclose(arr[index + 1], seq[1]) and np.allclose(
arr[index + 2], seq[2]
arr[index + 2], seq[2]
):
return index
return -1
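For instance, with a flattened coordinates array, the helper returns the start index of the matching x, y, z triplet (an illustrative check, with made-up coordinates):

coords = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])
assert search_sequence_numpy(coords, np.array([1.0, 0.0, 0.0])) == 3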
@@ -41,22 +43,22 @@ def search_sequence_numpy(arr, seq):
# Add nodes:
n_id = 1
for i, x in enumerate(
[
float(i) * length / float(num_nodes_in_length)
for i in range(0, num_nodes_in_length)
]
[
float(i) * length / float(num_nodes_in_length)
for i in range(0, num_nodes_in_length)
]
):
for j, y in enumerate(
[
float(i) * width / float(num_nodes_in_width)
for i in range(0, num_nodes_in_width)
]
[
float(i) * width / float(num_nodes_in_width)
for i in range(0, num_nodes_in_width)
]
):
for k, z in enumerate(
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(0, num_nodes_in_depth)
]
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(0, num_nodes_in_depth)
]
):
mesh.nodes.add_node(n_id, [x, y, z])
n_id += 1
@@ -77,22 +79,22 @@ def search_sequence_numpy(arr, seq):
# Add solid elements (linear hexa with eight nodes):
e_id = 1
for i, x in enumerate(
[
float(i) * length / float(num_nodes_in_length)
for i in range(0, num_nodes_in_length - 1)
]
[
float(i) * length / float(num_nodes_in_length)
for i in range(num_nodes_in_length - 1)
]
):
for j, y in enumerate(
[
float(i) * width / float(num_nodes_in_width)
for i in range(0, num_nodes_in_width - 1)
]
[
float(i) * width / float(num_nodes_in_width)
for i in range(num_nodes_in_width - 1)
]
):
for k, z in enumerate(
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(0, num_nodes_in_depth - 1)
]
[
float(i) * depth / float(num_nodes_in_depth)
for i in range(num_nodes_in_depth - 1)
]
):
coord1 = np.array([x, y, z])
connectivity = []
@@ -117,12 +119,12 @@ def search_sequence_numpy(arr, seq):

###############################################################################
# Create displacement fields over time with three time sets.
# Here the displacement on each node is the value of its x, y, and
# z coordinates for time 1.
# The displacement on each node is two times the value of its x, y,
# and z coordinates for time 2.
# The displacement on each node is three times the value of its x,
# y, and z coordinates for time 3.
# For the first time set, the displacement on each node is the
# value of its x, y, and z coordinates.
# For the second time set, the displacement on each node is two
# times the value of its x, y, and z coordinates.
# For the third time set, the displacement on each node is three
# times the value of its x, y, and z coordinates.
num_nodes = mesh.nodes.n_nodes
time1_array = coordinates_data
time2_array = 2.0 * coordinates_data
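One way to complete the idea sketched above, assuming ``time3_array = 3.0 * coordinates_data`` and that the ``fields_factory`` and ``fields_container_factory`` helpers accept the data in this form, is to wrap the three arrays into fields and group them by time value:

time3_array = 3.0 * coordinates_data

fields = [
    dpf.fields_factory.field_from_array(arr)
    for arr in (time1_array, time2_array, time3_array)
]
# Map each field to its time value (1.0 s, 2.0 s, and 3.0 s here).
fields_container = dpf.fields_container_factory.over_time_freq_fields_container(
    {1.0: fields[0], 2.0: fields[1], 3.0: fields[2]}
)
print(fields_container)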
29 changes: 28 additions & 1 deletion examples/00-basic/04-basic-load-file.py
@@ -1,3 +1,4 @@
# noqa: D400
"""
.. _ref_basic_load_file_example:

@@ -87,9 +88,35 @@
###############################################################################
# Make operations over the imported fields container
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Use this fields container:
# Use this fields container to get the minimum displacement:

min_max_op = dpf.operators.min_max.min_max_fc()
min_max_op.inputs.fields_container.connect(downloaded_fc_out)
min_field = min_max_op.outputs.field_min()
min_field.data

###############################################################################
# Compare the original and the downloaded fields container
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Subtract the two fields and plot an error map:
abs_error = (fc_out - downloaded_fc_out).eval()

divide = dpf.operators.math.component_wise_divide()
divide.inputs.fieldA.connect(fc_out - downloaded_fc_out)
divide.inputs.fieldB.connect(fc_out)
scale = dpf.operators.math.scale()
scale.inputs.field.connect(divide)
scale.inputs.ponderation.connect(100.)
rel_error = scale.eval()

###############################################################################
# Plot both absolute and relative error fields
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Note that the absolute error is bigger where the displacements are
# bigger, at the tip of the geometry.
# Instead, the relative error is similar across the geometry since we
# are dividing by the displacements ``fc_out``.
# Both plots show errors that can be understood as zero due to machine precision
# (1e-12 mm for the absolute error and 1e-5% for the relative error).
mesh.plot(abs_error, scalar_bar_args={'title': "Absolute error [mm]"})
mesh.plot(rel_error, scalar_bar_args={'title': "Relative error [%]"})
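To back the statement that both error fields are effectively zero, a minimal sketch (reusing the ``abs_error`` container computed above) of querying its extreme values:

min_max_err = dpf.operators.min_max.min_max_fc()
min_max_err.inputs.fields_container.connect(abs_error)
# Expected to be on the order of machine precision (around 1e-12 mm here).
print(min_max_err.outputs.field_max().data)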
4 changes: 3 additions & 1 deletion examples/00-basic/07-use_result_helpers.py
@@ -1,9 +1,11 @@
# noqa: D400
"""
.. _ref_use_result_helpers:

Use result helpers to load custom data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``Result`` class, which is an instance created by the ``Model``, gives
The :class:`Result <ansys.dpf.core.results.Result>` class, which is an instance
created by the :class:`Model <ansys.dpf.core.model.Model>`, gives
access to helpers for requesting results on specific mesh and time scopings.
With these helpers, working on a custom spatial and temporal subset of the
model is straightforward.
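For context, a minimal sketch (assuming the model opened in this example; the named selection below is illustrative) of the kind of helper calls the text refers to:

# Request displacements on the last time set only ...
disp_last = model.results.displacement.on_last_time_freq().eval()

# ... or restrict the result to a named selection defined in the result file.
disp_ns = model.results.displacement.on_named_selection("SELECTION").eval()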
4 changes: 3 additions & 1 deletion examples/00-basic/08-results_over_time_subset.py
@@ -1,9 +1,11 @@
# noqa: D400
"""
.. _ref_results_over_time:

Scope results over custom time domains
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``Result`` class, which are instances created by the ``Model``, give
The :class:`Result <ansys.dpf.core.results.Result>` class, which are instances
created by the :class:`Model <ansys.dpf.core.model.Model>`, give
access to helpers for requesting results on specific mesh and time scopings.
With these helpers, working on a temporal subset of the
model is straightforward. In this example, different ways to choose the temporal subset to
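As a reminder of what a temporal subset looks like in practice, a minimal sketch (assuming a transient ``model`` with at least five time sets) of two equivalent ways of scoping by time sets:

# Using the Result helper directly with a list of time set IDs ...
disp_sets = model.results.displacement.on_time_scoping([1, 2, 5]).eval()

# ... or building an explicit time scoping first.
time_scoping = dpf.Scoping(ids=[1, 2, 5], location=dpf.locations.time_freq)
disp_scoped = model.results.displacement(time_scoping=time_scoping).eval()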
14 changes: 8 additions & 6 deletions examples/00-basic/09-results_over_space_subset.py
@@ -1,9 +1,11 @@
# noqa: D400
"""
.. _ref_results_over_space:

Scope results over custom space domains
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``Result`` class, which are instances created by the ``Model``, give
The :class:`Result <ansys.dpf.core.results.Result>` class, which are instances
created by the :class:`Model <ansys.dpf.core.model.Model>`, give
access to helpers for requesting results on specific mesh and time scopings.
With these helpers, working on a spatial subset of the model is straightforward.
In this example, different ways to choose the spatial subset to
@@ -90,7 +92,7 @@
###############################################################################
# Get the ``mesh_scoping`` of a named selection:

mesh_scoping = model.metadata.named_selection('_CM82')
mesh_scoping = model.metadata.named_selection("_CM82")
print(mesh_scoping)

###############################################################################
@@ -100,13 +102,13 @@

###############################################################################
# Equivalent to:
volume = model.results.elemental_volume.on_named_selection('_CM82')
volume = model.results.elemental_volume.on_named_selection("_CM82")

###############################################################################
# Equivalent to:
ns_provider = dpf.operators.scoping.on_named_selection(
requested_location=dpf.locations.elemental,
named_selection_name='_CM82',
named_selection_name="_CM82",
data_sources=model,
)
volume = model.results.elemental_volume(mesh_scoping=ns_provider).eval()
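Alongside the named-selection route shown above, a minimal sketch (the node IDs are illustrative) of scoping a result with an explicit ``Scoping`` object:

nodes_scoping = dpf.Scoping(ids=[1, 2, 3], location=dpf.locations.nodal)
disp_on_nodes = model.results.displacement(mesh_scoping=nodes_scoping).eval()
print(disp_on_nodes[0].data)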
@@ -161,8 +163,8 @@

###############################################################################
elemental_stress = model.results.stress.on_location(dpf.locations.elemental)(
mesh_scoping=scopings_container) \
.eval()
mesh_scoping=scopings_container
).eval()
print(elemental_stress)

for field in elemental_stress:
25 changes: 12 additions & 13 deletions examples/00-basic/11-server_types.py
@@ -1,14 +1,15 @@
# noqa: D400
"""
.. _ref_server_types_example:

Communicate in process or via gRPC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Starting with Ansys 2022 R2, PyDPF can communication either In Process or via gRPC
Starting with Ansys 2022 R2, PyDPF can communicate either via In Process or via gRPC
with DPF C++ core server (``Ans.Dpf.Grpc.exe``). To choose which type of
:class:`ansys.dpf.core.server_types.BaseServer` (object defining the type of communication
and the server instance to communicate with) to use, a
:class:`ansys.dpf.core.server_factory.ServerConfig` class should be used.
Until Ansys 2022R1, only gRPC communication using python module ansys.grpc.dpf is supported
Until Ansys 2022R1, only gRPC communication using python module ``ansys.grpc.dpf`` is supported
(now called :class:`ansys.dpf.core.server_types.LegacyGrpcServer`), starting with Ansys 2022 R2,
three types of servers are supported:

@@ -45,9 +46,7 @@
###############################################################################
# Equivalent to:

in_process_config = dpf.ServerConfig(
protocol=None, legacy=False
)
in_process_config = dpf.ServerConfig(protocol=None, legacy=False)
grpc_config = dpf.ServerConfig(
protocol=dpf.server_factory.CommunicationProtocols.gRPC, legacy=False
)
@@ -64,14 +63,14 @@
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

in_process_field = dpf.fields_factory.create_scalar_field(2, server=in_process_server)
in_process_field.append([1.], 1)
in_process_field.append([2.], 2)
in_process_field.append([1.0], 1)
in_process_field.append([2.0], 2)
grpc_field = dpf.fields_factory.create_scalar_field(2, server=grpc_server)
grpc_field.append([1.], 1)
grpc_field.append([2.], 2)
grpc_field.append([1.0], 1)
grpc_field.append([2.0], 2)
legacy_grpc_field = dpf.fields_factory.create_scalar_field(2, server=legacy_grpc_server)
legacy_grpc_field.append([1.], 1)
legacy_grpc_field.append([2.], 2)
legacy_grpc_field.append([1.0], 1)
legacy_grpc_field.append([2.0], 2)

print(in_process_field, type(in_process_field._server), in_process_field._server)
print(grpc_field, type(grpc_field._server), grpc_field._server)
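For completeness, a minimal sketch (assuming Ansys 2022 R2 or later is installed locally) of how the three server instances used above could have been started, one per configuration:

in_process_server = dpf.start_local_server(
    config=dpf.AvailableServerConfigs.InProcessServer, as_global=False
)
grpc_server = dpf.start_local_server(
    config=dpf.AvailableServerConfigs.GrpcServer, as_global=False
)
legacy_grpc_server = dpf.start_local_server(
    config=dpf.AvailableServerConfigs.LegacyGrpcServer, as_global=False
)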
@@ -87,8 +86,8 @@

dpf.SERVER_CONFIGURATION = dpf.AvailableServerConfigs.GrpcServer
grpc_field = dpf.fields_factory.create_scalar_field(2)
grpc_field.append([1.], 1)
grpc_field.append([2.], 2)
grpc_field.append([1.0], 1)
grpc_field.append([2.0], 2)
print(grpc_field, type(grpc_field._server), grpc_field._server)

# Go back to default config: