Deploying to gh-pages from @ 0c9dc3c 🚀
holl- committed Jul 28, 2024
1 parent 1288116 commit 475b8a6
Showing 10 changed files with 85 additions and 85 deletions.
6 changes: 3 additions & 3 deletions Advantages_Data_Types.html
@@ -15151,10 +15151,10 @@ <h1 id="Why-%CE%A6ML-has-Precision-Management">Why &#934;<sub>ML</sub> has Preci


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-07-13 14:52:45.983420: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
-2024-07-13 14:52:46.021857: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+<pre>2024-07-28 19:53:23.652461: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
+2024-07-28 19:53:23.690755: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
-2024-07-13 14:52:46.809915: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+2024-07-28 19:53:24.467300: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
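
These TensorFlow startup messages (no CUDA drivers, no TensorRT) are expected on a CPU-only build runner; only their timestamps change between deployments. For context, if the notebooks wanted to silence them, the standard mechanism is the TF_CPP_MIN_LOG_LEVEL environment variable, set before TensorFlow is first imported — a minimal sketch:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # 0 = all logs, 1 = hide INFO, 2 = hide INFO+WARNING, 3 = errors only
import tensorflow as tf  # must be imported after setting the variable, or the filter has no effect
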
2 changes: 1 addition & 1 deletion Convert.html
@@ -15617,7 +15617,7 @@ <h2 id="Converting-Tensors">Converting Tensors<a class="anchor-link" href="#Conv


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-07-13 14:52:58.207239: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-07-28 19:53:35.954069: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
2 changes: 1 addition & 1 deletion Examples.html
@@ -15215,7 +15215,7 @@ <h3 id="Training-an-MLP">Training an MLP<a class="anchor-link" href="#Training-a


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-07-13 14:53:13.375528: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-07-28 19:53:51.350738: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
2 changes: 1 addition & 1 deletion Introduction.html
@@ -15349,7 +15349,7 @@ <h2 id="Usage-without-%CE%A6ML's-Tensors">Usage without &#934;<sub>ML</sub>'s Te


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-07-13 14:53:30.053480: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-07-28 19:54:08.482855: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
14 changes: 7 additions & 7 deletions Linear_Solves.html
@@ -16021,13 +16021,13 @@ <h2 id="Obtaining-Additional-Information-about-a-Solve">Obtaining Additional Inf


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>factor_ilu: auto-selecting iterations=1 (eager mode) for matrix <span class="ansi-blue-intense-fg">(2.000, 0.000)</span>; <span class="ansi-blue-intense-fg">(0.000, 1.000)</span> <span class="ansi-green-intense-fg">(b_vecᶜ=2, ~b_vecᵈ=2)</span> (DEBUG), 2024-07-13 14:53:45,582n
+<pre>factor_ilu: auto-selecting iterations=1 (eager mode) for matrix <span class="ansi-blue-intense-fg">(2.000, 0.000)</span>; <span class="ansi-blue-intense-fg">(0.000, 1.000)</span> <span class="ansi-green-intense-fg">(b_vecᶜ=2, ~b_vecᵈ=2)</span> (DEBUG), 2024-07-28 19:54:23,863n

-TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False)] (DEBUG), 2024-07-13 14:53:45,597n
+TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False)] (DEBUG), 2024-07-28 19:54:23,878n

-Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;,) containing 1 native tensors (DEBUG), 2024-07-13 14:53:45,597n
+Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;,) containing 1 native tensors (DEBUG), 2024-07-28 19:54:23,879n

-Performing linear solve scipy-CG with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-07-13 14:53:45,601n
+Performing linear solve scipy-CG with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-07-28 19:54:23,883n

</pre>
</div>
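
The replaced lines above are re-timestamped runs of the same solver-diagnostics cell. For context, DEBUG logs of this shape come from a Φ-ML linear solve; below is a minimal sketch mirroring the matrix, method, and tolerances in the log, assuming phiml's documented solve_linear/Solve API (values are illustrative):

from phiml import math
from phiml.math import channel, dual

# 2x2 matrix matching the "(2.000, 0.000); (0.000, 1.000)" (b_vecᶜ=2, ~b_vecᵈ=2) tensor in the log
matrix = math.tensor([[2., 0.], [0., 1.]], channel('b_vec'), dual('b_vec'))
y = math.tensor([1., 2.], channel('b_vec'))  # right-hand side; values are illustrative

# scipy-CG with 1e-05 relative/absolute tolerance and max_iterations=1000, as reported above
solve = math.Solve('scipy-CG', rel_tol=1e-5, abs_tol=1e-5, max_iterations=1000)
x = math.solve_linear(matrix, y, solve)
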
@@ -16155,11 +16155,11 @@ <h2 id="Linear-Solves-with-Native-Tensors">Linear Solves with Native Tensors<a c


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False), ((2, 2), False)] (DEBUG), 2024-07-13 14:53:45,624n
+<pre>TorchScript -&gt; run compiled forward &#39;_matrix_solve_forward&#39; with args [((2,), False), ((2, 2), False)] (DEBUG), 2024-07-28 19:54:23,907n

-Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;, &#39;matrix&#39;) containing 2 native tensors (DEBUG), 2024-07-13 14:53:45,625n
+Running forward pass of custom op forward &#39;_matrix_solve_forward&#39; given args (&#39;y&#39;, &#39;matrix&#39;) containing 2 native tensors (DEBUG), 2024-07-28 19:54:23,907n

-Performing linear solve auto with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-07-13 14:53:45,628n
+Performing linear solve auto with tolerance <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (rel), <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">1e-05</span> (abs), max_iterations=<span class="ansi-blue-intense-fg">1000</span> with backend torch (DEBUG), 2024-07-28 19:54:23,911n

</pre>
</div>
18 changes: 9 additions & 9 deletions N_Dimensional.html
@@ -15250,7 +15250,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-blue-intense-fg">(0.564, 0.519, 0.587, 0.783, 0.531)</span> along <span class="ansi-green-intense-fg">xˢ</span></pre>
+<pre><span class="ansi-blue-intense-fg">(0.097, 0.693, 0.524, 0.273, 0.902)</span> along <span class="ansi-green-intense-fg">xˢ</span></pre>
</div>

</div>
@@ -15289,7 +15289,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-blue-intense-fg">0.549 ± 0.122</span> <span class="ansi-white-fg">(4e-01...8e-01)</span></pre>
+<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-blue-intense-fg">0.456 ± 0.093</span> <span class="ansi-white-fg">(3e-01...6e-01)</span></pre>
</div>

</div>
@@ -15328,7 +15328,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-blue-intense-fg">0.502 ± 0.117</span> <span class="ansi-white-fg">(1e-01...9e-01)</span></pre>
+<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-blue-intense-fg">0.502 ± 0.114</span> <span class="ansi-white-fg">(8e-02...9e-01)</span></pre>
</div>

</div>
@@ -15479,7 +15479,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-blue-intense-fg">((2.983108+0j), (-0.71015877+0.40726614j), (-0.16978076+0.22251278j), (-0.16978076-0.22251278j), (-0.71015877-0.40726614j))</span> along <span class="ansi-green-intense-fg">xˢ</span> <span class="ansi-yellow-intense-fg">complex64</span></pre>
+<pre><span class="ansi-blue-intense-fg">((2.4896204+0j), (-0.17790557+0.16362576j), (1.1701412-0.4473223j), (1.1701412+0.4473223j), (-0.17790557-0.16362576j))</span> along <span class="ansi-green-intense-fg">xˢ</span> <span class="ansi-yellow-intense-fg">complex64</span></pre>
</div>

</div>
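
The complex values above are re-rolled FFT outputs over a fresh random grid; note the Hermitian symmetry (conjugate pairs) expected when transforming real-valued input. A minimal sketch of a cell producing output of this shape, assuming phiml's random_uniform and fft:

from phiml import math
from phiml.math import spatial

grid = math.random_uniform(spatial(x=5))  # five random values along the spatial dim xˢ
freq = math.fft(grid)                     # complex64 spectrum along xˢ, as displayed above
# The first entry is the sum of all samples; the rest come in conjugate pairs
# because the input is real-valued.
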
@@ -15518,7 +15518,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 4.944019794464111</span></pre>
+<pre><span class="ansi-green-intense-fg">(xˢ=3, yˢ=3)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 4.102997779846191</span></pre>
</div>

</div>
@@ -15557,7 +15557,7 @@ <h2 id="Grids">Grids<a class="anchor-link" href="#Grids">&#182;</a></h2><p>Grids


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 2056.690673828125</span></pre>
+<pre><span class="ansi-green-intense-fg">(xˢ=16, yˢ=16, zˢ=16)</span> <span class="ansi-yellow-intense-fg">complex64</span> <span class="ansi-blue-intense-fg">|...| &lt; 2054.987060546875</span></pre>
</div>

</div>
@@ -15672,7 +15672,7 @@ <h2 id="Dimensions-as-Components">Dimensions as Components<a class="anchor-link"


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.252 ± 0.216</span> <span class="ansi-white-fg">(0e+00...5e-01)</span></pre>
+<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.411 ± 0.353</span> <span class="ansi-white-fg">(0e+00...9e-01)</span></pre>
</div>

</div>
@@ -15711,7 +15711,7 @@ <h2 id="Dimensions-as-Components">Dimensions as Components<a class="anchor-link"


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.392 ± 0.321</span> <span class="ansi-white-fg">(0e+00...8e-01)</span></pre>
+<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.394 ± 0.279</span> <span class="ansi-white-fg">(0e+00...8e-01)</span></pre>
</div>

</div>
@@ -15750,7 +15750,7 @@ <h2 id="Dimensions-as-Components">Dimensions as Components<a class="anchor-link"


<div class="jp-RenderedText jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/plain">
-<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.648 ± 0.424</span> <span class="ansi-white-fg">(0e+00...1e+00)</span></pre>
+<pre><span class="ansi-green-intense-fg">(othersⁱ=4, pointsⁱ=4)</span> <span class="ansi-blue-intense-fg">0.400 ± 0.270</span> <span class="ansi-white-fg">(0e+00...8e-01)</span></pre>
</div>

</div>
16 changes: 8 additions & 8 deletions Networks.html

Large diffs are not rendered by default.

34 changes: 17 additions & 17 deletions Performance.html
@@ -15212,8 +15212,8 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>Φ-ML + torch JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.20320182</span>
-Φ-ML + torch execution average: 0.034612301737070084 +- 0.005505578592419624
+<pre>Φ-ML + torch JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.20953292</span>
+Φ-ML + torch execution average: 0.034011512994766235 +- 0.002126829931512475
</pre>
</div>
</div>
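
Only the measured numbers change here: compile time and per-call averages naturally vary between CI runs. For context, a benchmark printing lines of this form could be structured as below — a minimal sketch assuming phiml's jit_compile; step is a hypothetical stand-in for the notebook's actual workload:

import time
from phiml import math
from phiml.math import spatial

@math.jit_compile
def step(x):
    return math.sum(x ** 2)  # stand-in workload; the notebook benchmarks its own simulation function

x = math.random_normal(spatial(x=1024))

t0 = time.perf_counter()
step(x)  # with a torch/jax/tensorflow backend active, the first call triggers tracing + compilation
print(f"JIT compilation: {time.perf_counter() - t0}")

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    step(x)
    samples.append(time.perf_counter() - t0)
print(f"execution average: {sum(samples[2:]) / len(samples[2:])}")  # skip warm-up calls, as the notebook does
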
@@ -15233,8 +15233,8 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>Φ-ML + jax JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.16282232</span>
-Φ-ML + jax execution average: 0.011867190711200237 +- 0.0008712700218893588
+<pre>Φ-ML + jax JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">0.16060515</span>
+Φ-ML + jax execution average: 0.011955487541854382 +- 0.0008933879435062408
</pre>
</div>
</div>
@@ -15244,7 +15244,7 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>2024-07-13 14:54:23.111479: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+<pre>2024-07-28 19:55:01.586476: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</pre>
</div>
</div>
@@ -15254,8 +15254,8 @@ <h2 id="Performance-and-JIT-compilation">Performance and JIT-compilation<a class


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>Φ-ML + tensorflow JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">13.247753</span>
-Φ-ML + tensorflow execution average: 0.054097965359687805 +- 0.003578002331778407
+<pre>Φ-ML + tensorflow JIT compilation: <span class="ansi-yellow-intense-fg">float64</span> <span class="ansi-blue-intense-fg">14.0247</span>
+Φ-ML + tensorflow execution average: 0.05318400636315346 +- 0.0023054108023643494
</pre>
</div>
</div>
@@ -15361,8 +15361,8 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>jax JIT compilation: 0.13371273299998165
-jax execution average: 0.01018896040404034
+<pre>jax JIT compilation: 0.14152707099998452
+jax execution average: 0.010202408323232479
</pre>
</div>
</div>
@@ -15443,11 +15443,11 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>/tmp/ipykernel_2662/3571425526.py:12: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
+<pre>/tmp/ipykernel_2613/3571425526.py:12: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
dist = torch.sqrt(torch.maximum(torch.sum(deltas ** 2, -1), torch.tensor(1e-4))) # eps=1e-4 to avoid NaN during backprop of sqrt
-/tmp/ipykernel_2662/3571425526.py:20: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
+/tmp/ipykernel_2613/3571425526.py:20: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
x_inc_contrib = torch.sum(torch.where(has_impact.unsqueeze(-1), torch.minimum(impact_time.unsqueeze(-1) - dt, torch.tensor(0.0)) * impulse, torch.tensor(0.0)), -2)
-/tmp/ipykernel_2662/3571425526.py:22: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
+/tmp/ipykernel_2613/3571425526.py:22: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
v += torch.sum(torch.where(has_impact.unsqueeze(-1), impulse, torch.tensor(0.0)), -2)
</pre>
</div>
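
Only the kernel PID in these warning paths changed (ipykernel_2662 → ipykernel_2613). The TracerWarning itself is standard torch.jit.trace behavior: tensors created with torch.tensor(...) inside a traced function are recorded as constants in the graph. A minimal sketch reproducing and avoiding it (function names are illustrative):

import torch

def f(x):
    # torch.tensor(1e-4) is built inside the traced function, so torch.jit.trace
    # bakes it into the graph as a constant -- this triggers the TracerWarning above.
    return torch.sqrt(torch.maximum(torch.sum(x ** 2, -1), torch.tensor(1e-4)))

traced = torch.jit.trace(f, torch.randn(8, 3))  # emits the TracerWarning

# Harmless here, since 1e-4 is the same on every call. Hoisting the constant
# out of the traced function silences the warning:
EPS = torch.tensor(1e-4)

def f_quiet(x):
    return torch.sqrt(torch.maximum(torch.sum(x ** 2, -1), EPS))

traced_quiet = torch.jit.trace(f_quiet, torch.randn(8, 3))  # no warning
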
@@ -15458,8 +15458,8 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>torch JIT compilation: 0.03671777620911598
-torch execution average: 0.03342506289482117
+<pre>torch JIT compilation: 0.036918748170137405
+torch execution average: 0.03333647921681404
</pre>
</div>
</div>
@@ -15469,7 +15469,7 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr">
-<pre>/tmp/ipykernel_2662/3571425526.py:45: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
+<pre>/tmp/ipykernel_2613/3571425526.py:45: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
print(f&#34;torch execution average: {torch.mean(torch.tensor(dt_torch[2:]))}&#34;)
</pre>
</div>
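
Again only the kernel PID differs. The UserWarning fires because torch.tensor(...) was applied to data that already contains tensors; PyTorch recommends clone().detach() (or torch.stack for a list) when copy-constructing. A minimal sketch, with a hypothetical dt_torch standing in for the notebook's timing list:

import torch

dt_torch = [torch.tensor(0.05), torch.tensor(0.034), torch.tensor(0.033)]  # hypothetical timings

noisy = torch.mean(torch.tensor(dt_torch))  # re-wraps existing tensors -> emits the UserWarning above
quiet = torch.mean(torch.stack(dt_torch))   # builds the 1-D tensor without re-wrapping; no warning
# For a single tensor t, the recommended copy is t.clone().detach().
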
@@ -15545,8 +15545,8 @@ <h2 id="Native-Implementations">Native Implementations<a class="anchor-link" hre


<div class="jp-RenderedText jp-OutputArea-output" data-mime-type="text/plain">
-<pre>tensorflow JIT compilation: 0.37130433320999146
-tensorflow execution average: 0.038255028426647186
+<pre>tensorflow JIT compilation: 0.20772871375083923
+tensorflow execution average: 0.038236696273088455
</pre>
</div>
</div>