diff --git a/environment.yml b/environment.yml index 3befff22..0c7419eb 100644 --- a/environment.yml +++ b/environment.yml @@ -4,8 +4,8 @@ channels: dependencies: - python=3.10 - pip - - pandas - - numpy + - pandas>=2.0 + - numpy>=1.25 - matplotlib - jupyter - notebook diff --git a/etc/test_my_install.ipynb b/etc/test_my_install.ipynb index d3391d3f..ef6b9fc7 100644 --- a/etc/test_my_install.ipynb +++ b/etc/test_my_install.ipynb @@ -23,8 +23,12 @@ "import matplotlib.pyplot as plt\n", "\n", "print(\"python\",sys.version)\n", + "assert sys.version.startswith(\"3.10\")\n", "print(\"pandas\",pd.__version__)\n", + "assert pd.__version__.startswith(\"2\")\n", "print(\"numpy\",np.version.version)\n", + "assert np.version.version.startswith(\"1.2\")\n", + "assert int(np.version.version.split(\".\")[1]) >= 25\n", "#todo: add some asserts for min versions...\n", "\n", "import flopy\n", diff --git a/tutorials/clear_output_all.py b/tutorials/clear_output_all.py index 5f13fdda..6151a46b 100644 --- a/tutorials/clear_output_all.py +++ b/tutorials/clear_output_all.py @@ -1,15 +1,11 @@ import os dirs = [d for d in os.listdir(".") if os.path.isdir(d)] +notebook_count = 0 dirs.append(os.path.join("..","etc")) for d in dirs: nb_files = [os.path.join(d,f) for f in os.listdir(d) if f.lower().endswith(".ipynb")] for nb_file in nb_files: print("clearing",nb_file) os.system("jupyter nbconvert --ClearOutputPreprocessor.enabled=True --ClearMetadataPreprocessor.enabled=True --inplace {0}".format(nb_file)) - -# d = os.path.join("..","etc") -# if os.path.exists(d): -# nb_files = [os.path.join(d,f) for f in os.listdir(d) if f.lower().endswith(".ipynb")] -# for nb_file in nb_files: -# print("clearing",nb_file) -# os.system("jupyter nbconvert --ClearOutputPreprocessor.enabled=True --ClearMetadataPreprocessor.enabled=True --inplace {0}".format(nb_file)) + notebook_count += 1 +print(notebook_count,"notebooks cleared") diff --git 
a/tutorials/part1_06_glm_response_surface/freyberg_glm_response_surface.ipynb b/tutorials/part1_06_glm_response_surface/freyberg_glm_response_surface.ipynb index bcf72a70..b02df430 100644 --- a/tutorials/part1_06_glm_response_surface/freyberg_glm_response_surface.ipynb +++ b/tutorials/part1_06_glm_response_surface/freyberg_glm_response_surface.ipynb @@ -68,7 +68,7 @@ "# a dir to hold a copy of the org model files\n", "tmp_d = os.path.join('freyberg_mf6')\n", "\n", - "runflag= False\n", + "runflag= True\n", "\n", "if runflag==False:\n", " print('Assuming PEST++SWP has bene run already and the folder with files is available')\n", @@ -201,7 +201,7 @@ "source": [ "Make a plot of the response surface for `hk1` (x-axis) and `rch0` (y-axis). The colored contours indicate the objective function value for each combination of these two parameters. \n", "\n", - "As you can see, a long eliptical \"trough\" of optimal values is formed (grey zone). Parameter combinations in this zone all result in equivalent levels of \"good fit\"." + "As you can see, a long elliptical \"trough of despair\" of optimal values is formed. Parameter combinations in this zone all result in equivalent levels of \"good fit\" to the observation dataset. The trough of despair is an example of non-uniqueness in graphical form. A problem that arises is that, while many combinations of recharge and HK can fit the observation dataset, the forecasts of interest made with the \"calibrated\" model could be highly sensitive to the value of HK and/or recharge, and a single \"calibrated\" model can't represent this non-uniqueness under forecasting conditions. #uncertaintyanalysis" ] }, { @@ -283,7 +283,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "And plot it up again. Now we see the objective function surface funneling down to a single point. We have achieved a unique solution." + "And plot it up again. Now we see the objective function surface funneling down to a single point. 
We have achieved a unique solution. The \"trough of despair\" has become the \"bowl of uniqueness\"! A clear demonstration of the value of unique and diverse data..." ] }, { @@ -301,7 +301,7 @@ "source": [ "# Understanding Lambdas\n", "\n", - "When used to undertake highly parameterized inversion, PESTPP-GLM implements theory and methodologies that are programmed into PEST. Some theory, employing matrices and vectors, is used to describe the linearized inverse problem on which so-called “gradient methods” are based. Through repeated linearization of the inverse problem over successive iterations, these methods achieve their purpose of model calibration, notwithstanding the nonlinear relationship that exists between model outputs and model parameters. \n", + "When used to undertake highly parameterized inversion, PESTPP-GLM implements theory and methodologies that are programmed into PEST. Some theory, employing matrices and vectors, is used to describe the linearized inverse problem on which so-called “gradient methods” are based. Through repeated linearization of the inverse problem over successive iterations, these methods achieve their purpose of model calibration, notwithstanding the nonlinear relationship that exists between model outputs and model parameters. It should also be noted that PESTPP-IES also implements the GLM solution, but for an ensemble of parameter vectors. So the single trajectory below can be thought of as one of the many trajectories that the ensemble of parameter vectors takes.\n", "\n", "Nonlinear model behaviour is also accommodated by introducing a so-called \"Marquardt lambda\" to these equations. Employing a nonzero lambda tweaks the direction of parameter improvement so that it is more aligned with the objective function gradient. 
This increases the efficiency of early iterations of the inversion process when implemented in conjunction with a nonlinear model.\n", "\n", @@ -311,7 +311,9 @@ "\n", "However, as the objective function minimum is approached, the process becomes more eficient if smaller lambdas are used. This avoids the phenomenon known as \"hemstitching\", in which parameter upgrades jump-across small, thin \"troughs\" in the objective function surface. \n", "\n", - "See the PEST Book (Doherty, 2015) for more details.\n" + "Note again, the effect of lambda on the parameter upgrade is the same in PESTPP-IES.\n", + "\n", + "See the PEST Book (Doherty, 2015) and the PEST++ users manual for more details.\n" ] }, { diff --git a/tutorials/part1_11_local_and_global_sensitivity/freyberg_2_global_sensitivity.ipynb b/tutorials/part1_11_local_and_global_sensitivity/freyberg_2_global_sensitivity.ipynb index 03bbe319..ecb3f42c 100644 --- a/tutorials/part1_11_local_and_global_sensitivity/freyberg_2_global_sensitivity.ipynb +++ b/tutorials/part1_11_local_and_global_sensitivity/freyberg_2_global_sensitivity.ipynb @@ -248,8 +248,26 @@ "metadata": {}, "outputs": [], "source": [ + "df = df.loc[df.sen_mean_abs>1e-6,:]\n", "df.loc[:,[\"sen_mean_abs\",\"sen_std_dev\"]].plot(kind=\"bar\", figsize=(13,4))\n", - "plt.yscale('log');" + "#ax = plt.gca()\n", + "#ax.set_ylim(1,ax.get_ylim()[1]*1.1)\n", + "plt.yscale('log');\n", + "fig,ax = plt.subplots(1,1,figsize=(13,8))\n", + "tmp_df = df\n", + "ax.scatter(tmp_df.sen_mean_abs,tmp_df.sen_std_dev,marker=\"^\",s=20,c=\"r\")\n", + "tmp_df = tmp_df.iloc[:8]\n", + "for x,y,n in zip(tmp_df.sen_mean_abs,tmp_df.sen_std_dev,tmp_df.index):\n", + " ax.text(x,y,n)\n", + "mx = max(ax.get_xlim()[1],ax.get_ylim()[1])\n", + "mn = min(ax.get_xlim()[0],ax.get_ylim()[0])\n", + "ax.plot([mn,mx],[mn,mx],\"k--\")\n", + "ax.set_ylim(mn,mx)\n", + "ax.set_xlim(mn,mx)\n", + "ax.grid()\n", + "ax.set_ylabel(\"$\\\\sigma$\")\n", + "ax.set_xlabel(\"$\\\\mu^*$\")\n", + 
"plt.tight_layout()\n" ] }, { @@ -307,7 +325,21 @@ " tmp_df = df_pred_sen.loc[df_pred_sen.observation_name==forecast].sort_values(by='sen_mean_abs', ascending=False)\n", " tmp_df.plot(x=\"parameter_name\",y=[\"sen_mean_abs\",\"sen_std_dev\"],kind=\"bar\", figsize=(13,2.5))\n", " plt.title(forecast)\n", - " plt.yscale('log');" + " plt.yscale('log');\n", + " fig,ax = plt.subplots(1,1,figsize=(13,8))\n", + " ax.scatter(tmp_df.sen_mean_abs,tmp_df.sen_std_dev,marker=\"^\",s=20,c=\"r\")\n", + " tmp_df = tmp_df.iloc[:8]\n", + " for x,y,n in zip(tmp_df.sen_mean_abs,tmp_df.sen_std_dev,tmp_df.parameter_name):\n", + " ax.text(x,y,n)\n", + " mx = max(ax.get_xlim()[1],ax.get_ylim()[1])\n", + " mn = min(ax.get_xlim()[0],ax.get_ylim()[0])\n", + " ax.plot([mn,mx],[mn,mx],\"k--\")\n", + " ax.set_ylim(mn,mx)\n", + " ax.set_xlim(mn,mx)\n", + " ax.grid()\n", + " ax.set_ylabel(\"$\\\\sigma$\")\n", + " ax.set_xlabel(\"$\\\\mu^*$\")\n", + " plt.tight_layout()\n" ] }, { @@ -318,6 +350,13 @@ "\n", "As we saw above for parameters, once again σ is very high (for almost all parameters...). This suggests either non-linearity and/or parameter interactions. Relying on linear methods for uncertainty analysis is therefore compromised. Ideally we should employ non-linear methods, as will be discussed in the subsequent tutorial." ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/tutorials/part2_06_ies/freyberg_ies_1_basics.ipynb b/tutorials/part2_06_ies/freyberg_ies_1_basics.ipynb index b6790be2..a3dfbf4d 100644 --- a/tutorials/part2_06_ies/freyberg_ies_1_basics.ipynb +++ b/tutorials/part2_06_ies/freyberg_ies_1_basics.ipynb @@ -737,9 +737,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Ruh roh! The posterior isn't covering the correct values for some of forecasts. Major bad times. \n", + "Ruh roh! The posterior isn't covering the correct values (with lots of realizations) for some of forecasts. 
#badtimes. \n", "\n", - "But hold on! The prior does (nearly) cover the true values for all forecasts. So that implies there is somewhere between the prior and posterior we have now, which is optimal with respect to all forecasts. Hmmm...so this means that history matching made our prediction worse. We have incurred forecast-sensitive bias through the parameter adjustment process. How can we fit historical data so well but get the \"wrong\" answer for some of the forecasts?\n", + "But hold on! The prior does (nearly) cover the true values for all forecasts. So that implies that somewhere between the prior and the posterior we have now, there is an ensemble which is optimal with respect to all forecasts. Hmmm...so this means that history matching to a high level of fit made our prediction worse. We have incurred forecast-sensitive bias through the parameter adjustment process. How can we fit historical data so well but get the \"wrong\" answer for some of the forecasts?\n", "\n", "> **Important Aside**! When you are using an imperfect model (compared to the truth), the link between a \"good fit\" and robust forecast is broken. A good fit does not mean a good forecaster! This is particularily the case for forecasts that are sensitive to combinations of parameters that occupy the history-matching null space (see [Doherty and Moore (2020)](https://s3.amazonaws.com/docs.pesthomepage.org/documents/model_complexity_monograph.pdf) for a discussion of these concepts). In other words, forecasts which rely on (combinations of) parameters that are not informed by available observation data. (In our case, an example is the \"headwater\" forecast.)\n", "\n", @@ -826,7 +826,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Interesting! That's a bit better. What have we done? We've accepted \"more uncertainty\" for a reduced propensity of inducing forecast bias. Perhaps if we had more realizations we would have gotten a wider sample of the posterior? 
But even if we hadn't failed to capture the truth, in the real-world how would we know? So...should we just stick with prior? (Assuming the prior is adequately described...) Feeling depressed yet? Worry not, in our next tutorial we will introduce some coping strategies. \n", + "Interesting! That's better for some forecasts, but not for others. What have we done? We've accepted \"more uncertainty\" for a reduced propensity of inducing forecast bias. Perhaps if we had more realizations we would have gotten a wider sample of the posterior? But even if we hadn't failed to capture the truth, in the real-world how would we know? So...should we just stick with prior? (Assuming the prior is adequately described...) Feeling depressed yet? Worry not, in our next tutorial we will introduce some coping strategies. \n", "\n", "In summary, we have learnt:\n", " - How to configure and run PESTPP-IES\n", @@ -904,9 +904,9 @@ "metadata": {}, "outputs": [], "source": [ - "pr_oe = pyemu.ObservationEnsemble.from_csv(pst=pst,filename=os.path.join(m_d,\"freyberg_mf6.0.obs.csv\"))\n", - "pt_oe = pyemu.ObservationEnsemble.from_csv(pst=pst,filename=os.path.join(m_d,\"freyberg_mf6.{0}.obs.csv\".format(pst.control_data.noptmax)))\n", - "noise = pyemu.ObservationEnsemble.from_csv(pst=pst,filename=os.path.join(m_d,\"freyberg_mf6.obs+noise.csv\"))" + "pr_oe_more = pyemu.ObservationEnsemble.from_csv(pst=pst,filename=os.path.join(m_d,\"freyberg_mf6.0.obs.csv\"))\n", + "pt_oe_more = pyemu.ObservationEnsemble.from_csv(pst=pst,filename=os.path.join(m_d,\"freyberg_mf6.{0}.obs.csv\".format(pst.control_data.noptmax)))\n", + "noise_more = pyemu.ObservationEnsemble.from_csv(pst=pst,filename=os.path.join(m_d,\"freyberg_mf6.obs+noise.csv\"))" ] }, { @@ -915,8 +915,8 @@ "metadata": {}, "outputs": [], "source": [ - "fig = plot_forecast_hist_compare(pt_oe=pt_oe_iter,pr_oe=pr_oe,\n", - " last_pt_oe=pt_oe,last_prior=pr_oe\n", + "fig = plot_forecast_hist_compare(pt_oe=pt_oe_more,pr_oe=pr_oe_more,\n", + " 
last_pt_oe=pt_oe_iter,last_prior=pr_oe\n", " )" ] }, @@ -924,8 +924,15 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "having more realizations has two benefits: more samples for uncertainty analysis and better resolution of the empirical first-order relations between parameters and observations. In this case, these two combined effects have helped us (nearly) bracket the true value for each forecast - yeh! So always use a many realizations as you can tolerate!" + "Having more realizations has two benefits: more samples for uncertainty analysis and better resolution of the empirical first-order relations between parameters and observations. In this case, these two combined effects have helped us better bracket the true value for each forecast - yeh! So always use as many realizations as you can tolerate!" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/tutorials/part2_06_ies/freyberg_ies_2_localization.ipynb b/tutorials/part2_06_ies/freyberg_ies_2_localization.ipynb index 437ea175..7153f1c2 100644 --- a/tutorials/part2_06_ies/freyberg_ies_2_localization.ipynb +++ b/tutorials/part2_06_ies/freyberg_ies_2_localization.ipynb @@ -275,7 +275,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As you recall, the posterior fails to capture the truth for some forecass:" + "As you recall, the posterior fails to capture the truth, even with a good number of realizations, for some forecasts:" ] }, { @@ -604,7 +604,7 @@ "# up to 180 days into the past; not in the future. 
\n", "#Parameters that are beyond the historic period are not informed by observations\n", "nz_obs = pst.observation_data.loc[pst.nnz_obs_names,:]\n", - "cutoff = 180\n", + "cutoff = 10000 # you can experiement with this number to control the \"system memory\" between forcings and water levels\n", "times = nz_obs.time.unique()\n", "times.sort()\n", "for time in times:\n", @@ -626,9 +626,9 @@ "fig,ax = plt.subplots(1,1,figsize=(20,20))\n", "ax.imshow(loc.values)\n", "ax.set_xticks(np.arange(loc.shape[1]))\n", - "ax.set_xticklabels(loc.columns.values,rotation=90)\n", + "ax.set_xticklabels(loc.columns.values,rotation=90,fontsize=18)\n", "ax.set_yticks(np.arange(loc.shape[0]))\n", - "_ = ax.set_yticklabels(loc.index.values)" + "_ = ax.set_yticklabels(loc.index.values,fontsize=18)" ] }, { @@ -693,7 +693,7 @@ "metadata": {}, "outputs": [], "source": [ - "pst.pestpp_options[\"ies_num_reals\"] = 30 # in theory, with localization we can get by with less reals...lets see!" + "pst.pestpp_options[\"ies_num_reals\"] = 50 # in theory, with localization we can get by with less reals... feel like #livingdangerously?" ] }, { @@ -934,7 +934,7 @@ "outputs": [], "source": [ "# the cutoff distance\n", - "loc_dist = 5000.0\n", + "loc_dist = 5000.0 # arbitrary!\n", "# prepare a set of adjustable parameter names\n", "sadj = set(pst.adj_par_names)\n", "#select only spatial params to avoid applying to layer-wide multiplier parameters\n", @@ -1017,7 +1017,7 @@ "source": [ "A final consideration. \n", "\n", - "Through localization, a complex parameter estimation problem can be turned into a series of independent parameter estimation problems. If large numbers of parameters are being adjusted, the parameter upgrade calculation process for a given lambda will require as many truncated SVD solves as there are adjustable parameters. This can require considerable numerical effort. 
To overcome this problem, the localized upgrade solution process in PESTP++IES has been multithreaded; this is possible in circumstances such as these where each local solve is independent of every other local solve. The use of multiple threads is invoked through the `ies_num_threads()` control variable. It should be noted that the optimal number of threads to use is problem-specific. Furthermore, it should not exceed the number of physical cores of the host machine on which the PEST++IES master instance is running.\n", + "Through localization, a complex parameter estimation problem can be turned into a series of independent parameter estimation problems. If large numbers of parameters are being adjusted, the parameter upgrade calculation process for a given lambda will require as many truncated SVD solves as there are adjustable parameters. This can require considerable numerical effort. To overcome this problem, the localized upgrade solution process in PESTPP-IES has been multithreaded; this is possible in circumstances such as these where each local solve is independent of every other local solve. The use of multiple threads is invoked through the `ies_num_threads()` control variable. It should be noted that the optimal number of threads to use is problem-specific. Furthermore, it should not exceed the number of physical cores of the host machine on which the PESTPP-IES master instance is running.\n", "\n", "However, the fully localized solve is still sssslllloooooowwwwwww. So if you have heaps of parameters (>30,000 say) it may actually be faster to use more realizations rather than use localization in terms of wall time - more realizations will over come the issues related to spurious correlation simply by having more samples to calculate the empirical derivatives with...but this depends on the runtime of the forward model as well. 
As usual, the answer is: \"It depends\" - haha!\n", "\n", @@ -1109,7 +1109,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this case, fits are similar to with the temporal localization:" + "In this case, fits are better than with temporal localization:" ] }, { @@ -1125,7 +1125,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "And the ever important foreacasts. Again, a bit more variance in the null-space dependent forecasts (i.e. particle travel time)." + "And the ever important forecasts. Again, a bit more variance in the null-space dependent forecasts (i.e. particle travel time) and a bit less variance in the more solution-space dependent forecasts #winning." ] }, { @@ -1141,9 +1141,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## PEST++IES with Automatic Adaptive Localization\n", + "## PESTPP-IES with Automatic Adaptive Localization\n", - "PEST++IES includes functionality for automatic localization. In practice, this form of localization doesn't resolve the level of localization that more rigorous explicit localization that you get through a localization matrix. However, its better than no localization at all. \n", + "PESTPP-IES includes functionality for automatic localization. In practice, this form of localization doesn't resolve the level of localization that you get through a more rigorous, explicit localization matrix. However, it's better than no localization at all. \n", "\n", "A localization matrix supplied by the user can be used in combination with automatic adaptive localization (autoadaloc). When doing so, autoadaloc process is restricted to to the allowed parameter-to-observation relations in the user specified localization matrix. The automated process will only ever adjust values in the localization matrix downwards (i.e. 
decrease the correlation coefficients).\n", "\n", @@ -1158,7 +1158,9 @@ "metadata": {}, "outputs": [], "source": [ - "#pst.pestpp_options.pop(\"ies_localizer\") #should you wish to try autoadaloc on its onw, simply drop the loc matrix\n", + "pst.pestpp_options.pop(\"ies_localizer\") #should you wish to try autoadaloc on its own, simply drop the loc matrix\n", + "# or use the temporal localizer and let the AAD process work within those rules:\n", + "pst.pestpp_options[\"ies_localizer\"] = \"loc.mat\"\n", "pst.pestpp_options[\"ies_autoadaloc\"] = True" ] }, { @@ -1185,7 +1187,8 @@ "metadata": {}, "outputs": [], "source": [ - "pst.pestpp_options[\"ies_autoadaloc_sigma_dist\"] = 1" + "pst.pestpp_options[\"ies_autoadaloc_sigma_dist\"] = 1\n", + "pst.pestpp_options[\"ies_num_threads\"] = 4" ] }, { @@ -1254,10 +1257,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Thus far we have implemented localization, a strategy to tackle spurious parameter-to-observation correlation. In doing so we reduce the potential for \"ensemble colapse\", a fancy term that means an \"underestimate of forecast uncertainty caused by artificial parameter-to-observation relations\". This solves history-matching induced through using ensemble based methods, but it does not solve a (the?) core issue - trying to \"perfectly\" fit data with an imperfect model will induce bias. \n", + "Not too shabby!\n", + "\n", + "Thus far we have implemented localization, a strategy to tackle spurious parameter-to-observation correlation. In doing so we reduce the potential for \"ensemble collapse\", a fancy term that means an \"underestimate of forecast uncertainty caused by artificial parameter-to-observation relations\". This addresses problems induced by using ensemble-based methods for history matching, but it does not solve a (the?) core issue - trying to \"perfectly\" fit data with an imperfect model will induce bias. 
\n", "\n", "Now, as we have seen, for some forecasts this is not a huge problem (these are data-driven forecasts, which are well informed by available observation data). For others, it is (these are the forecasts which are influenced by parameter combinations in the null space, that are not informed by observation data). But when undertaking modelling in the real world, we will rarely know where our forecast lies on that spectrum (probably somewhere in the middle...). So, better safe than sorry." ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/tutorials/part2_07_da/freyberg_da_prep.ipynb b/tutorials/part2_07_da/freyberg_da_prep.ipynb index ffe95b63..59b9b0d0 100644 --- a/tutorials/part2_07_da/freyberg_da_prep.ipynb +++ b/tutorials/part2_07_da/freyberg_da_prep.ipynb @@ -985,7 +985,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# OMG that was brutal" + "# OMG that was brutal\n", + "\n", + "Easily one of the worst notebooks ever made - all mechanics!" ] }, { @@ -1084,6 +1086,20 @@ " plt.tight_layout()" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, { "cell_type": "code", "execution_count": null, diff --git a/tutorials/part2_07_da/freyberg_da_run.ipynb b/tutorials/part2_07_da/freyberg_da_run.ipynb index 5623078a..eb8da5fa 100644 --- a/tutorials/part2_07_da/freyberg_da_run.ipynb +++ b/tutorials/part2_07_da/freyberg_da_run.ipynb @@ -324,6 +324,14 @@ "metadata": {}, "outputs": [], "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ce363c05-1617-40ab-82c7-babaf5844425", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": {