PRA plugin machine milestone adapted new data object (#626)
* added test files

* edits

* test for expanded ET

* Non-multilevel Optimizers reworked (#507)

* optimizers passing with the new DataObjects except multilevel

* Alfoa/data object rework (#509)

* fixed CustomSampler

* fixed test_output for new version of matplotlib

* added files

* edits

* Alfoa/test fix (#525)

* added _reassignSampledVarsPbToFullyCorrVars to Sampler base class.

* Script now offers a flag to change ignored files (#508)

-e can be followed by a comma separated list of directories or files
which will be ignored. Old behavior is to only ignore .git. New default
is .git and raven.
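The flag described above can be sketched with argparse; the parser setup and function name here are assumptions for illustration, not the script's actual source:

```python
import argparse

# Illustrative sketch of the new ignore flag: -e takes a comma-separated
# list of directories/files to skip.  The old behavior ignored only .git;
# the new default ignores .git and raven.
parser = argparse.ArgumentParser()
parser.add_argument('-e', '--exclude', default='.git,raven',
                    help='comma-separated list of directories/files to ignore')

def ignored_paths(argv):
    args = parser.parse_args(argv)
    return args.exclude.split(',')
```

With no arguments, `ignored_paths([])` yields the new default list; passing `-e` overrides it entirely.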

* fixed run_tests

* Framework in makefile (#520)

* Added hit to all requirements and clean

* Now bindings instead of binary

* Moves hit.so to usable location

* fixed another parsing error in tests file for framework/PostProcessors/TemporalDataMiningPostProcessor/Clustering.GaussianMixture

* added new ID of moose with fix on TestHarness

* fixed make file for windows

* rework: test harness fix (#526)

* fixed test harness to remove output files

* Asynchronous History Sampling (#511)

* fixed adding histories of different lengths through addRealization

* fixed restart MC typo

* Fix data mining for the new data object (#512)

* edit

* edits

* initial fix for data mining pp

* fix plot and pca

* fix plot and HS2PS

* fix temporal data mining pp

* fix time-dep data mining based on dimensionality reduction

* fix dataobjectfilter pp and datamining pp

* regold tests because of label switches and changed prints

* addition regold

* regold pca

* fix comments

* fix more comments

* fix time-dep basic statistics pp and history set sync pp (#515)

* fix time-dep basic statistics pp and history set sync pp

* add regold files

* fixed internal parallel test for PP (#521)

* update user manual for several post-processors (#523)

* update documents for basic statistics pp

* update documents for metric and cross validation pp

* update documents for ImportanceRank pp

* update documents for external post-processor

* update documents for data mining post processor

* resolve comments

* rework: raven running raven (#522)

* Script now offers a flag to change ignored files (#508)

-e can be followed by a comma separated list of directories or files
which will be ignored. Old behavior is to only ignore .git. New default
is .git and raven.

* test added

* improving analytic test doc

* Generalized typechecking (#524)

* isAType and unit tests

* converted dataobjects to use mathutils typechecking
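A minimal sketch of what such generalized typechecking can look like (illustrative only, not RAVEN's actual mathUtils API): native Python numbers and any NumPy numeric scalar are treated as one logical numeric type.

```python
import numpy as np

# Illustrative type check: accept native and NumPy numeric scalars,
# but reject bool (which is a subclass of int in Python).
def is_a_number(value):
    if isinstance(value, bool):
        return False
    return isinstance(value, (int, float, np.integer, np.floating))
```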

* edits

* regold historysetsyn tests (#529)

* starting working on DET

* Fix clustering with DTW and ETImporter PP (#531)

* fix solution export in Temporal data mining

* fix ETImporter and regold AffinityPropogation

* fix dtw

* External Model collection fix (#534)

* fixed external model and added a complexity test

* fixed external model to catch more variables

* Alfoa/incorrect test files (#535)

* fixed incorrect tests

* fixed custom mode

* removed unneeded stuff

* added regold file

* fixed AdaptiveBatch test files

* fixed test_Custom_Sampler_DataObject.xml

* fixes (#533)

*Fix interface pp for risk measure

* Fix xsd and PostProcess in raven tutorial (#536)

* fix postprocess test in the user_guide

* fix xsd

* comment out the postprocess output

* fixed gold file for getpot test (#539)

* rework: Multilevel optimizers (#537)

* fixed external model and added a complexity test

* fixed external model to catch more variables

* fixed pathing

* fixed infinite missing file

* optimizer tests working

* rework: CSV fixes and improvements (#538)

* loading from dict or CSV now extends existing data instead of replacing it

* fixed multiple history csv loading

* added failure check test

* added files

* edits

* initial implementation of markov categorical distribution

* rework: added libs to conda setup (#542)

* added xarray, netcdf4

* trying to fix moosebuild failure by adding separator keyword in csv reader in tester

* edits

* edits

* Deep Learning from Scikit-Learn (#547)

* add Multi-layer perceptron classifier and regressor into raven

* add tests for multi-layer perceptron neural network

* add manuals

* modify input files to avoid regold

* fix typos and update scikit-learn versions

* rework: Raven-runs-raven with ROM (#548)

* fixed time-dependent SKL ROM, added test

* slight regold in 5th sigfig for raven runs raven case

* rework: MOOSE submodule update (#550)

* update to latest moose master, includes test harness failure improvements

* compute the steady state probability

* reverted QA version of sklearn to 0.18 (#552)

* edit

* add gold files, fix parser error

* update tests

* add python module for external model

* edits

* edits

* edits

* fixed rrr example plugin test (#558)

* fix library version for skl (#559)

* edits

* edits

* edits

* initial implement for DataClassifier postprocessor

* update functions

* update test

* edits

* update tests, fix typos, add gold files

* added paper

* add the capability to handle input with type of HistorySet

* edits

* add documents for DataClassifier pp

* edit

* edits

* Hierarchical and DET for new DataObject (#532)

* fixed a portion of ensemble model

* added check for consistency for shape in indexes

* added unit test for checking index/var shapes

* fixed custom sampler

* updated revision

* addressed Hier issue

* fixed Hierarchical and DET tests (no adaptive yet)

* skipped a portion of the unit test since it is not finished yet

* fixed new modification in XSD

* addressed Paul and Conjiang's comments

* fixed MAAP5interfaceAHDETSampling and MAAP5interfaceADETSampling

* fixed framework/user_guide/ravenTutorial.singleRunPlot, framework/user_guide/ravenTutorial.singleRunSubPlot, framework/user_guide/ravenTutorial.RomLoad

* fixed user_guide heavy tests

* fixed start

* added ext report codes

* addressed Congjian comments (1of2)

* added documentation for hierarchical flag

* rework update cashflow (#560)

* updated CashFlow submodule id

* Code Clean Up (#567)

* clean up reassignSampledVarsPbToFullyCorrVars, because we add it to the Sampler base class

* move reassignPbWeightToCorrelatedVars to Sampler base class

* fix comments for PR#560

* fix pb in Monte Carlo and fix generic code interface test

* fix cluster tests (#566)

* fix cluster tests

* fix parallelPP test

* Alfoa/dataobject rework finalize ensemble (#565)

* Closes #541

* Update GenericCodeInterface.py

* fixed tester (#528)

* ensemble model pb weights for variables coming from functions

* fixed single-value-duplication error for SKL ROMs (#555)

* fixed single-value-duplication error

* fixed test framework/ensembleModelTests.testEnsembleModelWith2CodesAndAliasAndOptionalOutputs

* modified order of input output to avoid regolding

* Reducing DataObject Attribute Functionality (#278)

* Enabling the data attribute tests and fixing the operators for PointSets. TODO: Break the data_attributes test down to be more granular and fix the outputPivotValue on the HistorySets.

* Splitting the test files for the DataObject attributes and correcting some malformations in the subsequent input files. TODO: Fix the attributes for the history set when operating from a Model.

* Fixing HistorySet data attribute test case to look for the correct file.

* Correcting attributions for data object tests. maljdan had only moved the files. The original tests were designed by others. TODO: verify if test results are valid or the result of incorrect gold files.

* Reducing the number of DataObjects needed in the shared suite of DataObject attribute tests.

* Regolding the DataObject HistorySet attributes files to respect the outputPivotVal specified for stories2.

* Picking up where I left off, trying to recall what modifications still need to be done to the HistorySet.

* Regolding a test case on data attributes, removing dead code from the HistorySet and updating some aspects of the PointSet.

* Removing data attribute feature set with explanation in comments. Cleaning old code.

* Regolding fixed test case.

* Reverting changes to ensemble test and accommodating unstructured inputs.

* addressed misunderstanding in HistorySet

* added HSToPSOperator PP

* added documentation for new interface

* finished new PP

* addressed first comments

* addressed Congjian's comments

* updated XSD

* moving ahead

* fixed test framework/ensembleModelTests.testEnsembleModelLinearThreadWithTimeSeries

* fixed framework/ensembleModelTests.testEnsembleModelLinearParallelWithOptimizer

* fixed framework/CodeInterfaceTests.DymolaTestTimeDepNoExecutableEnsembleModel

* fixed framework/PostProcessors/InterfacedPostProcessor.metadataUsageInInterfacePP

* fixed new test files coming from devel

* updated InterfacedPP HStoPSOperator

* fixed xsd

* added documentation for DataSet

* added conversion script from old HDF5 to new HDF5

* Update DataObjects.xsd

* remove white space

* Update database_data.tex

* Update postprocessor.tex

* removed unneeded __init__ in Melcor interface

* addressed Congjian's comments

* ok

* moving

* moving ahead

* ok

* moving

* aaaa

* ok

* a

* CSV printing speedup (#570)

* Closes #541

* Update GenericCodeInterface.py

* fixed

* fixed tester (#528)

* ensemble model pb weights for variables coming from functions

* fixed single-value-duplication error for SKL ROMs (#555)

* fixed single-value-duplication error

* xsd

* fixed type

* fixed test framework/ensembleModelTests.testEnsembleModelWith2CodesAndAliasAndOptionalOutputs

* modified order of input output to avoid regolding

* ok

* Reducing DataObject Attribute Functionality (#278)

* Enabling the data attribute tests and fixing the operators for PointSets. TODO: Break the data_attributes test down to be more granular and fix the outputPivotValue on the HistorySets.

* Splitting the test files for the DataObject attributes and correcting some malformations in the subsequent input files. TODO: Fix the attributes for the history set when operating from a Model.

* Fixing HistorySet data attribute test case to look for the correct file.

* Correcting attributions for data object tests. maljdan had only moved the files. The original tests were designed by others. TODO: verify if test results are valid or the result of incorrect gold files.

* Reducing the number of DataObjects needed in the shared suite of DataObject attribute tests.

* Regolding the DataObject HistorySet attributes files to respect the outputPivotVal specified for stories2.

* Picking up where I left off, trying to recall what modifications still need to be done to the HistorySet.

* Regolding a test case on data attributes, removing dead code from the HistorySet and updating some aspects of the PointSet.

* Removing data attribute feature set with explanation in comments. Cleaning old code.

* Regolding fixed test case.

* Reverting changes to ensemble test and accommodating unstructured inputs.

* addressed misunderstanding in HistorySet

* added HSToPSOperator PP

* added documentation for new interface

* finished new PP

* addressed first comments

* addressed Congjian's comments

* updated XSD

* moving ahead

* fixed test framework/ensembleModelTests.testEnsembleModelLinearThreadWithTimeSeries

* last one almost done

* fixed framework/ensembleModelTests.testEnsembleModelLinearParallelWithOptimizer

* fixed framework/CodeInterfaceTests.DymolaTestTimeDepNoExecutableEnsembleModel

* almost done

* fixed framework/PostProcessors/InterfacedPostProcessor.metadataUsageInInterfacePP

* fixed new test files coming from devel

* updated InterfacedPP HStoPSOperator

* fixed xsd

* added documentation for DataSet

* added conversion script from old HDF5 to new HDF5

* Update DataObjects.xsd

* remove white space

* Update database_data.tex

* testing printing

* reverted to_csv for ND dataset.  Need a good test for multiple-index dataset printing.

* added benchmark results for numpy case
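The numpy-based printing path mentioned above can be sketched as follows (assumed setup, not the benchmarked code): for flat float tables, `np.savetxt` avoids the per-column overhead of pandas' `to_csv`.

```python
import io
import numpy as np

# Write a flat float table as CSV via numpy instead of pandas.
def write_csv(data, headers):
    buf = io.StringIO()
    np.savetxt(buf, data, delimiter=',', header=','.join(headers), comments='')
    return buf.getvalue()
```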

* Rework Ensemble for Indexes (#571)

* got the test case working WITH picard iteration, now working to sort it out so picard is not used

* works without picard

* cleanup

* fix for single residual values

* order change for xsd sake

* added user guide entry, added some slight additional testing

* gold file

* adding stuff

* xsd fix

* stash for syncing

* added tips and tricks in docs

* cleanup

* some comments addressed

* changed all raven entities to use UpperCaseCapitalization in sentences

* ok

* try

* finished DMD

* edit ensemble test

* Alfoa/performance improvement ensemble model (#581)

* removed pickling of TargetEvaluation

* removed pickling of Optional Outputs and removed specialization in the Step for ensembleModel

* changed name of local target evaluation

* changed name of local target evaluation

* addressed Congjian's comments and all except one of Paul's ones

* fixed remove in assembler

* graph time dep

* graph time dep

* ET TD

* add missing files

* resolve comments

* edits

* Talbpaul/rework maxqsize (#584)

* Closes #541

* Update GenericCodeInterface.py

* fixed

* fixed tester (#528)

* ensemble model pb weights for variables coming from functions

* stash

* fixed failing tests by adding maxqueuesize back to 1

* test revisions added

* revision author name

* edits

* added temporary walltime for codes

* edits

* fixed conflicts (#595)

* fixed conflicts

* fixed typo

* fixed one of the categorical cases

* fixed restart

* library fixes

* fixed netcdf4 specification

* fixing numpy version again

* trying numpy 1.14

* numpy 1.11

* test for missing variables in restart added

* numpy 1.14.0

* 1.14 with inclusion in conda list

* Update existing_interfaces.tex

* restarting with more conda version checking

* Skipping ARMA reseed test

* [rework] ExternalXML in RAVEN Code Interface (#596)

* Closes #541

* Update GenericCodeInterface.py

* fixed

* fixed tester (#528)

* ensemble model pb weights for variables coming from functions

* cherry picking, test is not passing

* fixed merge for rework

* Optimizer inherits from Sampler (#600)

* Job Profiling (#586)

* implements job profiling

* review comments

* locking down xarray library versions

* library change

* pandas version lock

* shuffled libraries according to discussion, pinned netcdf4

* trying pip package specs

* changed the printing strategy of profiles (#601)

* changed the printing strategy of profiles

* Update Runner.py

* added constant reading into solution export, also added test to verbosity test

* removed debug prints

* dummy change to run tests

* modified spline...almost done

* moving forward for Crow

* ok

* almost done

* ok

* Add "long" data type compatibility (#590)

* Closes #541

* Update GenericCodeInterface.py

* fixed

* fixed tester (#528)

* ensemble model pb weights for variables coming from functions

* added long to integer options, added unit test coverage

* version control

* testing library versions in RavenUtils

* found consistent library set

* revert utils changes

* patched up file closing for Windows

* moved 2 tests to unordered csv

* remove directory printing for history sets

* removed path from history set CSVs

* added verbosity for crosschecking

* temporarily skipping time warping cluster tests due to Windows failures

* returned outstream main printing

* reducing strictness of user guide forward grid sampling test

* fixed rel err in unordered csv differ

* working out bugs for UnorderedCSVDiffer

* tests passing, had to introduce zero threshold for two basic stats tests

* increased debugging verbosity for debugging linux failures

* faster version of thresholding, think it works for all types
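A hypothetical sketch of the relative-error check with a zero threshold, in the spirit of the UnorderedCSVDiffer fixes above (not the actual implementation): values below `zero_threshold` in magnitude are treated as exact zeros before the relative comparison.

```python
# Values smaller than zero_threshold are snapped to 0.0 first, so tiny
# platform-dependent residues near zero do not fail the relative check.
def values_match(test, gold, rel_err=1e-10, zero_threshold=None):
    if zero_threshold is not None:
        test = 0.0 if abs(test) < zero_threshold else test
        gold = 0.0 if abs(gold) < zero_threshold else gold
    if gold == 0.0:
        return abs(test) <= rel_err
    return abs(test - gold) / abs(gold) <= rel_err
```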

* now with less debug

* fixed nested XML reading (#603)

* finished fit for Spline

* aaa

* Fixes optimizer-runs-raven bug (#610)

* commented out initial setting of point probability to prevent unintended downstream interactions

* added verbosity to potential type failing, and regolded new prefixes (other values did not change)

* added a test

* added second test for spline interpolator

* ok now working on DMD

* Rw data final naming (#614)

* relocated utils, dataobject unit tests and renamed dataobjects

* relocated Files and Distributions unit tests as well

* copied necessary files back to main test dir

* ok

* Alfoa/scale6.2 (#608)

* added the parser

* moving

* ok

* finished interface

* added test + initial documentation

* added documentation for SCALE coupling

* missing regression tests

* addressed Diego's comments

* added test for Combine TRITON and ORIGEN

* added test for combined triton origen + added possibility in CustomSampler to use the functions

* addressed diego comments again

* revert old commit and address Diego's final comments

* typo in tests file

* added prereq in testExternalReseed to avoid conflict in parallel test execution

* updated XSD schema

* reset moose

* cleaning up

* Improved UnorderedCSVDiffer speed (#615)

* cleaned up

* cleanup

* checked out dataobject-rework tests file

* adding printing

* ok

* edits

* moving

* PRA plugin manual first edits

* added math utils

* edits

* moving ahead

* edits

* add manual for the DataClassifier in PRAPlugin

* almost done

* edits

* edit

* moving

* add manual for markov categorical distribution

* edit

* ok

* edits

* ok

* added description

* removed CSVs and added documentation

* reverted modification in basic stats

* edits

* edits

* edits

* edits

* removed files

* removed files

* added test for PolyExponential

* missing test for DMD

* added tests for DMD

* addressed Diego's comments

* added <revision> in TestInfo

* regolded PolyExponential tests since I shortened the time series

* added format for printing

* tolerances

* fixed coeff printing on screen for polyExp Poly

* added minimum scipy version

* remove xml checker for DMD since the eigenvalues are not necessarily ordered and consequently a spurious diff can happen
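The eigenvalue-ordering issue noted in the bullet above can be illustrated with a short sketch (names assumed): `np.linalg.eig` returns eigenvalues in no guaranteed order, so two equivalent runs can print them differently; sorting by magnitude and phase would give a stable order for diffing.

```python
import numpy as np

# Sort eigenvalues by (magnitude, phase) to get a deterministic order.
def stable_eig_order(eigvals):
    return sorted(eigvals, key=lambda z: (abs(z), np.angle(z)))
```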

* typo

* added import of differential_evolution only where it is required

* modified tests

* update test for markov distribution

* added comments

* expand install script for conda 4.4 and beyond (#618)

* expand install script for conda 4.4 and beyond

* added explanatory comments

* edits

* change the data classifier to use the new structure of DataObjects

* added missing files

* Multi-sample Variables (vector inputs) (#625)

* Optimizer inherits from Sampler

* first implementation: by default copy value to all entries in vector variable, works

* finished test and implementation of simple repeat-value vector variable sampling

* added InputSpecs for optimizer, tests pass

* got input params working for optimizer

* first implementation: by default copy value to all entries in vector variable, works

* finished test and implementation of simple repeat-value vector variable sampling

* added InputSpecs for optimizer, tests pass

* got input params working for optimizer

* stash

* fixed gradient calculation to include vectors, all non-vector tests passing

* fixed gradient calculation to include vectors, all non-vector tests passing, conditional sizing for vector grad

* boundary condition checking, all passing

* redundant trajectories, all passing

* same coordinate check

* dot product step sizing

* stochastic engine is incorrectly sized; currently each entry in vector is being perturbed identically.  Needs work.

* working on constraints, convergence is really poor, needs more help

* first boundary conditions (internal) working, although type change in precond test

* constraints fully done, only precond has a problem still, vector still not converging well

* debugging difference between all scalars and vector

* vector

* time parabola model

* fixed initial step size

* working, although as a vector is a bit slower than all scalars

* vector is faster than scalar, reduced scale of tests (and better solution)

* all passing, but precond, which is having the type error still

* cleaned up, removed scalar comparison test, fixed precond test

* cleanup

* last bit of cleanup, all tests passing

* stash, it appears customsampler and datasets are not yet compatible

* xsd

* stash, <what> cannot handle specific requests

* reloading from dataset csv works by default

* fixed unit test, vector test

* xsd

* CustomSampler handles Point,History,Data sets

* cleanup

* cleanup

* updated custom sampler description docs

* Optimizer uses Custom sampler with vector variables for initial points

* unnecessarily-tested file

* initial round of review comments

* script for disclaimer adding, also added to models in optimizing test dir

* increased verbosity for test debug

* more verbosity for debugging

* gold standard agrees with all test machines, personal cluster profile (my desktop finds the minimum in traj 1 of 36 instead of 220ish)

* new golds
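The default repeat-value behavior described in the first bullets of this PR can be sketched as follows (illustrative; the function name is an assumption): a single scalar draw fills every entry of a vector-shaped variable.

```python
import numpy as np

# One scalar sample is broadcast to all entries of the vector variable.
def fill_vector_variable(scalar_sample, shape):
    return np.full(shape, scalar_sample)
```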

* exposed RNG to RAVEN python...swig (#630)

* exposed RNG to RAVEN python...swig

* fixed for now dist stoch environment

* added more missing files

* missing files more

* added last file

* edits

* edits

* updated xsd

* fix xsd for Markov categorical distribution

* remove duplicated lines

* remove duplicated lines

* edits

* edits

* edits

* Vector constants (#632)

* shape from node to attribute

* constants can now be vectors too

* necessary Sampler and Optimizer changes

* extracted common constant reading for sampler, optimizer

* including string custom vector vars

* vector constant works in rrr with optimizer
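The "shape from node to attribute" change above can be sketched with a hypothetical reader (names and XML layout assumed): the node text holds the values, and a comma-separated "shape" attribute gives the array shape.

```python
import numpy as np

# Parse a constant's text into an array, reshaped per the shape attribute.
def read_vector_constant(text, shape_attr=None):
    values = np.array([float(v) for v in text.split()])
    if shape_attr is not None:
        values = values.reshape([int(s) for s in shape_attr.split(',')])
    return values
```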

* fix data classifier for HistorySet

* fix typo

* delete trailing whitespace

* edits

* pre-merge review comments addressed: framework/DataObjects (#646)

* pre-merge review comments addressed for modules in framework/DataObjects, with the exception of merging DataObject into DataSet

* removed unnecessary hierarchical use of [:]

* remainder of comments addressed

* modified test for dataobject rework

* fixes

* edits after first round of review

* removed useless files

* cleaned files

* removed keyword

* modified docs

* edits

* edits

* edits

* edits

* rm dataFile for MarkovCategorical dist, fix code to handle MAAP5 interface without executable

* fix seeding for markov model

* fix test for data classifier

* resolve part of the comments

* update raven user manual

* capitalize the class name

* update docs

* update docs build

* rename files

* update FT tests

* update ET tests

* update class name

* fix data object

* merge documents for ETImportor PP

* merge documents of DataClassifier and FTImporter PP

* fix markov model with internal RNG class, and regold tests due to merge devel with issue #672

* first round of resolved comments

* delete whitespaces
mandd authored and alfoa committed Oct 18, 2018
1 parent a96f4be commit faf4cd5
Showing 177 changed files with 9,210 additions and 460 deletions.
24 changes: 20 additions & 4 deletions developer_tools/XSDSchemas/Distributions.xsd
@@ -19,10 +19,11 @@
<xsd:element name="Binomial" type="BinomialDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="Poisson" type="PoissonDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="Categorical" type="CategoricalDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="MarkovCategorical" type="MarkovCategoricalDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="Custom1D" type="Custom1DDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="NDInverseWeight" type="NDInverseWeightDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="NDCartesianSpline" type="NDCartesianSplineDistribution" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="MultivariateNormal" type="MultivariateNormalDistribution" minOccurs="0" maxOccurs="unbounded"/>
</xsd:choice>
</xsd:complexType>
<!-- *********************************************************************** -->
@@ -198,13 +199,28 @@
</xsd:extension>
</xsd:simpleContent>
</xsd:complexType>

<xsd:complexType name="CategoricalDistribution">
<xsd:sequence>
<xsd:element name="state" type="stateType" minOccurs="1" maxOccurs="200"/>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="verbosity" type="verbosityAttr" default="all"/>
</xsd:complexType>

<xsd:complexType name="MarkovStateType">
<xsd:attribute name="outcome" type="xsd:decimal" use="required"/>
<xsd:attribute name="index" type="xsd:positiveInteger" use="required"/>
</xsd:complexType>

<xsd:complexType name="MarkovCategoricalDistribution">
<xsd:sequence>
<xsd:element name="transition" type="floatList" minOccurs="1" maxOccurs="1"/>
<xsd:element name="workingDir" type="xsd:string" minOccurs="0" maxOccurs="1"/>
<xsd:element name="state" type="MarkovStateType" minOccurs="1" maxOccurs="200"/>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required"/>
</xsd:complexType>

<xsd:complexType name="Custom1DDistribution">
<xsd:all>
66 changes: 66 additions & 0 deletions doc/user_manual/ProbabilityDistributions.tex
@@ -903,6 +903,72 @@ \subsubsection{1-Dimensional Discrete Distributions.}
</Distributions>
\end{lstlisting}

\paragraph{Markov Categorical Distribution}
\label{subsec:markovCategorical}

The \textbf{MarkovCategorical} distribution is a discrete categorical distribution that describes
a random variable with $K$ possible outcomes, whose probabilities are the steady-state
probabilities of a given Markov model.
%
\begin{itemize}
\item \xmlNode{transition}, \xmlDesc{float, optional field}, the transition matrix of the given Markov model.
\item \xmlNode{dataFile}, \xmlDesc{string, optional xml node}, the path of the data file containing the transition matrix.
In this node, the following attribute should be specified:
\begin{itemize}
\item \xmlAttr{fileType}, \xmlDesc{string, optional field}, the type of the given data file; default is `csv'.
\end{itemize}
\nb Either \xmlNode{transition} or \xmlNode{dataFile} is required to provide the transition matrix.
\item \xmlNode{workingDir}, \xmlDesc{string, optional field}, the path of the working directory.
\item \xmlNode{state}, \xmlDesc{required xml node}, one node per outcome ($K$ nodes
in total); the $k$-th state node provides the probability of outcome $k$.
In each of these nodes, the following attributes should be specified:
\begin{itemize}
\item \xmlAttr{outcome}, \xmlDesc{float, required field}, the outcome value.
\item \xmlAttr{index}, \xmlDesc{integer, required field}, the index of the steady
state probability corresponding to the transition matrix.
\end{itemize}

\end{itemize}

\textbf{Example:}

\begin{lstlisting}[style=XML]
<Simulation>
...
<Distributions>
...
<MarkovCategorical name="x_dist">
<!--dataFile fileType='csv'>transitionFile</dataFile-->
<transition>
-1.1 0.8 0.7
0.8 -1.4 0.2
0.3 0.6 -0.9
</transition>
<state outcome='1' index='1'/>
<state outcome='2' index='2'/>
<state outcome='4' index='3'/>
</MarkovCategorical>
...
</Distributions>
...
</Simulation>
\end{lstlisting}
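The steady-state probabilities the example above assigns to its three outcomes can be reproduced with a short linear solve. This is a sketch under the assumption that the listed \xmlNode{transition} matrix is read as a continuous-time generator whose columns sum to zero (as the numbers above do), so the steady-state vector satisfies $Q\pi = 0$ with $\sum_i \pi_i = 1$:

```python
import numpy as np

# Generator matrix from the <transition> node above (columns sum to zero).
Q = np.array([[-1.1,  0.8,  0.7],
              [ 0.8, -1.4,  0.2],
              [ 0.3,  0.6, -0.9]])

# Replace one balance equation with the normalization constraint and solve.
A = np.vstack([Q[:-1, :], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
# pi ~ [0.404, 0.277, 0.319]: the probabilities of outcomes 1, 2 and 4.
```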



%%%%%% N-Dimensional Probability distributions
\subsection{N-Dimensional Probability Distributions}
8 changes: 6 additions & 2 deletions doc/user_manual/model.tex
@@ -937,7 +937,7 @@ \section{Models}
%<alias variable='internal_variable_name'>Material|Fuel|thermal_conductivity</alias>
\subsection{Code}
\label{subsec:models_code}
The \textbf{Code} model represents an external system
software employing a high fidelity physical model.
%
The link between RAVEN and the driven code is performed at run-time, through
@@ -971,6 +971,10 @@ \subsection{Code}
\begin{itemize}
\item \xmlNode{executable} \xmlDesc{string, required field} specifies the path
of the executable to be used.

\item \xmlNode{walltime} \xmlDesc{string, optional field} specifies the maximum
allowed run time of the code; if the run time exceeds the specified walltime,
the run is stopped and treated as if it had crashed.
%
\nb Either an absolute or relative path can be used.
\item \aliasSystemDescription{Code}
@@ -1535,7 +1539,7 @@ \subsection{EnsembleModel}
%
The user can specify as many \xmlNode{Output} (s) as needed. The optional \xmlNode{Output}s can be of
both classes ``DataObjects'' and ``Databases''
(e.g. \textit{PointSet}, \textit{HistorySet}, \textit{DataSet}, \textit{HDF5})
\nb \textbf{The \xmlNode{Output} (s) here specified MUST be listed in the Step in which the EnsembleModel is used.}
\end{itemize}
%
