Rw correlated arma #650
Conversation
* convert metric pp to use new data objects * clean up
* edits * edits * comments review-1 * edits * asDictionary works * added full loop cycle test
…into alfoa/dataobject-rework_poly_sum_exponential
* adaptive sparse grid, sobol working * Adaptive sampling, plus Dummy-based rlz updates from input, output * cleanup * added cluster labeling to outstream print, will be tested with optimizer * optimizers working, but need to fix reading from CSV before merging * stash * fixed prefix as numpy array * loading CSV correctly now, so optimizer can find optimal solns * cleanup * now handling infs and nans * cleanup * added the option to avoid to reprint the files in case their content did not change * reverted 101 tests modified by the conversion scripts (by mistake) * reverted other tests * reverted tests user guide * reverted all the other tests * Update beale.xml removed ```"``` * removed " * removed abstract method that has been removed from the XDataSet * fixed with respect to the new prefix strategy * fixed loading of dataset in case no metadata are found in the xml (finally statement in a try except is always executed) * fixed type in ROM causing a failure * fixed another typo that was making the Dummy.py model to fail * removed whitespace that was added from another commit * updated to use pd.isnull instead of np.isnan in unordered CSV differ * test files for differ
…//github.com/idaholab/raven into alfoa/dataobject-rework_poly_sum_exponential
* fix typo in rom * modify tests * convert cross validation pp * convert tests for cross validation pp * clean up * keep the options to dump the cross validation results for each fold * update conversion scripts for cross validation
* fixed history set reading from CSV if no XML present, or if present for that matter * fixed up PointSet and DataSet versions, too.
* typing is now working, but if more samples are taken after asDataset is called, and asDataset is called again, integers will be converted into floats. I submitted a stackOverflow question about this, and we will see. * fixed integer preservation using concat instead of merge
* added hierarchical methods * fixed multiline in databases * added sanity assertion * stash * mechanics in place for hierarchical * hierarchal working with collector OR data * a little more testing
* stash * stash * stash * topologicals working * cleanup * review comments
* convert importance rank pp to use the new data objects * fix comments
…olab#469) * fixed return code * starting reworking database * ok * ok * ok * ok * ok * moving ahead * speeded up group search and allgroup list * fixed subgroup * almost done * finished h5py_interface * working on IOStep * dfloat * ok finished database * finished not hirearchical HDF5 and fixed static ROMs * fixed time dep ROMs * addressed Paul's comments * fixed plot * fixed other 2 tests * fixed other 3 tests * regolded another test * initial edits * fixed write...but the history set does not work yet * fixed return code * Update README.md (idaholab#439) just removed few sentences * unit tests passing again * slicing now works * added abstract methods to object base * starting reworking database * histories working: * switched to comprehension * fname to fName * removing variables works * added addReal check * Correct setting of default pivots for history set (idaholab#448) * fixed default pivot param setting to be in order * put 'time' back in since we handle that now * ok * ok * ok * ok * edits * ok * moving ahead * speeded up group search and allgroup list * fixed subgroup * progressive CSV writing with appends instead of rewriting each iteration for multiruns * cleanup * almost done * finished h5py_interface * working on IOStep * dfloat * ok finished database * finished not hirearchical HDF5 and fixed static ROMs * fixed time dep ROMs * check data alignment works (idaholab#452) * fixed no-scalar bug, albeit in a non-traditional way. (idaholab#455) * addressed Paul's comments * pylint problems addressed (idaholab#451) * pylint problems addressed * Update TestXPointSet.py * malformed comment blocks in 2 tests * new unordered CSV differ, tested on test_Lorentz * convert code to use new data objects (idaholab#457) * initial convert of code for new data object * fix bugs in dataset and continue convert code for accepting the new data object * additional converting * clean up * Sampler Restarts (idaholab#458) * started work * sampler restarts working * Adaptive Samplers (plus Dummy IO fix) (idaholab#459) * adaptive sparse grid, sobol working * Adaptive sampling, plus Dummy-based rlz updates from input, output * cleanup * fixed prefix as numpy array * convert basic statistics pp to use the new data object (idaholab#460) * convert basicStatistics to use the new data objects * convert tests of basicStatisitics * convert more tests * clean up * move addMetaKeys to localInputAndChecks * resolve comments * fix checkIndexAlignment in DataSet * add unit test for checkIndexAlignment * Closes idaholab#464 * added test * convert metric pp to use new data objects (idaholab#462) * convert metric pp to use new data objects * clean up * fixed the tests * added the new gold file for the new test * Mods for new dataObject (idaholab#463) * edits * edits * comments review-1 * edits * asDictionary works * added full loop cycle test * fixed plot * fixed other 2 tests * fixed other 3 tests * Update TestXDataSet.py * Update Dummy.py * other tests converted * readded deleted file * almost done * ok * fixed tests * Fixed LimitSurface Postprocessors, added conversion script, added documentation * ok * adapted LimitSurfaceSearch sampler and moved directory for more clearity * almost finished safest point * ok * fixed safest point PP * fixed adaptive batch * modified documentation for SafestPoint * removed Outputhold for Optimizer since is not used and not working * fixed metadata fro samples * removed all * modified unit test * addressed Diego's comments * convert tests (idaholab#480) * removed commented part in 
user manual * addrresed Conjiang's * remove .DS_Store * convert external post processor to use new DataObjects (idaholab#479) * convert external pp to use the new data objects * regold tests * fix the DataObjects * address comments * starting reworking database * ok * ok * finished h5py_interface * working on IOStep * dfloat * ok finished database * finished not hirearchical HDF5 and fixed static ROMs * addressed Paul's comments * fixed plot * fixed other 2 tests * fixed other 3 tests * regolded another test * unit tests passing again * slicing now works * starting reworking database * histories working: * removing variables works * ok * ok * progressive CSV writing with appends instead of rewriting each iteration for multiruns * cleanup * finished h5py_interface * working on IOStep * dfloat * ok finished database * finished not hirearchical HDF5 and fixed static ROMs * check data alignment works (idaholab#452) * fixed no-scalar bug, albeit in a non-traditional way. (idaholab#455) * pylint problems addressed (idaholab#451) * pylint problems addressed * Update TestXPointSet.py * malformed comment blocks in 2 tests * new unordered CSV differ, tested on test_Lorentz * Adaptive Samplers (plus Dummy IO fix) (idaholab#459) * adaptive sparse grid, sobol working * Adaptive sampling, plus Dummy-based rlz updates from input, output * cleanup * fixed prefix as numpy array * convert basic statistics pp to use the new data object (idaholab#460) * convert basicStatistics to use the new data objects * convert tests of basicStatisitics * convert more tests * clean up * move addMetaKeys to localInputAndChecks * resolve comments * fix checkIndexAlignment in DataSet * add unit test for checkIndexAlignment * Closes idaholab#464 * convert metric pp to use new data objects (idaholab#462) * convert metric pp to use new data objects * clean up * fixed the tests * fixed other 2 tests * other tests converted * readded deleted file * almost done * ok * fixed tests * Fixed LimitSurface Postprocessors, added conversion script, added documentation * ok * adapted LimitSurfaceSearch sampler and moved directory for more clearity * almost finished safest point * ok * fixed safest point PP * fixed adaptive batch * modified documentation for SafestPoint * removed Outputhold for Optimizer since is not used and not working * fixed metadata fro samples * addressed Diego's comments * removed commented part in user manual * addrresed Conjiang's * remove .DS_Store * convert external post processor to use new DataObjects (idaholab#479) * convert external pp to use the new data objects * regold tests * fix the DataObjects * address comments
* merged Diego's commits for InterfacePostProcessor * Fixed OutStreams printing and regolder foulty ROMs results * modified ARMA tests * fixed reading from CSV of HistorySet (correctly considering the pivotParameter) * fixed * TEMPORARY fix of printing of HISTORY SET * fixed ROM, ARMA, DATABASE (loader), Multiple usage of database * fixed multi target rom and rom trainer test * added the possibility to ask for unfolded XArrays in the realization(...) method * ok * modified other two tests * addressed Paul and Congjiang's comments (no interface PP related) * removed comment that does no apply anymore in HistorySet * fixed another test database * addressed comments * addressed comments 2 * edits * fixed test file
Job Test mac on 2cc6653: invalidated by @PaulTalbot-INL (unusual build error)
Due to the changes referenced in #597, the VARMA tests only check that the outputs are created; the differences between statistics of VARMAs generated on two different operating systems are significant, and need to be controlled through an independent RNG. UPDATE: by controlling the measurement and state shocks, we can check the statistics but not the individual samples, which is better than only checking output creation.
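As a rough illustration of the statistics-only check described above (not RAVEN's actual test code; the function name, tolerances, and example distribution are hypothetical), the idea is to fix the RNG that drives the shocks and then compare sample statistics rather than sample-by-sample values:

```python
import numpy as np

def checkVarmaStatistics(samples, expectedMean, expectedCov, rtol=1e-1, atol=5e-2):
  """
    Compares the statistics of generated samples against expected values,
    instead of comparing individual sample values (which differ by platform).
    @ In, samples, np.ndarray, (nSamples, nVars) array of generated realizations
    @ In, expectedMean, np.ndarray, expected mean vector
    @ In, expectedCov, np.ndarray, expected covariance matrix
    @ In, rtol, float, optional, relative tolerance of the comparison
    @ In, atol, float, optional, absolute tolerance of the comparison
    @ Out, checkVarmaStatistics, bool, True if the statistics match
  """
  sampleMean = samples.mean(axis=0)
  sampleCov = np.cov(samples, rowvar=False)
  return bool(np.allclose(sampleMean, expectedMean, rtol=rtol, atol=atol) and
              np.allclose(sampleCov, expectedCov, rtol=rtol, atol=atol))

# fix the RNG so the measurement/state shocks are reproducible, then
# check statistics rather than individual sampled values
rng = np.random.RandomState(42)
noise = rng.multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, 0.5], [0.5, 2.0]],
                                size=10000)
print(checkVarmaStatistics(noise, np.zeros(2), np.array([[1.0, 0.5], [0.5, 2.0]])))
```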
…arma noise generation I think
Minimal changes requested
framework/Distributions.py
Outdated
""" | ||
Function to get random numbers | ||
@ In, args, dict, args | ||
@ In, size, int, number of entries to return |
@ In, size, int, optional, number of entries to return. If None, only a single random number is returned
done
  rvsValue = self.ppf(random())
else:
  rvsValue = [self.rvs() for _ in range(args[0])]
  # TODO to speed up, do this on the C side instead of in python
If you remind me of this, I will add it the next time crow is touched.
I thought about this; I'll keep it in mind.
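For context, here is a minimal, self-contained sketch of how the optional `size` argument discussed above could behave. This is not the actual framework/Distributions.py implementation: the class name is hypothetical, and a standard-normal `ppf` stands in for the wrapped distribution.

```python
import numpy as np
from scipy.stats import norm

class DistributionSketch:
  """Illustrative stand-in for a RAVEN distribution wrapper, not the real class."""

  def ppf(self, x):
    """
      Inverse CDF; a standard normal is used here as a placeholder.
      @ In, x, float, probability in (0, 1)
      @ Out, ppf, float, corresponding quantile
    """
    return norm.ppf(x)

  def rvs(self, size=None):
    """
      Function to get random numbers
      @ In, size, int, optional, number of entries to return; if None, a single
        random number is returned
      @ Out, rvsValue, float or list, requested random value(s)
    """
    if size is None:
      # single draw: invert the CDF at a uniform random number
      rvsValue = self.ppf(np.random.random())
    else:
      # multiple draws: repeat the single-draw path
      # TODO as noted above, this loop could move to the C (crow) side for speed
      rvsValue = [self.rvs() for _ in range(size)]
    return rvsValue

dist = DistributionSketch()
print(dist.rvs())        # one float
print(dist.rvs(size=5))  # list of five floats
```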
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Created on May 8, 2018
Hey Paul,
Can you note here that these files come from the split of the original SupervisedLearning.py module, along with the author and date (if retrievable) of the initial implementation? (Just to keep track of the "maturity" of the "ROM".)
done
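A hypothetical module header illustrating the kind of provenance note being requested (the author field is a placeholder; this is not the actual file content):

```python
"""
  Created on May 8, 2018
  @author: <original author, if retrievable>

  This module was split out of the original SupervisedLearning.py module;
  see that module's history for the date and author of the initial implementation.
"""
```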
A few changes... not a big deal.
good to go
For Change Control Board: Change Request Review
The following review must be completed by an authorized member of the Change Control Board.
Checklist passed... Issue closure checklist passed... Good to go. Merging...
This PR can wait until the rework branch is merged into devel, assuming that doesn't take too long.
Tests for all of the above.
Closes #703.