```
Exception in thread XXX++1:
Traceback (most recent call last):
  File "/home/XXX/.conda/envs/raven_libraries/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/XXX/.conda/envs/raven_libraries/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/home/XXX/project_TREAT/raven/framework/Runners/SharedMemoryRunner.py", line 136, in <lambda>
    self.thread = InterruptibleThread(target = lambda q, *arg : q.append(self.functionToRun(*arg)),
  File "/home/XXX/project_TREAT/raven/framework/Models/Code.py", line 479, in evaluateSample
    inputFiles = self.createNewInput(myInput, samplerType, **kwargs)
  File "/home/XXX/project_TREAT/raven/framework/Models/Code.py", line 383, in createNewInput
    newInput = self.code.createNewInput(newInputSet,self.oriInputFiles,samplerType,**copy.deepcopy(kwargs))
  File "/home/XXX/project_TREAT/raven/framework/CodeInterfaces/RAVEN/RAVENInterface.py", line 253, in createNewInput
    raise IOError(self.printTag+' ERROR: The nodefile "'+str(nodeFileToUse)+'" does not exist!')
OSError: RAVEN INTERFACE ERROR: The nodefile "/home/XXX/./node_0" does not exist!
```
@PaulTalbot-INL It looks like the problem is in the MPI mode's modifyInfo (when the new batch size is created and a new node file is generated): the information about the NodeParameter does not appear to be passed back.
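The failure path above can be sketched in isolation. This is a minimal, hypothetical reconstruction of the guard in RAVENInterface.createNewInput (the function name create_new_input and its signature here are illustrative, not the actual RAVEN API): if the node file that the MPI batch setup was supposed to write is missing, the interface fails fast with the same error seen in the traceback.

```python
import os

def create_new_input(node_file_to_use):
    """Hypothetical sketch of the nodefile guard: raise the same
    error seen in the traceback when the file was never written.
    In Python 3, IOError is an alias of OSError, which is why the
    raise IOError(...) in the source surfaces as OSError."""
    if node_file_to_use is not None and not os.path.isfile(node_file_to_use):
        raise IOError('RAVEN INTERFACE ERROR: The nodefile "'
                      + str(node_file_to_use) + '" does not exist!')
    return node_file_to_use

# A path that was never created triggers the reported failure.
try:
    create_new_input("/tmp/definitely_missing_dir/node_0")
except OSError as err:
    print("raised:", err)
```

This matches the report: with batchsize = 1 the node file is never generated, so the existence check fires before the sampled run can start.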
Issue Description
Setting batchsize = 1 in the parallel description leads to an execution error (the "node" files are not created).
Input
Error
while the following completes without error:
For Change Control Board: Issue Review
This review should occur before any development is performed as a response to this issue.
For Change Control Board: Issue Closure
This review should occur when the issue is imminently going to be closed.