Dev: Experiment Sequence
- Experiment is launched
- User starts the experiment
- `ExperimentRecovery` detects a saved experiment checkpoint and starts the experiment
- `ExperimentExecutor.execute` fires
- `ExperimentExecutor._pre_execute_check` fires (a minimal sketch of this check sequence follows the list)
  - check for persistence
  - check for no runs
  - check for email plugin
  - check for mass spec plugin
  - check first aliquot (sets the first aliquot and in the process identifies any conflicts; potentially requires human interaction)
  - setup dated repositories
  - check repositories
  - check managers
  - check dashboard
  - check memory
  - check automated run monitor
  - check pyscript runner
  - check for locked valves (requires human interaction)
  - set preceding blank (requires human interaction)
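
The pre-execute phase is essentially a short-circuiting sequence of checks: the first check that fails aborts the launch, and a few of the checks can block while waiting on user interaction. Below is a minimal sketch of that pattern; the class, helper names, and return conventions are illustrative assumptions, not Pychron's actual implementation.

```python
class PreExecuteCheckSketch:
    """Illustrative only: mirrors the short-circuiting check sequence above."""

    def _pre_execute_check(self):
        # each entry is (label, callable); the first check that returns a
        # falsy value aborts the launch
        checks = (
            ('persistence', self._check_persistence),
            ('no runs', self._check_no_runs),
            ('first aliquot', self._check_first_aliquot),    # may prompt the user
            ('managers', self._check_managers),
            ('memory', self._check_memory),
            ('locked valves', self._check_locked_valves),    # requires human interaction
            ('preceding blank', self._set_preceding_blank),  # requires human interaction
        )
        for label, check in checks:
            if not check():
                print(f'pre execute check failed: {label}')
                return False
        return True

    # stubbed checks; the real versions consult plugins, managers, the
    # database, and in some cases the user
    def _check_persistence(self): return True
    def _check_no_runs(self): return True
    def _check_first_aliquot(self): return True
    def _check_managers(self): return True
    def _check_memory(self): return True
    def _check_locked_valves(self): return True
    def _set_preceding_blank(self): return True
```
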
- If `pre_execute_check` is successful then continue
  - reset `ExperimentExecutor`
  - Spawn and start the execution thread `ExperimentExecutor._execute`
    - Check the `ExperimentScheduler` for delayed start (this needs checking; it seems redundant)
    - do `delay_before_analyses`
    - For each `experiment_queue`
      - check the experiment is alive, `ExperimentExecutor.is_alive()`
      - do `ExperimentExecutor._pre_queue_check`. This currently only checks that the tray is set correctly
      - do `ExperimentExecutor._execute_queue`
        - check the experiment is alive
        - reset stats
        - start the stats timer
        - trigger the `START_QUEUE` event
        - `reset_conditional_results()`
        - construct a generator to iterate the automated runs
        - setup a `consumable` context manager, used to handle run overlapping (see the overlap sketch at the end of this page)
        - Start an infinite loop, `while 1`, to start executing runs (sketched at the end of this page)
          - check alive
          - check if the queue has been modified; if so, construct a new generator
          - get the next spec (`AutomatedRunSpec`) from the generator
          - `continue` if `spec.skip`
          - check for scheduled stop ("True if the end time of the upcoming run is greater than the scheduled stop time")
          - do `ExperimentExecutor._pre_run_check` on the spec
            - check dashboard
            - check memory
            - check managers
              - check for errors (check for errors reported by the available managers)
            - check `AutomatedRunMonitor`
            - wait for save
          - determine if this is an overlapping run
          - if not overlapping:
            - if alive and `cnt < nruns` and not `is_first_analysis`, then delay
            - check alive after the delay
          - if alive and
          - construct the `AutomatedRun`
          - determine if we should overlap; if overlap
            - wait for the `extracting_run` to finish
            - spawn a new Thread to execute `_do_run`
            - wait for the overlap
            - add the thread and run to the `consumable` context manager
            - wait for the
          - if no overlap, `ExperimentExecutor._join_run`
            - `ExperimentExecutor._do_run`
            - tear down the run
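
Putting the loop above into code form may make the control flow easier to follow. The sketch below is a condensed, stand-alone illustration: `RunSpec`, `ExecutorSketch`, and their methods are placeholders, and only the overall shape (liveness checks, rebuilding the generator when the queue is modified, skipping runs, the scheduled stop, the pre-run check, and the overlap branch) is taken from the sequence described on this page.

```python
import threading
from collections import namedtuple

# stand-in for AutomatedRunSpec: just enough fields for the sketch
RunSpec = namedtuple('RunSpec', 'name skip overlap')


class ExecutorSketch:
    """Condensed, illustrative version of the _execute_queue loop above."""

    def __init__(self, runs):
        self.runs = list(runs)       # the editable queue of specs
        self.queue_modified = False  # set when the queue is edited mid-experiment
        self.alive = True
        self.overlapping = []        # stands in for the `consumable` context manager

    # --- checks, all stubbed -------------------------------------------------
    def is_alive(self):
        return self.alive

    def pre_run_check(self, spec):
        # dashboard / memory / managers / AutomatedRunMonitor / wait-for-save
        return True

    def scheduled_stop(self, spec):
        # "True if the end time of the upcoming run is greater than the
        # scheduled stop time"
        return False

    # --- run execution ---------------------------------------------------------
    def do_run(self, spec):
        print('executing', spec.name)

    def join_run(self, spec):
        self.do_run(spec)
        print('tear down', spec.name)

    def execute_queue(self):
        gen = iter(self.runs)
        while 1:
            if not self.is_alive():
                break
            if self.queue_modified:          # queue edited: rebuild the generator
                gen = iter(self.runs)
                self.queue_modified = False
            spec = next(gen, None)           # next spec from the generator
            if spec is None:
                break
            if spec.skip:
                continue
            if self.scheduled_stop(spec):
                break
            if not self.pre_run_check(spec):
                break
            if spec.overlap:
                # overlapping run: execute it in its own thread and keep looping
                t = threading.Thread(target=self.do_run, args=(spec,))
                t.start()
                self.overlapping.append((spec, t))
            else:
                self.join_run(spec)          # blocking: do the run, then tear it down
        for _, t in self.overlapping:        # collect any overlapped runs
            t.join()


if __name__ == '__main__':
    ExecutorSketch([RunSpec('a', False, False),
                    RunSpec('b', True, False),    # skipped
                    RunSpec('c', False, True)]).execute_queue()
```
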
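
The overlap mechanism itself, handing a run to its own thread and parking the pair in the `consumable` context manager so it can be collected later, is a small producer/consumer hand-off. The `consumable()` helper below is a hypothetical stand-in used only to illustrate the idea; it is not Pychron's actual consumable implementation.

```python
import threading
from contextlib import contextmanager


@contextmanager
def consumable():
    """Hypothetical stand-in: collect (run, thread) pairs for overlapping
    runs and join them all when the queue finishes."""
    pending = []
    try:
        yield pending
    finally:
        for run, thread in pending:
            thread.join()                 # wait for every overlapped run to finish
            print('collected overlapped run', run)


def do_run(run):
    print('measuring', run)               # stands in for ExperimentExecutor._do_run


with consumable() as pending:
    for run in ('run-1', 'run-2'):
        # hand the run's measurement off to its own thread and move on,
        # rather than blocking the queue until it finishes
        t = threading.Thread(target=do_run, args=(run,))
        t.start()
        pending.append((run, t))
```
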