
Dev: Experiment Sequence


Initiation

  1. Experiment is launched, either because
    1. the user starts the experiment, or
    2. ExperimentRecovery detects a saved experiment checkpoint and restarts the experiment
  2. ExperimentExecutor.execute fires
  3. ExperimentExecutor._pre_execute_check fires
    1. check for persistence
    2. check for no runs
    3. check for email plugin
    4. check for mass spec plugin
    5. check first aliquot (sets the first aliquot and, in the process, identifies any conflicts; potentially requires human interaction)
    6. setup dated repositories
    7. check repositories
    8. check managers
    9. check dashboard
    10. check memory
    11. check automated run monitor
    12. check pyscript runner
    13. check for locked valves (requires human interaction)
    14. set preceding blank (requires human interaction)
  4. If _pre_execute_check is successful, continue
  5. reset ExperimentExecutor
  6. Spawn and start the execution thread, ExperimentExecutor._execute
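
Below is a minimal sketch of this initiation flow. The helper names (_reset, _check_managers, etc.) are assumptions standing in for the real Traits-based implementation, which runs the full list of checks above.

```python
from threading import Thread


class ExperimentExecutor:
    def execute(self):
        # 3. run every pre-execute check before touching any hardware
        if not self._pre_execute_check():
            return

        # 5. clear state left over from a previous execution
        self._reset()

        # 6. run the queues on a background thread so the UI stays responsive
        t = Thread(target=self._execute)
        t.start()
        return t

    def _pre_execute_check(self):
        # two of the fourteen checks listed above; each returns False to veto
        # the experiment, and some block waiting on human interaction
        checks = (self._check_managers, self._check_locked_valves)
        return all(check() for check in checks)

    # stubs so the sketch runs; the real methods do the actual work
    def _check_managers(self):
        return True

    def _check_locked_valves(self):
        return True

    def _reset(self):
        pass

    def _execute(self):
        pass
```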

Pre Execution

  1. Check the ExperimentScheduler for a delayed start (this needs checking; it seems redundant)
  2. do delay_before_analyses
  3. For each experiment_queue
    1. check experiment is alive ExperimentExecutor.is_alive()
    2. do ExperimentExecutor._pre_queue_check. This currently only checks that the tray is set correctly
    3. do ExperimentExecutor._execute_queue
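
A rough sketch of the pre-execution phase, assuming hypothetical helper names (wait_for_scheduled_start, pre_queue_check) on the executor:

```python
import time


def pre_execute(executor):
    # 1. honor any delayed start from the ExperimentScheduler
    executor.wait_for_scheduled_start()

    # 2. configurable settling delay before the first analysis
    time.sleep(executor.delay_before_analyses)

    for queue in executor.experiment_queues:
        # 3.1 stop if the user canceled or an error killed the executor
        if not executor.is_alive():
            break

        # 3.2 currently only verifies the tray is set correctly
        if not executor.pre_queue_check(queue):
            break

        # 3.3 run the queue (see Execution below)
        executor.execute_queue(queue)
```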

Execution

  1. reset stats
  2. start the stats timer
  3. trigger START_QUEUE event
  4. reset_conditional_results()
  5. construct a generator to iterate the automated runs
  6. set up a consumable context manager, used to handle run overlapping
  7. Start an infinite loop (while 1) to execute runs
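
The setup reads roughly like the sketch below. `consumable`, `stats`, `trigger`, and `new_runs_generator` are stand-in names, not the actual pychron API, and `execute_step` is sketched in the Execution loop section below; together the two snippets form one module.

```python
from contextlib import contextmanager


@contextmanager
def consumable(items):
    # stand-in for the consumable context manager that tracks overlapped
    # runs; on exit it drains whatever is still executing
    try:
        yield items
    finally:
        for thread, _run in items:
            thread.join()


def execute_queue(executor, queue):
    executor.stats.reset()                # 1. reset stats
    executor.stats.start_timer()          # 2. start the stats timer
    executor.trigger('start_queue')       # 3. trigger START_QUEUE event
    executor.reset_conditional_results()  # 4.

    # 5. runs are pulled lazily so queue edits can be picked up mid-execution;
    # nruns feeds the inter-run delay logic (step 8 of the loop)
    executor.rgen, nruns = queue.new_runs_generator()

    with consumable([]) as con:           # 6. handles run overlapping
        while 1:                          # 7. the execution loop
            if not execute_step(executor, con):
                break
```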

Execution loop

  1. check alive
  2. check if the queue has been modified; if so, construct a new generator
  3. get the next spec (AutomatedRunSpec) from the generator
  4. continue if spec.skip
  5. check for scheduled stop "True if the end time of the upcoming run is greater than the scheduled stop time"
  6. do ExperimentExecutor._pre_run_check on the spec
    1. check dashboard
    2. check memory
    3. check managers
    4. check for errors (check for errors reported by the available managers)
    5. check AutomatedRunMonitor
    6. wait for save
  7. determine if this is an overlapping run
  8. if not overlapping:
    1. if alive and cnt < nruns and not is_first_analysis, then delay
    2. check alive after delay
  9. construct the AutomatedRun
  10. determine if we should overlap; if overlapping (see the loop sketch after this list):
    1. wait for the extracting_run to finish
    2. spawn a new Thread to execute _do_run
    3. wait for the overlap
    4. add the thread and run to the consumable context manager
  11. if not overlapping, ExperimentExecutor._join_run
    1. ExperimentExecutor._do_run
    2. tear down the run
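
Putting the loop together, one pass might look like the sketch below; the helper names are assumptions and error handling is omitted.

```python
from threading import Thread


def execute_step(executor, con):
    if not executor.is_alive():                # 1. check alive
        return False

    if executor.queue_modified:                # 2. rebuild the generator
        executor.rgen, _ = executor.queue.new_runs_generator()
        executor.queue_modified = False

    try:
        spec = next(executor.rgen)             # 3. next AutomatedRunSpec
    except StopIteration:
        return False

    if spec.skip:                              # 4. skip flagged runs
        return True

    if executor.scheduled_stop_reached(spec):  # 5. scheduled stop
        return False

    if not executor.pre_run_check(spec):       # 6. dashboard, memory, ...
        return False

    if not executor.is_overlapping():          # 7-8. inter-run delay
        executor.delay_between_runs()
        if not executor.is_alive():
            return False

    run = executor.make_run(spec)              # 9. construct AutomatedRun

    if executor.should_overlap(run):           # 10. overlapped execution
        executor.wait_for_extracting_run()
        t = Thread(target=executor.do_run, args=(run,))
        t.start()
        executor.wait_for_overlap(run)
        con.append((t, run))                   # hand off to the consumer
    else:                                      # 11. blocking execution
        executor.join_run(run)                 # _do_run, then teardown

    return True
```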

Do Run
