Phase 6b Scope of Work

Clint Daniels edited this page Jun 22, 2021 · 13 revisions

Table of Contents

Task 1: Project Management

...

Deliverable(s):

  • ...

top

Task 2: Vehicle Type Model

...

Deliverable(s):

  • ...

top

Task 3: Software Development Tools

Consultant will propose and implement a prototype systematic multi-platform performance benchmarking approach. The performance benchmarking approach will explicitly support:

  • Testing across multiple machines, including documentation of machine configurations such as RAM, CPU, operating system, and other relevant attributes.
  • Testing across different versions of ActivitySim, including integration with ActivitySim’s public commit history on GitHub in order to run compatible benchmarks for older or intermediate versions of the code, which helps identify which code changes triggered particular shifts in benchmark performance.
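
The machine-configuration documentation described in the first bullet could be captured programmatically. The sketch below is a minimal illustration using only Python's standard library; the field names are hypothetical, and a real harness would likely record more (e.g. installed RAM, BLAS configuration):

```python
import json
import os
import platform

def machine_config():
    """Collect basic machine attributes to store alongside benchmark results.

    The set of fields here is illustrative, not a specification; RAM is
    omitted because the standard library does not expose it portably.
    """
    return {
        "machine": platform.node(),
        "os": f"{platform.system()} {platform.release()}",
        "cpu": platform.processor() or platform.machine(),
        "python": platform.python_version(),
        "cpu_count": os.cpu_count(),
    }

print(json.dumps(machine_config(), indent=2))
```

Storing a record like this with each benchmark result makes cross-machine comparisons traceable.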

Prototype benchmarks will be developed using a subset of existing Travis tests for the current version of ActivitySim; these benchmarks are not expected to be compatible with ActivitySim code from before the current release. In addition to replicating Travis tests, the benchmarking approach is intended to support performance benchmarking of any reproducible test, including the performance of individual components and submodels. Developing specific benchmarks for individual components or submodels, however, is left for a future task and is not included in this scope of work.
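As a rough sketch of what benchmarking "any reproducible test" might look like (the function and field names here are hypothetical, and a production suite would more likely use a dedicated tool such as airspeed velocity), a test callable can simply be timed over several repeats:

```python
import statistics
import time

def benchmark(func, *, repeats=5):
    """Time a reproducible test callable over several repeats.

    Returns per-repeat wall-clock times plus simple summary statistics,
    which could be stored alongside the machine configuration and the
    ActivitySim commit hash to track performance across versions.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    return {
        "repeats": repeats,
        "min": min(times),
        "mean": statistics.mean(times),
        "times": times,
    }

# Example: benchmark a trivial stand-in for an ActivitySim submodel test.
result = benchmark(lambda: sum(range(100_000)), repeats=3)
print(f"min={result['min']:.6f}s over {result['repeats']} repeats")
```

Reporting the minimum rather than the mean is a common convention for wall-clock benchmarks, since it is least affected by background load.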

Consultant will enhance the system for distribution and installation of future ActivitySim releases, in consultation with the ActivitySim Project Management Committee and other ActivitySim contributors. Consultant will migrate the default ActivitySim installation to use the conda-forge package repository. Consultant will ensure that every ActivitySim dependency (both required and optional) is available in conda-forge, submitting to conda-forge any packages not currently in that system (e.g. openmatrix). Consultant will provide assistance to the ActivitySim Project Management Committee with respect to making any additional ActivitySim releases, but will not be the party primarily responsible for managing ActivitySim releases. Other than ensuring compatibility with conda-forge, responsibility for managing general ActivitySim releases is not included in this task.
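For illustration, a conda-forge-based installation of the kind described above might look like the following; the package name and channel setup are assumptions about the eventual distribution, not commitments of this scope:

```shell
# Make conda-forge the preferred channel, then install ActivitySim and
# its dependencies from it. Exact package names may differ.
conda config --add channels conda-forge
conda config --set channel_priority strict
conda install activitysim
```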

Consultant will submit a pull request to remove the “orca.py” file from the ActivitySim repository’s master branch, and will edit all other files in the repository to use “orca.py” as a regular library dependency installed from conda-forge, instead of the previously integrated copy of that file.

Deliverable(s):

  • Email and online support to contributors and agency users, and pull requests
  • Updated source code as necessary
  • Updated documentation as necessary
  • Implementation of a prototype benchmarking suite, and support for agency user installations and benchmark runs
  • Updated ActivitySim distribution system
  • Pull request to remove “orca.py” from ActivitySim

top

Task 4: Visualization

Consultant will work with the ActivitySim partners to develop a long-term roadmap documenting the vision, purpose, and priorities for visualization tools within the ActivitySim product. Consultant will document ActivitySim stakeholders’ purposes and needs for visualization tools, focusing on concrete visualization use cases within the partner agencies. Consultant will identify a common set of workflows and visualization needs that meets the needs of the broadest number of existing, and potential future, stakeholders. This long-term roadmap will identify specific tasks that can be executed sequentially or in parallel to develop a complete suite of visualization tools, and will define the priority and level of effort for each item.

Consultant will scan the market for potential visualization frameworks that address the identified requirements, including resources already deployed within the partner agencies. This scan will include both open-source tools and proprietary technologies, and will review the identified solutions for fitness to purpose, extensibility, and implementation effort. Consultant will work with the ActivitySim partner agencies to recommend a preferred framework or solution.

Informed by the identified preferred solution, Consultant will assess ActivitySim’s existing data pipeline architecture and identify revisions and modifications required to support users’ ability to organize, run, store, analyze, and visualize the ActivitySim outputs in an automated way, considering the different contexts in which model data is used, such as model debugging, model calibration/validation, project analysis and scenario comparisons. Consultant will implement any necessary data pipeline tasks (e.g., data naming conventions, data schemas) to support automated visualization covered under this scope of work.
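A minimal sketch of the kind of data naming convention mentioned above, which could let a visualization tool discover runs automatically (the directory layout and file names here are hypothetical, not an ActivitySim convention):

```python
from pathlib import Path

def output_path(base_dir, scenario, year, table):
    """Build a predictable location for a model output table.

    A consistent layout like <base>/<scenario>/<year>/<table>.csv lets a
    visualization tool locate and compare runs (e.g. for scenario
    comparisons or calibration checks) without manual wiring.
    """
    return Path(base_dir) / scenario / str(year) / f"{table}.csv"

p = output_path("runs", "baseline", 2035, "trips")
print(p.as_posix())  # runs/baseline/2035/trips.csv
```

Whatever convention is adopted, the key property is that paths are computable from run metadata rather than chosen ad hoc.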

Consultant will implement an initial version of a visualization platform that can be used to support detailed model development, debugging/calibration/validation, and results presentation. At a minimum this platform will present the user with a way to organize and access their model run outputs, and will show a “topsheet” of essential statistics, charts, and summaries that provide a run overview. Consultant will test, iterate, document, and deploy the tool as part of the ActivitySim repository, and will identify a small subset of ActivitySim partners to participate in the initial iterations, rolling the tool out to the larger consortium as the feature set becomes more compelling.
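The “topsheet” idea can be illustrated with a tiny sketch that reduces a trip list to a few headline statistics; the record fields and categories are hypothetical, and a real tool would read model run outputs and render charts rather than print a dictionary:

```python
from collections import Counter

def topsheet(trips):
    """Summarize a list of trip records into headline statistics.

    Each trip is a dict such as {"mode": "walk", "distance": 1.2}.
    """
    by_mode = Counter(t["mode"] for t in trips)
    drive_distance = sum(t["distance"] for t in trips if t["mode"] == "drive")
    return {
        "trips": len(trips),
        "trips_by_mode": dict(by_mode),
        "drive_distance": drive_distance,
    }

demo = [
    {"mode": "drive", "distance": 5.0},
    {"mode": "walk", "distance": 0.8},
    {"mode": "drive", "distance": 2.5},
]
print(topsheet(demo))
```

The same summary function can serve both debugging (one run at a time) and scenario comparison (summaries side by side), which is why the topsheet is a natural first deliverable.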


Deliverable(s):

  • GitHub Wiki: Agency visualization requirements
  • GitHub Wiki: Roadmap for model data pipeline improvements and data visualization development
  • Pull Request: Code reflecting data pipeline improvements to support visualization tool and documentation
  • Pull Request: Visualization tool, deployed for an existing ActivitySim implementation

top
