Releases: derrickoswald/CIMApplication

OneOfN-2.11-2.3.1-2.4.0

07 Nov 08:42
Pre-release

Per the notes in NE-285, this version of OneOfN removes equipment inside substations:

To remove the unwanted Substation elements from the GridLAB-D file:

  • revert the code to the previous version (roll back the changes to the CIMTopologyProcessor)
  • after reading in the CIM file:
    • identify the feeders: medium voltage (1e3 < V < 50e3) Connector objects in substations, i.e. those with PSRType == "PSRType_Substation"
    • create an RDD of external ACLineSegment elements: PSRType == "PSRType_Underground" || PSRType == "PSRType_Overhead"
    • create an RDD of elements in substations: PSRType == "PSRType_Substation" || (PSRType == "PSRType_Unknown" && EquipmentContainer != null)
    • create an RDD of the EquipmentContainer id values of these elements
    • delete all CIM elements whose EquipmentContainer is in that RDD, excluding the feeder objects from the first step and the cables from the second step
  • execute the CIMNetworkTopologyProcessor function to create TopologicalNode and TopologicalIsland
  • proceed as before to extract the feeder GridLAB-D models - but now with the reduced CIM
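The filtering steps above can be sketched with plain Python dicts and lists standing in for CIM classes and Spark RDDs. This is an illustrative sketch only: the field names (cls, voltage, PSRType, EquipmentContainer, id) and the sample records are assumptions, not the actual CIM model.

```python
# Illustrative sketch of the substation-equipment reduction described above,
# using plain Python dicts and lists in place of CIM classes and Spark RDDs.
# Field names (cls, voltage, PSRType, EquipmentContainer, id) are assumptions.

def is_feeder(e):
    # medium voltage Connector in a substation
    return (e["cls"] == "Connector"
            and 1e3 < e["voltage"] < 50e3
            and e["PSRType"] == "PSRType_Substation")

def is_external_cable(e):
    return (e["cls"] == "ACLineSegment"
            and e["PSRType"] in ("PSRType_Underground", "PSRType_Overhead"))

def in_substation(e):
    return (e["PSRType"] == "PSRType_Substation"
            or (e["PSRType"] == "PSRType_Unknown"
                and e.get("EquipmentContainer") is not None))

def reduce_cim(elements):
    # container ids that hold substation equipment
    containers = {e["EquipmentContainer"] for e in elements
                  if in_substation(e) and e.get("EquipmentContainer")}
    # drop elements inside those containers, keeping feeders and external cables
    return [e for e in elements
            if e.get("EquipmentContainer") not in containers
            or is_feeder(e) or is_external_cable(e)]

elements = [
    {"id": "feeder1", "cls": "Connector", "voltage": 16e3,
     "PSRType": "PSRType_Substation", "EquipmentContainer": "STA12"},
    {"id": "breaker1", "cls": "Breaker", "voltage": 16e3,
     "PSRType": "PSRType_Unknown", "EquipmentContainer": "STA12"},
    {"id": "cable1", "cls": "ACLineSegment", "voltage": 16e3,
     "PSRType": "PSRType_Underground", "EquipmentContainer": "STA12"},
    {"id": "load1", "cls": "EnergyConsumer", "voltage": 400.0,
     "PSRType": "PSRType_Unknown", "EquipmentContainer": None},
]

reduced = reduce_cim(elements)  # breaker1 is removed; the rest survive
```

In the real pipeline each predicate would be a filter over an RDD, but the set-membership logic is the same: build the set of containers holding substation equipment, then exclude their contents while whitelisting feeders and external cables.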
OneOfN 2.11-2.3.1-2.4.0
Usage: OneOfN [options] [<CIM> <CIM> ...]

Creates GridLAB-D .glm models for all medium voltage (N5 network) feeder service areas for one-of-N analysis.

  --help                   prints this usage text
  --version                Scala: 2.11, Spark: 2.3.1, OneOfN: 2.4.0
  --quiet                  suppress informational messages [false]
  --master MASTER_URL      local[*], spark://host:port, mesos://host:port, yarn []
  --opts k1=v1,k2=v2       Spark options [spark.graphx.pregel.checkpointInterval=8,spark.serializer=org.apache.spark.serializer.KryoSerializer]
  --storage_level <value>  storage level for RDD serialization [MEMORY_AND_DISK_SER]
  --deduplicate            de-duplicate input (striped) files [false]
  --three                  use three phase computations [false]
  --tbase <value>          temperature assumed in CIM file (°C) [20.0000]
  --temp <value>           low temperature for maximum fault (°C) [20.0000]
  --logging <value>        log level, one of ALL,DEBUG,ERROR,FATAL,INFO,OFF,TRACE,WARN [OFF]
  --checkpoint <dir>       checkpoint directory on HDFS, e.g. hdfs://... []
  --workdir <dir>          shared directory (HDFS or NFS share) with scheme (hdfs:// or file:/) for work files []
  <CIM> <CIM> ...          CIM rdf files to process


$ spark-submit --master spark://sandbox:7077 --conf spark.driver.memory=2g --conf spark.executor.memory=2g /opt/code/OneOfN-2.11-2.3.1-2.4.0-jar-with-dependencies.jar --logging INFO --checkpoint hdfs://sandbox:8020/checkpoint hdfs://sandbox:8020/bkw_cim_export_azimi_with_topology.rdf
$ hdfs dfs -get -p /simulation
$ chmod --recursive 775 simulation
$ cd simulation
$ for filename in STA*; do echo "$filename"; pushd "$filename/input_data" > /dev/null; ./gen; cd ..; gridlabd "$filename"; popd > /dev/null; done

GridLAB-D-2.11-2.1.2-2.3.9

13 Feb 09:15
Pre-release

Minor update:

  • Remove 'lazy' declarations causing NullPointerExceptions when logging error messages.
  • Set Spark version to 2.1.2.

GridLAB-D-2.11-2.2.0-2.3.9

12 Feb 09:50
Pre-release

Minor update:

  • Remove 'lazy' declarations causing NullPointerExceptions when logging error messages.

GridLAB-D-2.11-2.2.0-2.3.8

30 Jan 12:44

Same as GridLAB-D-2.11-2.1.2-2.3.8 release except:

  • built against Spark 2.2.0
  • uses CIMReader 2.11-2.2.0-2.7.0, the latest release supporting Spark 2.2.0

GridLAB-D-2.11-2.1.2-2.3.8

25 Jan 09:43

Maintenance release for production.

From GridLAB-D-2.11-2.1.2-2.3.6 cherry-pick:

  • NE-258: five minute interval for glm recorders (which was mistakenly not included in the 2.1.2-2.3.6 tag)
  • NE-260: power at the transformer not correctly compared

Updated to version 2.3.8 to avoid a conflict with the master branch.

GridLAB-D-2.11-2.1.2-2.3.6

30 Nov 13:11

Maintenance release.

  • change to Spark version 2.1.2
  • upgrade to CIMReader-2.11-2.1.2-2.6.0
  • NE-258: five minute interval for glm recorders

Note the change to the version numbering scheme: scala_version-spark_version-cimapplication_version, similar to CIMReader, to allow releases for multiple Spark versions. Thus 2.11-2.1.2-2.3.6 means Scala 2.11.x, Spark 2.1.2, CIMApplication 2.3.6.
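Since the three components are separated by hyphens (and only use dots internally), a release tag decomposes mechanically; a minimal sketch, with a hypothetical helper name:

```python
# Hypothetical helper: unpack a release tag of the form
# scala_version-spark_version-application_version, e.g. "2.11-2.1.2-2.3.6".
def parse_tag(tag):
    scala, spark, application = tag.split("-")
    return {"scala": scala, "spark": spark, "application": application}

info = parse_tag("2.11-2.1.2-2.3.6")
# info["scala"] == "2.11", info["spark"] == "2.1.2", info["application"] == "2.3.6"
```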

CIMApplication-2.11-2.2.0-2.3.5

23 Nov 15:49

This release provides a Docker image that allows using an external Spark cluster.
This is currently only tested with the sandbox cluster found in CIMReader.

To start the images, add the Spark master host (sandbox) to the start-docker command:

cd $code/CIMReader/src/test/resources
docker-compose up --scale worker=2 &
cd $code/CIMApplication
./start-docker sandbox

Major changes:

  • changed the version numbering to scala_version-spark_version-cimapplication_version, similar to CIMReader, to allow releases for multiple Spark versions; thus 2.11-2.2.0-2.3.5 means Scala 2.11.x, Spark 2.2.0, CIMApplication 2.3.5
  • upgrade to CIMReader-2.11-2.2.0-2.5.0
  • added themes and legends to the map view; out-of-the-box it includes basic, voltage level, topological island, and service status themes
  • added support for CGMES sample files (requires about processing and normalization using the CIMReader and un-checking the 0-based point sequence View option)
  • added the ability to load CSV files into Spark directly; there are many Load options for separator, header, etc. - and, most importantly, timestamp formats for time-series ingestion
  • added the ability to save an SQL query as a View, which can be used in subsequent queries
  • added Cassandra
    • the spark-docker Docker image has Cassandra installed and running
    • the cimapplication Docker image is based on the spark-docker image; Cassandra is installed but not running - it is used only for cqlsh: to create the initial schema and for command line access
    • the query tab has been extended to be able to query Cassandra and to save the results of a (very carefully crafted) Spark query to the measured_value_by_day Cassandra table
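The timestamp-format concern in time-series CSV ingestion can be illustrated in plain Python; the column names, separator, and format string below are hypothetical, and CIMApplication itself hands the equivalent options (separator, header, timestamp format) to Spark's CSV reader rather than parsing by hand:

```python
import csv
import io
from datetime import datetime

# Hypothetical time-series CSV; the column names (mrid, time, value),
# the ";" separator, and the timestamp format are illustrative only.
data = """mrid;time;value
HAS123;2017-11-01 00:15:00;1.5
HAS123;2017-11-01 00:30:00;2.0
"""

def load_series(text, fmt="%Y-%m-%d %H:%M:%S"):
    # parse each row into (id, timestamp, reading), applying the format
    # string to the time column - the step the Load options configure
    rows = csv.DictReader(io.StringIO(text), delimiter=";")
    return [(r["mrid"], datetime.strptime(r["time"], fmt), float(r["value"]))
            for r in rows]

series = load_series(data)
```

Getting the format string wrong makes every row fail to parse, which is why the timestamp format is called out as the most important of the Load options.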

CIMApplication-2.3.4

01 Nov 19:36

First release of a Docker image with a CIMApplication server (TomEE).
To import the image, use:

docker load --input cimapplication-2.3.4

See scripts start-docker and shell-docker for ideas on how to execute the image.
Browse to http://localhost:/cimweb/ to begin.

Major changes:

  • completely revised the CIMApplication GUI:
    • added many RESTful services
    • JavaScript improvements:
      • Mapbox 0.40.1
      • improved symbology
      • trace from Terminal
      • trace user specified depth
      • only show parsed attributes
  • stability improvements to CIMConnector
  • substantial refactoring of GridLAB-D:
    • unbundling players and recorders from base GLMGenerator
    • introduce TransformerSet for ganged transformers
    • refactored SWING node handling
    • MediumVoltage and Export bug fixes
  • upgrade to CIMReader-2.11-2.2.0-2.4.0

GridLAB-D-2.3.2

21 Sep 09:40

Comment out the optional network topology processing check - i.e. make topology processing mandatory - to fix a NullPointerException.

GridLAB-D-2.3.1

14 Sep 08:04

Minor changes:

  • fix for NE-242: fuses with status Open appear in GridLAB-D with status Closed
  • switch back to Spark version 2.1.1 for this release
  • fix a bug introduced by the fix for NE-242
  • conditionally do topological processing based on the existence of TopologicalNode objects