Releases: vmware/weathervane
Release 2.2.0
Release 2.1.2
- Added a command line option to buildDockerImages.pl to skip deleting Docker images after the build (--deleteImages false); the default is true. See the example after this list.
- Used a selector in DeleteAllForCluster to resolve issue #251.
- Improved the DB load estimates.
- Updated the location of the Cassandra repo.
- Additional usability and bug fixes.
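A minimal sketch of the new build flag, assuming buildDockerImages.pl is invoked from the Weathervane root directory; any other flags your environment needs (registry credentials, proxy settings) are omitted:

```sh
# Keep the locally built Docker images after the build instead of deleting
# them. The --deleteImages flag and its false value come from this release
# note; the default (true) deletes the images after the build.
./buildDockerImages.pl --deleteImages false
```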
Release 2.1.1
- Added a new configuration size called small3, based on small2-applimit2 with a few tunings to the Kubernetes requests/limits. It is intended to replace small2 and small2-applimit2, which are now deprecated.
- Implemented better cleanup after building images.
- Added collection of k8s events log to runs and made some improvements to the workload output.
- Separated proxy variable into two for easier configuration: http_proxy and https_proxy.
- Fixed a bug with the synchronized stopping of multiple workloads.
- Additional usability and bug fixes.
Release 2.1.0
Weathervane 2.1.0 includes the following changes:
- Added parameters that affect Kubernetes pod placement with affinity and anti-affinity rules. The default values for these parameters change the behavior of Weathervane from version 2.0. Due to this, results should not be compared between 2.0 and 2.1.
- Added a new configuration size called small2-applimit2. This configuration uses whole number values for CPU limits and is appropriate for use with the Kubernetes CPU manager.
- Enhanced workload driver inter-node communication to improve scaling.
- Improved data loading, resulting in faster loads and greatly reducing unintended data corruption between runs.
- Added a number of specialized configuration options, including prepareConcurrency and abortFailingWorkload. These are discussed in the User's Guide (see the sketch after this list).
- Added CI/CD automation.
- Many minor bug fixes and improvements.
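As a rough illustration of where these options are set, the sketch below writes a configuration fragment containing the two parameters. The parameter names come from the release note; the JSON-style layout and the example values are assumptions, so check the User's Guide for the exact syntax and defaults.

```sh
# Hypothetical configuration fragment: parameter names are from the release
# note, the values and the surrounding structure are assumptions.
cat > wv-config-fragment.json <<'EOF'
{
  "prepareConcurrency" : 2,
  "abortFailingWorkload" : true
}
EOF
```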
Release 2.0.9
Changes in this release:
- Fixed a bug in the runWeathervane.pl script that led to an error caused by using a scalar instead of a list.
- Updated runWeathervane.pl so that the dockerNamespace can be overridden on the command line (see the example after this list).
- Weathervane 2.0 can now run in containers directly on Docker hosts without the use of Kubernetes. This use-case is not yet covered in the User's Guide, so if you want to use this feature please contact the Weathervane team.
- The runWeathervane.pl and buildDockerImages.pl scripts now properly use non-zero exit codes for error conditions. This is primarily useful for automation.
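A sketch of the command-line override; --configFile matches the quickstart invocation, while the exact spelling of the namespace override option is not given in this note and is assumed here, so verify it against the User's Guide:

```sh
# Hypothetical invocation: override the dockerNamespace value from the
# configuration file on the command line (override option name assumed).
./runWeathervane.pl --configFile=weathervane.config --dockerNamespace=my-registry-namespace
```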
Release 2.0.8
Changes in this release:
- Added the interval runStrategy, which allows the load to be varied over the course of a run by specifying a userLoadPath. This runStrategy is primarily useful for experimentation and demos, as it currently does not provide an overall performance metric (see the sketch after this list).
- Added the ability to have runs with the fixed or interval runStrategy run continuously without ending. This is useful for burn-in testing or demo purposes.
- Fixed an issue where the application pods were starting before the datamanager pod had terminated. This could lead to failedScheduling errors on resource-constrained clusters.
- Made collecting kubectl top data optional since it is not supported by default on many clusters.
- Updated the documentation for the new features.
- Added an Advanced Topic to the docs covering how to access the browser interface of the running application.
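A sketch of an interval-style run configuration. Only runStrategy and userLoadPath are named in these notes; the interval field names and values, and the JSON-style layout, are assumptions to be checked against the User's Guide.

```sh
# Hypothetical configuration fragment for an interval run. Only
# "runStrategy" and "userLoadPath" come from the release note; the
# interval fields (duration in seconds, users) are assumptions.
cat > weathervane.config.interval <<'EOF'
{
  "runStrategy" : "interval",
  "userLoadPath" : [
    { "duration" : 600, "users" : 200 },
    { "duration" : 600, "users" : 400 },
    { "duration" : 600, "users" : 300 }
  ]
}
EOF
```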
Release 2.0.7
Changes in this release:
- Added a parameter (--proxy) to the buildDockerImages.pl script that allows the user to specify an http(s) proxy to be used when fetching artifacts for the build (see the examples after this list).
- Added a check to runWeathervane.pl to ensure that Weathervane can provision persistent volumes in the defined storage classes before starting a run. This check can be skipped by using the --skipPvtest parameter.
- Made a change to the configuration of the cache disk for the nginx pods to ensure enough headroom for filesystem overhead. This change does not affect performance.
- Minor doc updates to clarify prerequisites.
- Fixed the (undocumented) interval run strategy. Documentation and support for this feature are planned for an upcoming release.
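Both new options are passed on the command line; a minimal sketch with placeholder values (the proxy URL and configuration file name are illustrative, and using --skipPvtest as a bare flag is an assumption):

```sh
# Fetch build artifacts through an HTTP(S) proxy (flag from this release note).
./buildDockerImages.pl --proxy http://proxy.example.com:3128

# Skip the persistent-volume provisioning check before a run
# (flag from this release note; bare-flag usage is assumed).
./runWeathervane.pl --configFile=weathervane.config --skipPvtest
```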
Release 2.0.6
Changes in this release:
- Fixes a bug introduced in the previous release that caused spurious error messages about port 80 on the nginx service.
Release 2.0.5
Changes in this release:
- Added a new option for the appIngressMethod parameter called nodeport-internal. This uses NodePort services for ingress from the workload drivers to the applications, but uses the internal IP addresses of the nodes. This differs from the nodeport option, which uses the external IP addresses of the nodes. Note that if your drivers and applications are running in different clusters, this option will only work if the nodes' internal addresses are routable from the driver cluster (see the sketch after this list).
- Increased the timeouts on the readiness probes for most services to avoid occasional non-ready status when the nodes are actually OK.
- Changed the Service type for RabbitMQ from NodePort to ClusterIP. The use of NodePort was a bug that might cause an error on some clusters.
- Removed port 80 from the Nginx Service. This port was not used, and its inclusion could use up load-balancer resources in some clusters.
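A sketch of selecting the new ingress method, assuming the JSON-style configuration format from the User's Guide; only the appIngressMethod parameter and its nodeport-internal value come from this release note:

```sh
# Hypothetical configuration fragment: only "appIngressMethod" and the
# "nodeport-internal" value are taken from the release note.
cat > wv-ingress-fragment.json <<'EOF'
{
  "appIngressMethod" : "nodeport-internal"
}
EOF
```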
Release 2.0.4
Changes in this release:
- Added a new xsmall configuration size. This configuration size is more performant than micro for the amount of resources it requires, and the pods still pack well into 2 CPU, 8 GiB nodes (see the sketch after this list).
- Replaced the small configuration size with a new small2 configuration. The small2 configuration uses less memory for similar performance. Note that the original small configuration will remain available, but its use is deprecated.
- Changed the way that the run harness interacts with the workload driver's controller and stats service so that no externally accessible service is needed for the workload driver cluster. This allows the workload driver to run in a cluster that doesn't support loadBalancer services when the SUT cluster is using loadBalancer services.
- Many bug fixes and usability enhancements, including:
- Improvements to pod cleanup with custom namespaces.
- Improved end-of-run deletion of Kubernetes constructs created by Weathervane.
- The Java applications will retry connections to RabbitMQ in the case of missed heartbeats.
- Fixed a bug that caused the data loading process to use the current context from the default Kubeconfig file, rather than the context specified in the Weathervane configuration file.
- Fixed an issue where the run harness could hang when issuing an HTTP Get to the workload controller, resulting in a hung run.
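A sketch of selecting one of the newer configuration sizes; the configurationSize parameter name is taken from the sample configuration files and, along with the JSON-style layout, should be treated as an assumption here:

```sh
# Hypothetical configuration fragment: selects the xsmall size described
# above; the parameter name and layout are assumptions.
cat > wv-size-fragment.json <<'EOF'
{
  "configurationSize" : "xsmall"
}
EOF
```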