diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 000000000000..73feceef13e1 --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,78 @@ + + +## What does this PR do? + + + +## Why is it important? + + + +## Checklist + + + +- [ ] My code follows the style guidelines of this project +- [ ] I have commented my code, particularly in hard-to-understand areas +- [ ] I have made corresponding changes to the documentation +- [ ] I have made corresponding changes to the default configuration files +- [ ] I have added tests that prove my fix is effective or that my feature works + +## Author's Checklist + + +- [ ] + +## How to test this PR locally + + + +## Related issues + + +- + +## Use cases + + + +## Screenshots + + + +## Logs + + diff --git a/CHANGELOG-developer.asciidoc b/CHANGELOG-developer.asciidoc index 607a20c01463..15efed8ed5df 100644 --- a/CHANGELOG-developer.asciidoc +++ b/CHANGELOG-developer.asciidoc @@ -12,6 +12,79 @@ other Beats should be migrated. Note: This changelog was only started after the 6.3 release. +=== Beats version 7.5.1 +https://github.com/elastic/beats/compare/v7.5.0..v7.5.1[Check the HEAD diff] + +=== Beats version 7.5.0 +https://github.com/elastic/beats/compare/v7.4.1..v7.5.0[Check the HEAD diff] + +==== Breaking changes + +- Build docker and kubernetes features only on supported platforms. {pull}13509[13509] +- Need to register new processors to be used in the JS processor in their `init` functions. {pull}13509[13509] + +==== Added + +- Compare event by event in `testdata` framework to avoid sorting problems. {pull}13747[13747] + +=== Beats version 7.4.1 +https://github.com/elastic/beats/compare/v7.4.0..v7.4.1[Check the HEAD diff] + +=== Beats version 7.4.0 +https://github.com/elastic/beats/compare/v7.3.1..v7.4.0[Check the HEAD diff] + +==== Breaking changes + +- For "metricbeat style" generated custom beats, the mage target `GoTestIntegration` has changed to `GoIntegTest` and `GoTestUnit` has changed to `GoUnitTest`. {pull}13341[13341] + +==== Added + +- Add ClientFactory to TCP input source to add SplitFunc/NetworkFuncs per client. {pull}8543[8543] +- Introduce beat.OutputChooses publisher mode. {pull}12996[12996] +- Ensure that beat.Processor, beat.ProcessorList, and processors.ProcessorList are compatible and can be composed more easily. {pull}12996[12996] +- Add support to close beat.Client via beat.CloseRef (a subset of context.Context). {pull}13031[13031] +- Add checks for types and formats used in fields definitions in `fields.yml` files. {pull}13188[13188] +- Makefile included in generator copies files from beats repository using `git archive` instead of cp. {pull}13193[13193] + +=== Beats version 7.3.2 +https://github.com/elastic/beats/compare/v7.3.1..v7.3.2[Check the HEAD diff] + +=== Beats version 7.3.1 +https://github.com/elastic/beats/compare/v7.3.0..v7.3.1[Check the HEAD diff] + +=== Beats version 7.3.0 +https://github.com/elastic/beats/compare/v7.2.1..v7.3.0[Check the HEAD diff] + +==== Added + +- Add new option `IgnoreAllErrors` to `libbeat.common.schema` for skipping fields that failed while converting. {pull}12089[12089] + +=== Beats version 7.2.1 +https://github.com/elastic/beats/compare/v7.2.0..v7.2.1[Check the HEAD diff] + +=== Beats version 7.2.0 +https://github.com/elastic/beats/compare/v7.1.1..v7.2.0[Check the HEAD diff] + +==== Breaking changes + +- Move Fields from package libbeat/common to libbeat/mapping.
{pull}11198[11198] + +==== Added + +- Metricset generator generates beta modules by default now. {pull}10657[10657] +- The `beat.Event` accessor methods now support `@metadata` keys. {pull}10761[10761] +- Assertion for documented fields in tests fails if any of the fields in the tested event is documented as an alias. {pull}10921[10921] +- Support for Logger in the Metricset base instance. {pull}11106[11106] +- Filebeat modules can now use ingest pipelines in YAML format. {pull}11209[11209] +- Prometheus helper for metricbeat now contains a `Namespace` field for `prometheus.MetricsMappings`. {pull}11424[11424] +- Update Jinja2 version to 2.10.1. {pull}11817[11817] +- Reduce idxmgmt.Supporter interface and rework export commands to reuse logic. {pull}11777[11777],{pull}12065[12065],{pull}12067[12067],{pull}12160[12160] +- Update urllib3 version to 1.24.2. {pull}11930[11930] +- Add libbeat/common/cleanup package. {pull}12134[12134] +- Only load minimal template if no fields are provided. {pull}12103[12103] +- Add new option `IgnoreAllErrors` to `libbeat.common.schema` for skipping fields that failed while converting. {pull}12089[12089] +- Deprecate setup cmds for `template` and `ilm-policy`. Add new setup cmd for `index-management`. {pull}12132[12132] + === Beats version 7.1.1 https://github.com/elastic/beats/compare/v7.1.0..v7.1.1[Check the HEAD diff] diff --git a/CHANGELOG-developer.next.asciidoc b/CHANGELOG-developer.next.asciidoc index db5cf33c5d9b..feaa59c5decd 100644 --- a/CHANGELOG-developer.next.asciidoc +++ b/CHANGELOG-developer.next.asciidoc @@ -64,3 +64,4 @@ The list below covers the major changes between 7.0.0-rc2 and master only. - Added a `default_field` option to fields in fields.yml to offer a way to exclude fields from the default_field list. {issue}14262[14262] {pull}14341[14341] - `supported-versions.yml` can be used in metricbeat python system tests to obtain the build args for docker compose builds. {pull}14520[14520] - Fix dropped errors in the tests for the metricbeat Azure module. {pull}13773[13773] +- New mage target for Functionbeat: generate pkg folder to make manager easier. {pull}15580[15580] diff --git a/CHANGELOG.asciidoc b/CHANGELOG.asciidoc index 6035ec84d14a..5adbffbdee2e 100644 --- a/CHANGELOG.asciidoc +++ b/CHANGELOG.asciidoc @@ -3,6 +3,890 @@ :issue: https://github.com/elastic/beats/issues/ :pull: https://github.com/elastic/beats/pull/ +[[release-notes-7.5.1]] +=== Beats version 7.5.1 +https://github.com/elastic/beats/compare/v7.5.0...v7.5.1[View commits] + +==== Bugfixes + +*Affecting all Beats* + +- Fix `proxy_url` option in Elasticsearch output. {pull}14950[14950] +- Fix bug with potential concurrent reads and writes from event.Meta map by Kafka output. {issue}14542[14542] {pull}14568[14568] + +*Filebeat* + +- Change iis url path grok pattern from URIPATH to NOTSPACE. {issue}12710[12710] {pull}13225[13225] {issue}7951[7951] {pull}13378[13378] {pull}14754[14754] +- Fix azure filesets test files. {issue}14185[14185] {pull}14235[14235] +- Update Logstash module's Grok patterns to support Logstash 7.4 logs. {pull}14743[14743] + +*Metricbeat* + +- Fix perfmon expanding counter path/adding counter to query when OS language is not english. {issue}14684[14684] {pull}14800[14800] +- Add extra check on `ignore_non_existent_counters` flag if the PdhExpandWildCardPathW returns no errors but does not expand the counter path successfully in windows/perfmon metricset. {pull}14797[14797] +- Fix rds metricset from reporting same values for different instances.
{pull}14702[14702] +- Closing handler after verifying the registry key in diskio metricset. {issue}14683[14683] {pull}14759[14759] +- Fix docker network stats when multiple interfaces are configured. {issue}14586[14586] {pull}14825[14825] +- Fix ListMetrics pagination in aws module. {issue}14926[14926] {pull}14942[14942] +- Fix CPU count in docker/cpu in cases where no `online_cpus` are reported {pull}15070[15070] + +[[release-notes-7.5.0]] +=== Beats version 7.5.0 +https://github.com/elastic/beats/compare/v7.4.1...v7.5.0[View commits] + +==== Breaking changes + +*Affecting all Beats* + +- By default, all Beats-created files and folders will have a umask of 0027 (on POSIX systems). {pull}14119[14119] + +*Filebeat* + +*Heartbeat* + +- JSON/Regex checks against HTTP bodies will only consider the first 100MiB of the HTTP body to prevent excessive memory usage. {pull}14223[14223] + +*Metricbeat* + +==== Bugfixes + +*Affecting all Beats* + +- Disable `add_kubernetes_metadata` if no matchers found. {pull}13709[13709] +- Better wording for xpack beats when the _xpack endpoint is not reachable. {pull}13771[13771] +- Kubernetes watcher at `add_kubernetes_metadata` fails with StatefulSets {pull}13905[13905] +- Fix panics that could result from invalid TLS certificates. This can affect Beats that connect over TLS or Beats that accept connections over TLS and validate client certificates. {pull}14146[14146] +- Fix memory leak in kubernetes autodiscover provider and add_kubernetes_metadata processor happening when pods are terminated without sending a delete event. {pull}14259[14259] +- Fix kubernetes `metaGenerator.ResourceMetadata` when parent reference controller is nil {issue}14320[14320] {pull}14329[14329] + +*Auditbeat* + +- Socket dataset: Fix start errors when IPv6 is disabled on the kernel. {issue}13953[13953] {pull}13966[13966] + +*Filebeat* + +- Fix a denial of service flaw when parsing malformed DSA public keys in Go. +If {filebeat} is configured to accept incoming TLS connections with client +authentication enabled, a remote attacker could cause the Beat to stop +processing events. (CVE-2019-17596) See https://www.elastic.co/community/security/ +- Fix timezone parsing of rabbitmq module ingest pipelines. {pull}13879[13879] +- Fix conditions and error checking of date processors in ingest pipelines that use `event.timezone` to parse dates. {pull}13883[13883] +- Fix timezone parsing of Cisco module ingest pipelines. {pull}13893[13893] +- Fix timezone parsing of logstash module ingest pipelines. {pull}13890[13890] +- Fix timezone parsing of iptables, mssql and panw module ingest pipelines. {pull}13926[13926] +- Fixed increased memory usage with large files when multiline pattern does not match. {issue}14068[14068] +- Fix azure fields names. {pull}14098[14098] {pull}14132[14132] +- Fix calculation of `network.bytes` and `network.packets` for bi-directional netflow events. {pull}14111[14111] +- Accept '-' as http.response.body.bytes in apache module. {pull}14137[14137] +- Fix timezone parsing of MySQL module ingest pipelines. {pull}14130[14130] +- Improve error message in s3 input when handleSQSMessage failed. {pull}14113[14113] +- Fix race condition in S3 input plugin. {pull}14359[14359] + +*Heartbeat* + +- Fix storage of HTTP bodies to work when JSON/Regex body checks are enabled. {pull}14223[14223] + +*Metricbeat* + +- Fix a denial of service flaw when parsing malformed DSA public keys in Go. 
+If {metricbeat} is configured to accept incoming TLS connections with client +authentication enabled, a remote attacker could cause the Beat to stop +processing events. (CVE-2019-17596) See https://www.elastic.co/community/security/ +- PdhExpandWildCardPathW will not expand counter paths in 32 bit windows systems, workaround will use a different function. {issue}12590[12590] {pull}12622[12622] +- Fix `docker.cpu.system.pct` calculation by using the reported number online cpus instead of the number of metrics per cpu. {pull}13691[13691] +- Change kubernetes.event.message to text {pull}13964[13964] +- Fix performance counter values for windows/perfmon metricset.{issue}14036[14036] {pull}14039[14039] {pull}14108[14108] +- Add FailOnRequired when applying schema and fix metric names in mongodb metrics metricset. {pull}14143[14143] +- Convert indexed ms-since-epoch timestamp fields in `elasticsearch/ml_job` metricset to ints from float64s. {issue}14220[14220] {pull}14222[14222] +- Fix ARN parsing function to work for ELB ARNs. {pull}14316[14316] +- Update azure configuration example. {issue}14224[14224] +- Limit some of the error messages to the logs only {issue}14317[14317] {pull}14327[14327] +- Fix cloudwatch metricset with names and dimensions in config. {issue}14376[14376] {pull}14391[14391] +- Fix marshaling of ms-since-epoch values in `elasticsearch/cluster_stats` metricset. {pull}14378[14378] + +*Packetbeat* + +- Fix parsing of the HTTP host header when it contains a port or an IPv6 address. {pull}14215[14215] + + +==== Added + +*Affecting all Beats* + +- Fail with error when autodiscover providers have no defined configs. {pull}13078[13078] +- Add autodetection mode for add_docker_metadata and enable it by default in included configuration files{pull}13374[13374] +- Add autodetection mode for add_kubernetes_metadata and enable it by default in included configuration files. {pull}13473[13473] +- Use less restrictive API to check if template exists. {pull}13847[13847] +- Do not check for alias when setup.ilm.check_exists is false. {pull}13848[13848] +- Add support for numeric time zone offsets in timestamp processor. {pull}13902[13902] +- Add condition to the config file template for add_kubernetes_metadata {pull}14056[14056] +- Marking Central Management deprecated. {pull}14018[14018] +- Add `keep_null` setting to allow Beats to publish null values in events. {issue}5522[5522] {pull}13928[13928] +- Add shared_credential_file option in aws related config for specifying credential file directory. {issue}14157[14157] {pull}14178[14178] +- Ensure that init containers are no longer tailed after they stop. {pull}14394[14394] +- Libbeat HTTP's Server can listen to a unix socket using the `unix:///tmp/hello.sock` syntax. {pull}13655[13655] +- Libbeat HTTP's Server can listen to a Windows named pipe using the `npipe:///hello` syntax. {pull}13655[13655] +- Adding new `Enterprise` license type to the licenser. {issue}14246[14246] + +*Auditbeat* + +- Socket: Add DNS enrichment. {pull}14004[14004] + +*Filebeat* + +- Add support for virtual host in Apache access logs {pull}12778[12778] +- Update CoreDNS module to populate ECS DNS fields. {issue}13320[13320] {pull}13505[13505] +- Parse query steps in PostgreSQL slowlogs. {issue}13496[13496] {pull}13701[13701] +- Add filebeat azure module with activitylogs, auditlogs, signinlogs filesets. {pull}13776[13776] +- Add support to set the document id in the json reader. {pull}5844[5844] +- Add input httpjson. 
{issue}13545[13545] {pull}13546[13546] +- Filebeat Netflow input: Remove beta label. {pull}13858[13858] +- Remove `event.timezone` from events that don't need it in some modules that support log formats with and without timezones. {pull}13918[13918] +- Add ExpandEventListFromField config option in the kafka input. {pull}13965[13965] +- Add ELB fileset to AWS module. {pull}14020[14020] +- Add module for MISP (Malware Information Sharing Platform). {pull}13805[13805] +- Add filebeat azure module with activitylogs, auditlogs, signinlogs filesets. {pull}13776[13776] {pull}14033[14033] {pull}14107[14107] +- Add support for all the ObjectCreated events in S3 input. {pull}14077[14077] +- Add `source.bytes` and `source.packets` for uni-directional netflow events. {pull}14111[14111] +- Add Kibana Dashboard for MISP module. {pull}14147[14147] +- Add support for gzipped files in S3 input {pull}13980[13980] +- Add Filebeat Azure Dashboards {pull}14127[14127] + + +*Heartbeat* +- Add non-privileged icmp on linux and darwin(mac). {pull}13795[13795] {issue}11498[11498] +- Allow `hosts` to be used to configure http monitors {pull}13703[13703] + +*Metricbeat* + +- Add refresh list of perf counters at every fetch {issue}13091[13091] +- Add proc/vmstat data to the system/memory metricset on linux {pull}13322[13322] +- Add support for NATS version 2. {pull}13601[13601] +- Add `docker.cpu.*.norm.pct` metrics for `cpu` metricset of Docker Metricbeat module. {pull}13695[13695] +- Add `instance` label by default when using Prometheus collector. {pull}13737[13737] +- Add azure module. {pull}13196[13196] {pull}13859[13859] {pull}13988[13988] +- Add Apache Tomcat module {pull}13491[13491] +- Add ECS `container.id` and `container.runtime` to kubernetes `state_container` metricset. {pull}13884[13884] +- Add `job` label by default when using Prometheus collector. {pull}13878[13878] +- Add `state_resourcequota` metricset for Kubernetes module. {pull}13693[13693] +- Add tags filter in ec2 metricset. {pull}13872[13872] {issue}13145[13145] +- Add cloud.account.id and cloud.account.name into events from aws module. {issue}13551[13551] {pull}13558[13558] +- Add `metrics_path` as known hint for autodiscovery {pull}13996[13996] +- Leverage KUBECONFIG when creating k8s client. {pull}13916[13916] +- Add ability to filter by tags for cloudwatch metricset. {pull}13758[13758] {issue}13145[13145] +- Release cloudwatch, s3_daily_storage, s3_request, sqs and rds metricset as GA. {pull}14114[14114] {issue}14059[14059] +- Add `elasticsearch/enrich` metricset. {pull}14243[14243] {issue}14221[14221] +- Add new dashboards for Azure vms, vm guest metrics, vm scale sets {pull}14000[14000] + +*Functionbeat* + +- Make `bulk_max_size` configurable in outputs. {pull}13493[13493] + +*Winlogbeat* + +- Fill `event.provider`. {pull}13937[13937] +- Add support for user management events to the Security module. {pull}13530[13530] + +==== Deprecated + +*Metricbeat* + +- `kubernetes.container.id` field for `state_container` is deprecated in favour of ECS `container.id` and `container.runtime`. 
{pull}13884[13884] + +[[release-notes-7.4.1]] +=== Beats version 7.4.1 +https://github.com/elastic/beats/compare/v7.4.0...v7.4.1[View commits] + +==== Breaking changes + +*Affecting all Beats* + +*Auditbeat* + +*Filebeat* + +*Heartbeat* + +*Journalbeat* + +*Metricbeat* + +*Packetbeat* + +*Winlogbeat* + +*Functionbeat* + +==== Bugfixes + +*Affecting all Beats* + +- Recover from panics in the javascript process and log details about the failure to aid in future debugging. {pull}13690[13690] +- Make the script processor concurrency-safe. {issue}13690[13690] {pull}13857[13857] + +*Auditbeat* + +*Filebeat* + +- Fixed early expiration of templates (Netflow v9 and IPFIX). {pull}13821[13821] +- Fixed bad handling of sequence numbers when multiple observation domains were exported by a single device (Netflow V9 and IPFIX). {pull}13821[13821] +- cisco asa and ftd filesets: Fix parsing of message 106001. {issue}13891[13891] {pull}13903[13903] +- Fix merging of fields specified in global scope with fields specified under an input's scope. {issue}3628[3628] {pull}13909[13909] +- Fix delay in enforcing close_renamed and close_removed options. {issue}13488[13488] {pull}13907[13907] +- Fix missing netflow fields in index template. {issue}13768[13768] {pull}13914[13914] +- Fix cisco module's asa and ftd filesets parsing of domain names where an IP address is expected. {issue}14034[14034] + +*Heartbeat* + +*Journalbeat* + +*Metricbeat* + +- Mark Kibana usage stats as collected only if API call succeeds. {pull}13881[13881] + +*Packetbeat* + +*Winlogbeat* + +*Functionbeat* + +==== Added + +*Affecting all Beats* + +*Auditbeat* + +*Filebeat* + +*Heartbeat* + +*Journalbeat* + +*Metricbeat* + +*Packetbeat* + +*Functionbeat* + +*Winlogbeat* + +==== Deprecated + +*Affecting all Beats* + +*Filebeat* + +*Heartbeat* + +*Journalbeat* + +*Metricbeat* + +*Packetbeat* + +*Winlogbeat* + +*Functionbeat* + +==== Known Issue + +*Journalbeat* + +[[release-notes-7.4.0]] +=== Beats version 7.4.0 +https://github.com/elastic/beats/compare/v7.3.1...v7.4.0[View commits] + +==== Breaking changes + +*Affecting all Beats* + +- Update to Golang 1.12.7. {pull}12931[12931] +- Remove `in_cluster` configuration parameter for Kubernetes; in-cluster configuration is now used only if no other kubeconfig is specified. {pull}13051[13051] + +*Auditbeat* + +- Socket dataset: New implementation using Kprobes for finer-grained monitoring and UDP support. {pull}13058[13058] + +*Filebeat* + +- Fix a race condition in the TCP input when closing the client socket. {pull}13038[13038] +- cisco/asa fileset: Renamed log.original to event.original and cisco.asa.list_id to cisco.asa.rule_name. {pull}13286[13286] +- cisco/asa fileset: Fix parsing of 302021 message code. {pull}13476[13476] + +*Metricbeat* + +- Add new Dashboard for PostgreSQL database stats {pull}13187[13187] +- Add new dashboard for CouchDB database {pull}13198[13198] +- Add new dashboard for Ceph cluster stats {pull}13216[13216] +- Add new dashboard for Aerospike database stats {pull}13217[13217] +- Add new dashboard for Couchbase cluster stats {pull}13212[13212] +- Add new dashboard for Prometheus server stats {pull}13126[13126] +- Add statistic option into cloudwatch metricset. If there is no statistic method specified, default is to collect Average, Sum, Maximum, Minimum and SampleCount. {issue}12370[12370] {pull}12840[12840] +- Fix rds metricset dashboard. {pull}13721[13721] + +*Functionbeat* + +- Separate management and functions in Functionbeat.
{pull}12939[12939] + +==== Bugfixes + +*Affecting all Beats* + +- ILM: Use GET instead of HEAD when checking for alias to expose detailed error message. {pull}12886[12886] +- Fix unexpected stops on docker autodiscover when a container is restarted before `cleanup_timeout`. {issue}12962[12962] {pull}13127[13127] +- Fix some incorrect types and formats in field.yml files. {pull}13188[13188] +- Load DLLs only from Windows system directory. {pull}13234[13234] {pull}13384[13384] +- Fix mapping for kubernetes.labels and kubernetes.annotations in add_kubernetes_metadata. {issue}12638[12638] {pull}13226[13226] +- Fix case insensitive regular expressions not working correctly. {pull}13250[13250] + +*Auditbeat* + +- Host dataset: Export Host fields to gob encoder. {pull}12940[12940] + +*Filebeat* + +- Fix filebeat autodiscover fileset hint for container input. {pull}13296[13296] +- Fix incorrect references to index patterns in AWS and CoreDNS dashboards. {pull}13303[13303] +- Fix timezone parsing of system module ingest pipelines. {pull}13308[13308] +- Fix timezone parsing of elasticsearch module ingest pipelines. {pull}13367[13367] +- Change iis url path grok pattern from URIPATH to NOTSPACE. {issue}12710[12710] {pull}13225[13225] {issue}7951[7951] {pull}13378[13378] +- Add timezone information to apache error fileset. {issue}12772[12772] {pull}13304[13304] +- Fix timezone parsing of nginx module ingest pipelines. {pull}13369[13369] +- Allow path variables to be used in files loaded from modules.d. {issue}13184[13184] +- Fix incorrect field references in envoyproxy dashboard {issue}13420[13420] {pull}13421[13421] + +*Heartbeat* + +- Fix integer comparison on JSON responses. {pull}13348[13348] + +*Metricbeat* + +- Ramdisk is not filtered out when collecting disk performance counters in diskio metricset {issue}12814[12814] {pull}12829[12829] +- Fix redis key metricset dashboard references to index pattern. {pull}13303[13303] +- Check if fields in DBInstance is nil in rds metricset. {pull}13294[13294] {issue}13037[13037] +- Fix silent failures in kafka and prometheus module. {pull}13353[13353] {issue}13252[13252] +- Fix module-level fields in Kubernetes metricsets. {pull}13433[13433] {pull}13544[13544] +- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426] +- In the elasticsearch/node_stats metricset, if xpack is enabled, make parsing of ES node load average optional as ES on Windows doesn't report load average. {pull}12866[12866] +- Print errors that were being omitted in vSphere metricsets. {pull}12816[12816] +- Fix issue with aws cloudwatch module where dimensions and/or namespaces that contain space are not being parsed correctly {pull}13389[13389] +- Fix reporting empty events in cloudwatch metricset. {pull}13458[13458] +- Fix data race affecting config validation at startup. {issue}13005[13005] + +*Packetbeat* + +- Fix parsing the extended RCODE in the DNS parser. {pull}12805[12805] + +*Functionbeat* + +- Fix Cloudwatch logs timestamp to use timestamp of the log record instead of when the record was processed {pull}13291[13291] +- Look for the keystore under the correct path. {pull}13332[13332] + +==== Added + +*Affecting all Beats* + +- Add support for reading the `network.iana_number` field by default to the community_id processor. {pull}12701[12701] +- Add a check so alias creation explicitely fails if there is an index with the same name. {pull}13070[13070] +- Update kubernetes watcher to use official client-go libraries. 
{pull}13051[13051] +- Add support for unix epoch time values in the `timestamp` processor. {pull}13319[13319] +- add_host_metadata is now GA. {pull}13148[13148] +- Add an `ignore_missing` configuration option the `drop_fields` processor. {pull}13318[13318] +- Add `registered_domain` processor for deriving the registered domain from a given FQDN. {pull}13326[13326] +- Add support for RFC3339 time zone offsets in JSON output. {pull}13227[13227] +- Added `monitoring.cluster_uuid` setting to associate Beat data with specified ES cluster in Stack Monitoring UI. {pull}13182[13182] + +*Filebeat* + +- Add netflow dashboards based on Logstash netflow. {pull}12857[12857] +- Parse more fields from Elasticsearch slowlogs. {pull}11939[11939] +- Update module pipelines to enrich events with autonomous system fields. {pull}13036[13036] +- Add module for ingesting IBM MQ logs. {pull}8782[8782] +- Add S3 input to retrieve logs from AWS S3 buckets. {pull}12640[12640] {issue}12582[12582] +- Add aws module s3access metricset. {pull}13170[13170] {issue}12880[12880] +- Update Suricata module to populate ECS DNS fields and handle EVE DNS version 2. {issue}13320[13320] {pull}13329[13329] +- Update PAN-OS fileset to use the ECS NAT fields. {issue}13320[13320] {pull}13330[13330] +- Add fields to the Zeek DNS fileset for ECS DNS. {issue}13320[13320] {pull}13324[13324] +- Add container image in Kubernetes metadata {pull}13356[13356] {issue}12688[12688] +- Add module for ingesting Cisco FTD logs over syslog. {pull}13286[13286] + +*Heartbeat* + +- Record HTTP body metadata and optionally contents in `http.response.body.*` fields. {pull}13022[13022] + +*Metricbeat* + +- Add Kubernetes proxy dashboard to Kubernetes module {pull}12734[12734] +- Add Kubernetes controller manager dashboard to Kubernetes module {pull}12744[12744] +- Add metrics to kubernetes apiserver metricset. {pull}12922[12922] +- Add Kubernetes scheduler dashboard to Kubernetes module {pull}12749[12749] +- Collect client provided name for rabbitmq connection. {issue}12851[12851] {pull}12852[12852] +- Add support to load default aws config file to get credentials. {pull}12727[12727] {issue}12708[12708] +- Add statistic option into cloudwatch metricset. {issue}12370[12370] {pull}12840[12840] +- Add support for kubernetes cronjobs {pull}13001[13001] +- Add cgroup memory stats to docker/memory metricset {pull}12916[12916] +- Add AWS elb metricset. {pull}12952[12952] {issue}11701[11701] +- Add AWS ebs metricset. {pull}13167[13167] {issue}11699[11699] +- Add `metricset.period` field with the configured fetching period. {pull}13242[13242] {issue}12616[12616] +- Add rate metrics for ec2 metricset. {pull}13203[13203] +- Add Performance metricset to Oracle module {pull}12547[12547] +- Use DefaultMetaGeneratorConfig in MetadataEnrichers to initialize configurations {pull}13414[13414] +- Add module for statsd. {pull}13109[13109] + +*Packetbeat* + +- Update DNS protocol plugin to produce events with ECS fields for DNS. {issue}13320[13320] {pull}13354[13354] + +*Functionbeat* + +- Add timeout option to reference configuration. {pull}13351[13351] +- Configurable tags for Lambda functions. {pull}13352[13352] +- Add input for Cloudwatch logs through Kinesis. {pull}13317[13317] +- Enable Logstash output. {pull}13345[13345] + +*Winlogbeat* + +- Add support for event ID 4634 and 4647 to the Security module. {pull}12906[12906] +- Add `network.community_id` to Sysmon network events (event ID 3). {pull}13034[13034] +- Add `event.module` to Winlogbeat modules. 
{pull}13047[13047] +- Add `event.category: process` and `event.type: process_start/process_end` to Sysmon process events (event ID 1 and 5). {pull}13047[13047] +- Add support for event ID 4672 to the Security module. {pull}12975[12975] +- Add support for event ID 22 (DNS query) to the Sysmon module. {pull}12960[12960] +- Add certain winlog.event_data.* fields to the index template. {issue}13700[13700] {pull}13704[13704] + +[[release-notes-7.3.2]] +=== Beats version 7.3.2 +https://github.com/elastic/beats/compare/v7.3.1...v7.3.2[View commits] + +==== Bugfixes + +*Filebeat* + +- Fix filebeat autodiscover fileset hint for container input. {pull}13296[13296] +- Fix timezone parsing of system module ingest pipelines. {pull}13308[13308] +- Fix timezone parsing of elasticsearch module ingest pipelines. {pull}13367[13367] +- Fix timezone parsing of nginx module ingest pipelines. {pull}13369[13369] + +*Metricbeat* + +- Fix module-level fields in Kubernetes metricsets. {pull}13433[13433] {pull}13544[13544] +- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426] + +[[release-notes-7.3.1]] +=== Beats version 7.3.1 +https://github.com/elastic/beats/compare/v7.3.0...v7.3.1[View commits] + +==== Bugfixes + +*Affecting all Beats* + +- Fix install-service.ps1's ability to set Windows service's delay start configuration. {pull}13173[13173] +- Fix `decode_base64_field` processor. {pull}13092[13092], {pull}13144[13144] + +*Filebeat* + +- Fix multiline pattern in Postgres which was too permissive. {issue}12078[12078] {pull}13069[13069] + +*Metricbeat* + +- Fix `logstash/node_stats` metricset to also collect `logstash_stats.events.duration_in_millis` field when `xpack.enabled: true` is set. {pull}13082[13082] +- Fix `logstash/node` metricset to also collect `logstash_state.pipeline.representation.{type,version,hash}` fields when `xpack.enabled: true` is set. {pull}13133[13133] + +==== Added + +*Metricbeat* + +- Make the `beat` module defensive about determining ES cluster UUID when `xpack.enabled: true` is set. {pull}13020[13020] + +[[release-notes-7.3.0]] +=== Beats version 7.3.0 +https://github.com/elastic/beats/compare/v7.2.0...v7.3.0[View commits] + +==== Breaking changes + +*Affecting all Beats* + +- Update to ECS 1.0.1. {pull}12284[12284] {pull}12317[12317] +- Default of output.kafka.metadata.full is now set to false. This reduces the amount of metadata to be queried from a kafka cluster. {pull}12738[12738] + +*Filebeat* + +- `convert_timezone` option is removed and locale is always added to the event so timezone is used when parsing the timestamp; this behaviour can be overridden with processors. {pull}12410[12410] + +==== Bugfixes + +*Affecting all Beats* + +- Fix typo in TLS renegotiation configuration and setting the option correctly {issue}10871[10871], {pull}12354[12354] +- Add configurable bulk_flush_frequency in kafka output. {pull}12254[12254] +- Fixed setting bulk max size in kafka output.
{pull}12254[12254] +- Add additional nil pointer checks to Docker client code to deal with vSphere Integrated Containers {pull}12628[12628] +- Fix seccomp policy preventing some features to function properly on 32bit Linux systems. {issue}12990[12990] {pull}13008[13008] + +*Auditbeat* + +- Package dataset: Close librpm handle. {pull}12215[12215] +- Package dataset: Improve dpkg parsing. {pull}12325[12325] +- Host dataset: Fix reboot detection logic. {pull}12591[12591] +- Add syscalls used by librpm for the system/package dataset to the default Auditbeat seccomp policy. {issue}12578[12578] {pull}12617[12617] +- Host dataset: Export Host fields to gob encoder. {pull}12940[12940] + +*Filebeat* + +- Parse timezone in PostgreSQL logs as part of the timestamp {pull}12338[12338] +- When TLS is configured for the TCP input and a `certificate_authorities` is configured we now default to `required` for the `client_authentication`. {pull}12584[12584] +- Syslog input will now omit the `process` object from events if it is empty. {pull}12700[12700] +- Apply `max_message_size` to incoming message buffer. {pull}11966[11966] + +*Heartbeat* + + +*Journalbeat* + +- Iterate over journal correctly, so no duplicate entries are sent. {pull}12716[12716] +- Preserve host name when reading from remote journal. {pull}12714[12714] + +*Metricbeat* + +- Refactored Windows perfmon metricset: replaced method to retrieve counter paths with PdhExpandWildCardPathW, separated code by responsibility, removed unused functions {pull}12212[12212] +- Validate that kibana/status metricset cannot be used when xpack is enabled. {pull}12264[12264] +- In the kibana/stats metricset, only log error (don't also index it) if xpack is enabled. {pull}12265[12265] +- Fix an issue listing all processes when run under Windows as a non-privileged user. {issue}12301[12301] {pull}12475[12475] +- When TLS is configured for the http metricset and a `certificate_authorities` is configured we now default to `required` for the `client_authentication`. {pull}12584[12584] +- Reuse connections in PostgreSQL metricsets. {issue}12504[12504] {pull}12603[12603] +- PdhExpandWildCardPathW will not expand counter paths in 32 bit windows systems, workaround will use a different function.{issue}12590[12590]{pull}12622[12622] +- Print errors that were being omitted in vSphere metricsets {pull}12816[12816] +- In the elasticsearch/node_stats metricset, if xpack is enabled, make parsing of ES node load average optional as ES on Windows doesn't report load average. {pull}12866[12866] +- Fix incoherent behaviour in redis key metricset when keyspace is specified both in host URL and key pattern {pull}12913[12913] +- Fix connections leak in redis module {pull}12914[12914] {pull}12950[12950] + +*Packetbeat* + + +==== Added + +*Affecting all Beats* + +- Add `proxy_disable` output flag to explicitly ignore proxy environment variables. {issue}11713[11713] {pull}12243[12243] +- Processor `add_cloud_metadata` adds fields `cloud.account.id` and `cloud.image.id` for AWS EC2. {pull}12307[12307] +- Add `decode_base64_field` processor for decoding base64 field. {pull}11914[11914] +- Add aws overview dashboard. {issue}11007[11007] {pull}12175[12175] +- Add `decompress_gzip_field` processor. {pull}12733[12733] +- Add `timestamp` processor for parsing time fields. {pull}12699[12699] +- Add Oracle Tablespaces Dashboard {pull}12736[12736] +- Add `proxy_disable` output flag to explicitly ignore proxy environment variables. 
{issue}11713[11713] {pull}12243[12243] + +*Auditbeat* + + +*Filebeat* + +- Add timeouts on communication with docker daemon. {pull}12310[12310] +- Add specific date processor to convert timezones so same pipeline can be used when convert_timezone is enabled or disabled. {pull}12253[12253] +- Add MSSQL module {pull}12079[12079] +- Add ISO8601 date parsing support for system module. {pull}12568[12568] {pull}12578[12578] +- Update Kubernetes deployment manifest to use `container` input. {pull}12632[12632] +- Add `google-pubsub` input type for consuming messages from a Google Cloud Pub/Sub topic subscription. {pull}12746[12746] +- Add module for ingesting Cisco IOS logs over syslog. {pull}12748[12748] +- Add module for ingesting Google Cloud VPC flow logs. {pull}12747[12747] +- Report host metadata for Filebeat logs in Kubernetes. {pull}12790[12790] + +*Metricbeat* + +- Add overview dashboard to Consul module {pull}10665[10665] +- New fields were added in the mysql/status metricset. {pull}12227[12227] +- Add Kubernetes metricset `proxy`. {pull}12312[12312] +- Always report Pod UID in the `pod` metricset. {pull}12345[12345] +- Add Vsphere Virtual Machine operating system to `os` field in Vsphere virtualmachine module. {pull}12391[12391] +- Add CockroachDB module. {pull}12467[12467] +- Add support for metricbeat modules based on existing modules (a.k.a. light modules) {issue}12270[12270] {pull}12465[12465] +- Add a system/entropy metricset {pull}12450[12450] +- Add kubernetes metricset `controllermanager` {pull}12409[12409] +- Allow redis URL format in redis hosts config. {pull}12408[12408] +- Add tags into ec2 metricset. {issue}12263[12263] {pull}12372[12372] +- Add kubernetes metricset `scheduler` {pull}12521[12521] +- Add Kubernetes scheduler dashboard to Kubernetes module {pull}12749[12749] +- Add `beat` module. {pull}12181[12181] {pull}12615[12615] +- Collect tags for cloudwatch metricset in aws module. {issue}12263[12263] {pull}12480[12480] +- Add AWS RDS metricset. {pull}11620[11620] {issue}10054[10054] +- Add Oracle Module {pull}11890[11890] +- Add Kubernetes proxy dashboard to Kubernetes module {pull}12734[12734] +- Add Kubernetes controller manager dashboard to Kubernetes module {pull}12744[12744] + +*Functionbeat* + +- Export automation templates used to create functions. {pull}11923[11923] +- Configurable Amazon endpoint. {pull}12369[12369] + +==== Deprecated + +*Filebeat* + +- `postgresql.log.timestamp` field is deprecated in favour of `@timestamp`. {pull}12338[12338] + +[[release-notes-7.2.1]] +=== Beats version 7.2.1 +https://github.com/elastic/beats/compare/v7.2.0...v7.2.1[View commits] + +==== Bugfixes + +*Affecting all Beats* + +- Fix Central Management enroll under Windows {issue}12797[12797] {pull}12799[12799] +- Fixed a crash under Windows when fetching processes information. {pull}12833[12833] + +*Filebeat* + +- Add support for client addresses with port in Apache error logs {pull}12695[12695] +- Load correct pipelines when system module is configured in modules.d. {pull}12340[12340] + +*Metricbeat* + +- Fix wrong uptime reporting by system/uptime metricset under Windows. {pull}12915[12915] + +*Packetbeat* + +- Limit memory usage of Redis replication sessions. {issue}12657[12657] + +[[release-notes-7.2.0]] +=== Beats version 7.2.0 +https://github.com/elastic/beats/compare/v7.1.1...v7.2.0[View commits] + +==== Breaking changes + +*Affecting all Beats* + +- Update to Golang 1.12.4.
{pull}11782[11782] + +*Auditbeat* + +- Auditd module: Normalized value of `event.category` field from `user-login` to `authentication`. {pull}11432[11432] +- Auditd module: Unset `auditd.session` and `user.audit.id` fields are removed from audit events. {issue}11431[11431] {pull}11815[11815] +- Socket dataset: Exclude localhost by default {pull}11993[11993] + +*Filebeat* + +- Add read_buffer configuration option. {pull}11739[11739] + +*Heartbeat* + +- Removed the `add_host_metadata` and `add_cloud_metadata` processors from the default config. These don't fit well with ECS for Heartbeat and were rarely used. + +*Journalbeat* + +*Metricbeat* + +- Add new option `OpMultiplyBuckets` to scale histogram buckets to avoid decimal points in final events {pull}10994[10994] +- system/raid metricset now uses /sys/block instead of /proc/mdstat for data. {pull}11613[11613] + +*Packetbeat* + +- Add support for mongodb opcode 2013 (OP_MSG). {issue}6191[6191] {pull}8594[8594] +- NFSv4: Always use opname `ILLEGAL` when failed to match request to a valid nfs operation. {pull}11503[11503] + +*Winlogbeat* + +*Functionbeat* + +==== Bugfixes + +*Affecting all Beats* + +- Ensure all beat commands respect configured settings. {pull}10721[10721] +- Add missing fields and test cases for libbeat add_kubernetes_metadata processor. {issue}11133[11133], {pull}11134[11134] +- decode_json_field: process objects and arrays only {pull}11312[11312] +- decode_json_field: do not process arrays when flag not set. {pull}11318[11318] +- Report faulting file when config reload fails. {pull}11304[11304] +- Fix a typo in libbeat/outputs/transport/client.go by updating `c.conn.LocalAddr()` to `c.conn.RemoteAddr()`. {pull}11242[11242] +- Management configuration backup file will now have a timestamps in their name. {pull}11034[11034] +- [CM] Parse enrollment_token response correctly {pull}11648[11648] +- Not hiding error in case of http failure using elastic fetcher {pull}11604[11604] +- Escape BOM on JsonReader before trying to decode line {pull}11661[11661] +- Fix matching of string arrays in contains condition. {pull}11691[11691] +- Replace wmi queries with win32 api calls as they were consuming CPU resources {issue}3249[3249] and {issue}11840[11840] +- Fix queue.spool.write.flush.events config type. {pull}12080[12080] +- Fixed a memory leak when using the add_process_metadata processor under Windows. {pull}12100[12100] +- Fix of docker json parser for missing "log" jsonkey in docker container's log {issue}11464[11464] +- Fixed Beat ID being reported by GET / API. {pull}12180[12180] +- Add host.os.codename to fields.yml. {pull}12261[12261] +- Fix `@timestamp` being duplicated in events if `@timestamp` is set in a + processor (or by any code utilizing `PutValue()` on a `beat.Event`). +- Fix leak in script processor when using Javascript functions in a processor chain. {pull}12600[12600] + +*Auditbeat* + +- Process dataset: Fixed a memory leak under Windows. {pull}12100[12100] +- Login dataset: Fix re-read of utmp files. {pull}12028[12028] +- Package dataset: Fixed a crash inside librpm after Auditbeat has been running for a while. {issue}12147[12147] {pull}12168[12168] +- Fix formatting of config files on macOS and Windows. {pull}12148[12148] +- Fix direction of incoming IPv6 sockets. {pull}12248[12248] +- Package dataset: Auto-detect package directories. {pull}12289[12289] +- System module: Start system module without host ID. {pull}12373[12373] + +*Filebeat* + +- Add support for Cisco syslog format used by their switch. 
{pull}10760[10760] +- Cover empty request data, url and version in Apache2 module{pull}10730[10730] +- Fix registry entries not being cleaned due to race conditions. {pull}10747[10747] +- Improve detection of file deletion on Windows. {pull}10747[10747] +- Add missing Kubernetes metadata fields to Filebeat CoreDNS module, and fix a documentation error. {pull}11591[11591] +- Reduce memory usage if long lines are truncated to fit `max_bytes` limit. The line buffer is copied into a smaller buffer now. This allows the runtime to release unused memory earlier. {pull}11524[11524] +- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063] +- Fix goroutine leak caused on initialization failures of log input. {pull}12125[12125] +- Fix goroutine leak on non-explicit finalization of log input. {pull}12164[12164] +- Require client_auth by default when ssl is enabled for tcp input {pull}12333[12333] +- Fix timezone offset parsing in system/syslog. {pull}12529[12529] + +*Heartbeat* + +- Fix NPEs / resource leaks when executing config checks. {pull}11165[11165] +- Fix duplicated IPs on `mode: all` monitors. {pull}12458[12458] + +*Journalbeat* + +- Use backoff when no new events are found. {pull}11861[11861] + +*Metricbeat* + +- Change diskio metrics retrieval method (only for Windows) from wmi query to DeviceIOControl function using the IOCTL_DISK_PERFORMANCE control code {pull}11635[11635] +- Call GetMetricData api per region instead of per instance. {issue}11820[11820] {pull}11882[11882] +- Update documentation with cloudwatch:ListMetrics permission. {pull}11987[11987] +- Check permissions in system socket metricset based on capabilities. {pull}12039[12039] +- Get process information from sockets owned by current user when system socket metricset is run without privileges. {pull}12039[12039] +- Avoid generating hints-based configuration with empty hosts when no exposed port is suitable for the hosts hint. {issue}8264[8264] {pull}12086[12086] +- Fixed a socket leak in the postgresql module under Windows when SSL is disabled on the server. {pull}11393[11393] +- Change some field type from scaled_float to long in aws module. {pull}11982[11982] +- Fixed RabbitMQ `queue` metricset gathering when `consumer_utilisation` is set empty at the metrics source {pull}12089[12089] +- Fix direction of incoming IPv6 sockets. {pull}12248[12248] +- Ignore prometheus metrics when their values are NaN or Inf. {pull}12084[12084] {issue}10849[10849] +- Require client_auth by default when ssl is enabled for module http metricset server{pull}12333[12333] +- The `elasticsearch/index_summary` metricset gracefully handles an empty Elasticsearch cluster when `xpack.enabled: true` is set. {pull}12489[12489] {issue}12487[12487] + +*Packetbeat* + +- Prevent duplicate packet loss error messages in HTTP events. {pull}10709[10709] +- Fixed a memory leak when using process monitoring under Windows. {pull}12100[12100] +- Improved debug logging efficiency in PGQSL module. {issue}12150[12150] + +*Winlogbeat* + +*Functionbeat* + +- Fix function name reference for Kinesis streams in CloudFormation templates {pull}11646[11646] + +==== Added + +*Affecting all Beats* + +- Add an option to append to existing logs rather than always rotate on start. {pull}11953[11953] +- Add `network` condition to processors for matching IP addresses against CIDRs. {pull}10743[10743] +- Add if/then/else support to processors. {pull}10744[10744] +- Add `community_id` processor for computing network flow hashes. 
{pull}10745[10745] +- Add output test to kafka output {pull}10834[10834] +- Gracefully shut down on SIGHUP {pull}10704[10704] +- New processor: `copy_fields`. {pull}11303[11303] +- Add `error.message` to events when `fail_on_error` is set in `rename` and `copy_fields` processors. {pull}11303[11303] +- New processor: `truncate_fields`. {pull}11297[11297] +- Allow a beat to ship monitoring data directly to an Elasticsearch monitoring clsuter. {pull}9260[9260] +- Updated go-seccomp-bpf library to v1.1.0 which updates syscall lists for Linux v5.0. {pull}NNNN[NNNN] +- Add `add_observer_metadata` processor. {pull}11394[11394] +- Add `decode_csv_fields` processor. {pull}11753[11753] +- Add `convert` processor for converting data types of fields. {issue}8124[8124] {pull}11686[11686] +- New `extract_array` processor. {pull}11761[11761] +- Add number of goroutines to reported metrics. {pull}12135[12135] + +*Auditbeat* + +- Auditd module: Add `event.outcome` and `event.type` for ECS. {pull}11432[11432] +- Process: Add file hash of process executable. {pull}11722[11722] +- Socket: Add network.transport and network.community_id. {pull}12231[12231] +- Host: Fill top-level host fields. {pull}12259[12259] + +*Filebeat* + +- Add more info to message logged when a duplicated symlink file is found {pull}10845[10845] +- Add option to configure docker input with paths {pull}10687[10687] +- Add Netflow module to enrich flow events with geoip data. {pull}10877[10877] +- Set `event.category: network_traffic` for Suricata. {pull}10882[10882] +- Allow custom default settings with autodiscover (for example, use of CRI paths for logs). {pull}12193[12193] +- Allow to disable hints based autodiscover default behavior (fetching all logs). {pull}12193[12193] +- Change Suricata module pipeline to handle `destination.domain` being set if a reverse DNS processor is used. {issue}10510[10510] +- Add the `network.community_id` flow identifier to field to the IPTables, Suricata, and Zeek modules. {pull}11005[11005] +- New Filebeat coredns module to ingest coredns logs. It supports both native coredns deployment and coredns deployment in kubernetes. {pull}11200[11200] +- New module for Cisco ASA logs. {issue}9200[9200] {pull}11171[11171] +- Added support for Cisco ASA fields to the netflow input. {pull}11201[11201] +- Configurable line terminator. {pull}11015[11015] +- Add Filebeat envoyproxy module. {pull}11700[11700] +- Add apache2(httpd) log path (`/var/log/httpd`) to make apache2 module work out of the box on Redhat-family OSes. {issue}11887[11887] {pull}11888[11888] +- Add support to new MongoDB additional diagnostic information {pull}11952[11952] +- New module `panw` for Palo Alto Networks PAN-OS logs. {pull}11999[11999] +- Add RabbitMQ module. {pull}12032[12032] +- Add new `container` input. {pull}12162[12162] + +*Heartbeat* + +- Enable `add_observer_metadata` processor in default config. {pull}11394[11394] + +*Journalbeat* + +*Metricbeat* + +- Add AWS SQS metricset. {pull}10684[10684] {issue}10053[10053] +- Add AWS s3_request metricset. {pull}10949[10949] {issue}10055[10055] +- Add s3_daily_storage metricset. {pull}10940[10940] {issue}10055[10055] +- Add `coredns` metricbeat module. {pull}10585[10585] +- Add SSL support for Metricbeat HTTP server. {pull}11482[11482] {issue}11457[11457] +- The `elasticsearch.index` metricset (with `xpack.enabled: true`) now collects `refresh.external_total_time_in_millis` fields from Elasticsearch. 
{pull}11616[11616] +- Allow module configurations to have variants {pull}9118[9118] +- Add `timeseries.instance` field calculation. {pull}10293[10293] +- Added new disk states and raid level to the system/raid metricset. {pull}11613[11613] +- Added `path_name` and `start_name` to service metricset on windows module {issue}8364[8364] {pull}11877[11877] +- Add check on object name in the counter path if the instance name is missing {issue}6528[6528] {pull}11878[11878] +- Add AWS cloudwatch metricset. {pull}11798[11798] {issue}11734[11734] +- Add `regions` in aws module config to specify target regions for querying cloudwatch metrics. {issue}11932[11932] {pull}11956[11956] +- Keep `etcd` follower members from reporting `leader` metricset events {pull}12004[12004] +- Add validation for elasticsearch and kibana modules' metricsets when xpack.enabled is set to true. {pull}12386[12386] + +*Packetbeat* + +*Functionbeat* + +- New options to configure roles and VPC. {pull}11779[11779] + +*Winlogbeat* + +- Add support for reading from .evtx files. {issue}4450[4450] + +==== Deprecated + +*Affecting all Beats* + +*Filebeat* + +- `docker` input is deprecated in favour of `container`. {pull}12162[12162] + +*Heartbeat* + +*Journalbeat* + +*Metricbeat* + +*Packetbeat* + +*Winlogbeat* + +*Functionbeat* + +==== Known Issue + +*Journalbeat* + [[release-notes-7.1.1]] === Beats version 7.1.1 https://github.com/elastic/beats/compare/v7.1.0...v7.1.1[View commits] @@ -818,12 +1702,105 @@ https://github.com/elastic/beats/compare/v6.5.0...v7.0.0-alpha1[View commits] - Added support to calculate certificates' fingerprints (MD5, SHA-1, SHA-256). {issue}8180[8180] - Support new TLS version negotiation introduced in TLS 1.3. {issue}8647[8647]. +[[release-notes-6.8.3]] +=== Beats version 6.8.3 +https://github.com/elastic/beats/compare/v6.8.2...v6.8.3[View commits] + +==== Bugfixes + +*Journalbeat* + +- Iterate over journal correctly, so no duplicate entries are sent. {pull}12716[12716] + +*Metricbeat* + +- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426] + +==== Added + +*Metricbeat* + +- Remove _nodes field from under cluster_stats as it's not being used. {pull}13010[13010] +- Collect license expiry date fields as well. {pull}11652[11652] + +[[release-notes-6.8.2]] +=== Beats version 6.8.2 +https://github.com/elastic/beats/compare/v6.8.1...v6.8.2[View commits] + +==== Bugfixes + +*Auditbeat* + +- Process dataset: Do not show non-root warning on Windows. {pull}12740[12740] + +*Filebeat* + +- Skipping unparsable log entries from docker json reader {pull}12268[12268] + +*Packetbeat* + +- Limit memory usage of Redis replication sessions. {issue}12657[12657] + +[[release-notes-6.8.1]] +=== Beats version 6.8.1 +https://github.com/elastic/beats/compare/v6.8.0...v6.8.1[View commits] + +==== Bugfixes + +*Affecting all Beats* + +- Fixed a memory leak when using the add_process_metadata processor under Windows. {pull}12100[12100] + +*Auditbeat* + +- Package dataset: Log error when Homebrew is not installed. {pull}11667[11667] +- Process dataset: Fixed a memory leak under Windows. {pull}12100[12100] +- Login dataset: Fix re-read of utmp files. {pull}12028[12028] +- Package dataset: Fixed a crash inside librpm after Auditbeat has been running for a while. {issue}12147[12147] {pull}12168[12168] +- Fix direction of incoming IPv6 sockets. {pull}12248[12248] +- Package dataset: Auto-detect package directories. {pull}12289[12289] +- System module: Start system module without host ID.
{pull}12373[12373] +- Host dataset: Fix reboot detection logic. {pull}12591[12591] + +*Filebeat* + +- Fix goroutine leak happening when harvesters are dynamically stopped. {pull}11263[11263] +- Fix initialization of the TCP input logger. {pull}11605[11605] +- Fix goroutine leak caused on initialization failures of log input. {pull}12125[12125] +- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063] +- Fix goroutine leak on non-explicit finalization of log input. {pull}12164[12164] +- When TLS is configured for the TCP input and a `certificate_authorities` is configured we now default to `required` for the `client_authentication`. {pull}12584[12584] + +*Metricbeat* + +- Avoid generating hints-based configuration with empty hosts when no exposed port is suitable for the hosts hint. {issue}8264[8264] {pull}12086[12086] +- Fix direction of incoming IPv6 sockets. {pull}12248[12248] +- Validate that kibana/status metricset cannot be used when xpack is enabled. {pull}12264[12264] +- In the kibana/stats metricset, only log error (don't also index it) if xpack is enabled. {pull}12353[12353] +- The `elasticsearch/index_summary` metricset gracefully handles an empty Elasticsearch cluster when `xpack.enabled: true` is set. {pull}12489[12489] {issue}12487[12487] +- When TLS is configured for the http metricset and a `certificate_authorities` is configured we now default to `required` for the `client_authentication`. {pull}12584[12584] + +*Packetbeat* + +- Fixed a memory leak when using process monitoring under Windows. {pull}12100[12100] +- Improved debug logging efficiency in PGQSL module. {issue}12150[12150] + +==== Added + +*Auditbeat* + +- Add support to the system package dataset for the SUSE OS family. {pull}11634[11634] + +*Metricbeat* + +- Add validation for elasticsearch and kibana modules' metricsets when xpack.enabled is set to true. {pull}12386[12386] + [[release-notes-6.8.0]] === Beats version 6.8.0 * Updates to support changes to licensing of security features. + -Some Elastic Stack security features, such as encrypted communications, file and native authentication, and +Some Elastic Stack security features, such as encrypted communications, file and native authentication, and role-based access control, are now available in more subscription levels. For details, see https://www.elastic.co/subscriptions. [[release-notes-6.7.2]] diff --git a/CHANGELOG.next.asciidoc b/CHANGELOG.next.asciidoc index dbb12aa738b3..6f43061131f3 100644 --- a/CHANGELOG.next.asciidoc +++ b/CHANGELOG.next.asciidoc @@ -10,628 +10,99 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d *Affecting all Beats* -- Update to Golang 1.12.1. {pull}11330[11330] -- Update to Golang 1.12.4. {pull}11782[11782] -- Update to ECS 1.0.1. {pull}12284[12284] {pull}12317[12317] -- Default of output.kafka.metadata.full is set to false by now. This reduced the amount of metadata to be queried from a kafka cluster. {pull}12738[12738] -- Fixed a crash under Windows when fetching processes information. {pull}12833[12833] -- Update to Golang 1.12.7. {pull}12931[12931] -- Remove `in_cluster` configuration parameter for Kuberentes, now in-cluster configuration is used only if no other kubeconfig is specified {pull}13051[13051] -- Disable Alibaba Cloud and Tencent Cloud metadata providers by default. {pull}13812[12812] -- Libbeat HTTP's Server can listen to a unix socket using the `unix:///tmp/hello.sock` syntax. 
{pull}13655[13655] -- Libbeat HTTP's Server can listen to a Windows named pipe using the `npipe:///hello` syntax. {pull}13655[13655] -- By default, all Beats-created files and folders will have a umask of 0027 (on POSIX systems). {pull}14119[14119] -- Adding new `Enterprise` license type to the licenser. {issue}14246[14246] -- Change wording when we fail to load a CA file to the cert pool. {issue}14309[14309] -- Allow Metricbeat's beat module to read monitoring information over a named pipe or unix domain socket. {pull}14558[14558] -- Remove version information from default ILM policy for improved upgrade experience on custom policies. {pull}14745[14745] -- Running `setup` cmd respects `setup.ilm.overwrite` setting for improved support of custom policies. {pull}14741[14741] -- Libbeat: Do not overwrite agent.*, ecs.version, and host.name. {pull}14407[14407] -- Libbeat: Cleanup the x-pack licenser code to use the new license endpoint and the new format. {pull}15091[15091] -- Users can now specify `monitoring.cloud.*` to override `monitoring.elasticsearch.*` settings. {issue}14399[14399] {pull}15254[15254] -- Refactor metadata generator to support adding metadata across resources {pull}14875[14875] -- Update to ECS 1.4.0. {pull}14844[14844] *Auditbeat* -- Auditd module: Normalized value of `event.category` field from `user-login` to `authentication`. {pull}11432[11432] -- Auditd module: Unset `auditd.session` and `user.audit.id` fields are removed from audit events. {issue}11431[11431] {pull}11815[11815] -- Socket dataset: Exclude localhost by default {pull}11993[11993] -- Socket dataset: New implementation using Kprobes for finer-grained monitoring and UDP support. {pull}13058[13058] *Filebeat* -- Add Filebeat Azure Dashboards {pull}14127[14127] -- Add read_buffer configuration option. {pull}11739[11739] -- `convert_timezone` option is removed and locale is always added to the event so timezone is used when parsing the timestamp, this behaviour can be overriden with processors. {pull}12410[12410] -- Fix a race condition in the TCP input when close the client socket. {pull}13038[13038] -- cisco/asa fileset: Renamed log.original to event.original and cisco.asa.list_id to cisco.asa.rule_name. {pull}13286[13286] -- cisco/asa fileset: Fix parsing of 302021 message code. {pull}13476[13476] -- google pubsub & httpjson inputs: HTTP User agent is now `Elastic-Heartbeat/Version` instead of `Elastic Heartbeat/Version` to stay RFC compliant. {pull}14748[14748] -- CEF extensions are now mapped to the data types defined in the CEF guide. {pull}14342[14342] -- Remove --machine-learning from setup subcommand. {pull}14705[14705] *Heartbeat* -- Removed the `add_host_metadata` and `add_cloud_metadata` processors from the default config. These don't fit well with ECS for Heartbeat and were rarely used. -- Fixed/altered redirect behavior. `max_redirects` now defaults to 0 (no redirects). Following redirects now works across hosts, but some timing fields will not be reported. {pull}14125[14125] -- Removed `host.name` field that should never have been included. Heartbeat uses `observer.*` fields instead. {pull}14140[14140] -- Changed default user-agent to be `Elastic-Heartbeat/VERSION (PLATFORM_INFO)` as the current default `Go-http-client/1.1` is often blacklisted. {pull}14291[14291] -- JSON/Regex checks against HTTP bodies will only consider the first 100MiB of the HTTP body to prevent excessive memory usage. 
{pull}14223[14223] -- Heartbeat now starts monitors scheduled with the '@every X' syntax instantaneously on startup, rather than waiting for the given interval to pass before running them. {pull}14890[14890] *Journalbeat* -- Remove broken dashboard. {pull}15288[15288] *Metricbeat* -- Add new dashboards for Azure vms, vm guest metrics, vm scale sets {pull}14000[14000] -- Add new Dashboard for PostgreSQL database stats {pull}13187[13187] -- Add new dashboard for CouchDB database {pull}13198[13198] -- Add new dashboard for Ceph cluster stats {pull}13216[13216] -- Add new dashboard for Aerospike database stats {pull}13217[13217] -- Add new dashboard for Couchbase cluster stats {pull}13212[13212] -- Add new dashboard for Prometheus server stats {pull}13126[13126] -- Add new dashboard for VSphere host cluster and virtual machine {pull}14135[14135] -- Add new option `OpMultiplyBuckets` to scale histogram buckets to avoid decimal points in final events {pull}10994[10994] -- system/raid metricset now uses /sys/block instead of /proc/mdstat for data. {pull}11613[11613] -- kubernetes.container.cpu.limit.cores and kubernetes.container.cpu.requests.cores are now floats. {issue}11975[11975] -- Add statistic option into cloudwatch metricset. If there is no statistic method specified, default is to collect Average, Sum, Maximum, Minimum and SampleCount. {issue}12370[12370] {pull}12840[12840] -- Update cloudwatch metricset mapping for both metrics and dimensions. {pull}15245[15245] -- Add sql module that fetches metrics from a SQL database {pull}13257[13257] *Packetbeat* -- Add dns.question.subdomain and dns.question.top_level_domain fields. {pull}14578[14578] -- Add support for mongodb opcode 2013 (OP_MSG). {issue}6191[6191] {pull}8594[8594] -- NFSv4: Always use opname `ILLEGAL` when failing to match a request to a valid nfs operation. {pull}11503[11503] -- Added redact_headers configuration option, to allow HTTP request headers to be redacted whilst keeping the header field included in the beat. {pull}15353[15353] -- TLS: Fields have been changed to adapt to ECS. {pull}15497[15497] -- TLS: The behavior of send_certificates and include_raw_certificates options has changed. {pull}15497[15497] *Winlogbeat* *Functionbeat* -- Separate management and functions in Functionbeat. {pull}12939[12939] ==== Bugfixes *Affecting all Beats* -- Make the behavior of clientWorker and netClientWorker consistent when an error is returned from the publisher pipeline. -- Fix a bug where the publisher pipeline exits if the output returns an error, regardless of whether the pipeline is closed. -- Fix typo in TLS renegotiation configuration and setting the option correctly {issue}10871[10871], {pull}12354[12354] -- Ensure all beat commands respect configured settings. {pull}10721[10721] -- Add missing fields and test cases for libbeat add_kubernetes_metadata processor. {issue}11133[11133], {pull}11134[11134] -- decode_json_field: process objects and arrays only {pull}11312[11312] -- decode_json_field: do not process arrays when flag not set. {pull}11318[11318] -- Report faulting file when config reload fails. {pull}11304[11304] -- Fix a typo in libbeat/outputs/transport/client.go by updating `c.conn.LocalAddr()` to `c.conn.RemoteAddr()`. {pull}11242[11242] -- Management configuration backup file will now have a timestamp in its name.
{pull}11034[11034] -- [CM] Parse enrollment_token response correctly {pull}11648[11648] -- Not hiding error in case of http failure using elastic fetcher {pull}11604[11604] -- Escape BOM on JsonReader before trying to decode line {pull}11661[11661] -- Fix matching of string arrays in contains condition. {pull}11691[11691] -- Replace wmi queries with win32 api calls as they were consuming CPU resources {issue}3249[3249] and {issue}11840[11840] -- Fix a race condition with the Kafka pipeline client, it is possible that `Close()` get called before `Connect()` . {issue}11945[11945] -- Fix queue.spool.write.flush.events config type. {pull}12080[12080] -- Fixed a memory leak when using the add_process_metadata processor under Windows. {pull}12100[12100] -- Fix of docker json parser for missing "log" jsonkey in docker container's log {issue}11464[11464] -- Fixed Beat ID being reported by GET / API. {pull}12180[12180] -- Fixed setting bulk max size in kafka output. {pull}12254[12254] -- Add host.os.codename to fields.yml. {pull}12261[12261] -- Fix `@timestamp` being duplicated in events if `@timestamp` is set in a - processor (or by any code utilizing `PutValue()` on a `beat.Event`). -- Fix leak in script processor when using Javascript functions in a processor chain. {pull}12600[12600] -- Add additional nil pointer checks to Docker client code to deal with vSphere Integrated Containers {pull}12628[12628] -- Fixed `json.add_error_key` property setting for delivering error messages from beat events {pull}11298[11298] -- Fix Central Management enroll under Windows {issue}12797[12797] {pull}12799[12799] -- ILM: Use GET instead of HEAD when checking for alias to expose detailed error message. {pull}12886[12886] -- Fix seccomp policy preventing some features to function properly on 32bit Linux systems. {issue}12990[12990] {pull}13008[13008] -- Fix unexpected stops on docker autodiscover when a container is restarted before `cleanup_timeout`. {issue}12962[12962] {pull}13127[13127] -- Fix install-service.ps1's ability to set Windows service's delay start configuration. {pull}13173[13173] -- Fix some incorrect types and formats in field.yml files. {pull}13188[13188] -- Load DLLs only from Windows system directory. {pull}13234[13234] {pull}13384[13384] -- Fix mapping for kubernetes.labels and kubernetes.annotations in add_kubernetes_metadata. {issue}12638[12638] {pull}13226[13226] -- Fix case insensitive regular expressions not working correctly. {pull}13250[13250] -- Disable `add_kubernetes_metadata` if no matchers found. {pull}13709[13709] -- Better wording for xpack beats when the _xpack endpoint is not reachable. {pull}13771[13771] -- Recover from panics in the javascript process and log details about the failure to aid in future debugging. {pull}13690[13690] -- Make the script processor concurrency-safe. {issue}13690[13690] {pull}13857[13857] -- Kubernetes watcher at `add_kubernetes_metadata` fails with StatefulSets {pull}13905[13905] -- Fix panics that could result from invalid TLS certificates. This can affect Beats that connect over - TLS or Beats that accept connections over TLS and validate client certificates. {pull}14146[14146] -- Support usage of custom builders without hints and mappers {pull}13839[13839] -- Fix memory leak in kubernetes autodiscover provider and add_kubernetes_metadata processor happening when pods are terminated without sending a delete event. 
{pull}14259[14259] -- Fix kubernetes `metaGenerator.ResourceMetadata` when parent reference controller is nil {issue}14320[14320] {pull}14329[14329] -- Allow users to configure only `cluster_uuid` setting under `monitoring` namespace. {pull}14338[14338] -- Fix `proxy_url` option in Elasticsearch output. {pull}14950[14950] -- Fix bug with potential concurrent reads and writes from event.Meta map by Kafka output. {issue}14542[14542] {pull}14568[14568] -- Fix spooling to disk blocking infinitely if the lock file can not be acquired. {pull}15338[15338] -- Fix `metricbeat test output` with an ipv6 ES host in the output.hosts. {pull}15368[15368] +TLS or Beats that accept connections over TLS and validate client certificates. {pull}14146[14146] +- Fix panic in the Logstash output when trying to send events to closed connection. {pull}15568[15568] *Auditbeat* -- Process dataset: Fixed a memory leak under Windows. {pull}12100[12100] -- Login dataset: Fix re-read of utmp files. {pull}12028[12028] -- Package dataset: Fixed a crash inside librpm after Auditbeat has been running for a while. {issue}12147[12147] {pull}12168[12168] -- Fix formatting of config files on macOS and Windows. {pull}12148[12148] -- Fix direction of incoming IPv6 sockets. {pull}12248[12248] -- Package dataset: Close librpm handle. {pull}12215[12215] -- Package dataset: Auto-detect package directories. {pull}12289[12289] -- Package dataset: Improve dpkg parsing. {pull}12325[12325] -- System module: Start system module without host ID. {pull}12373[12373] -- Host dataset: Fix reboot detection logic. {pull}12591[12591] -- Add syscalls used by librpm for the system/package dataset to the default Auditbeat seccomp policy. {issue}12578[12578] {pull}12617[12617] -- Process dataset: Do not show non-root warning on Windows. {pull}12740[12740] -- Host dataset: Export Host fields to gob encoder. {pull}12940[12940] -- Socket dataset: Fix start errors when IPv6 is disabled on the kernel. {issue}13953[13953] {pull}13966[13966] -- Removed GUID index pattern reference from Auditbeat dashboard definition. {pull}15314[15314] *Filebeat* -- Add support for Cisco syslog format used by their switch. {pull}10760[10760] -- Cover empty request data, url and version in Apache2 module{pull}10730[10730] -- Fix registry entries not being cleaned due to race conditions. {pull}10747[10747] -- Improve detection of file deletion on Windows. {pull}10747[10747] -- Add missing Kubernetes metadata fields to Filebeat CoreDNS module, and fix a documentation error. {pull}11591[11591] -- Reduce memory usage if long lines are truncated to fit `max_bytes` limit. The line buffer is copied into a smaller buffer now. This allows the runtime to release unused memory earlier. {pull}11524[11524] -- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063] -- Fix goroutine leak caused on initialization failures of log input. {pull}12125[12125] -- Fix goroutine leak on non-explicit finalization of log input. {pull}12164[12164] -- Skipping unparsable log entries from docker json reader {pull}12268[12268] -- Parse timezone in PostgreSQL logs as part of the timestamp {pull}12338[12338] -- Load correct pipelines when system module is configured in modules.d. {pull}12340[12340] -- Fix timezone offset parsing in system/syslog. {pull}12529[12529] -- When TLS is configured for the TCP input and a `certificate_authorities` is configured we now default to `required` for the `client_authentication`. {pull}12584[12584] -- Apply `max_message_size` to incoming message buffer. 
{pull}11966[11966] -- Syslog input will now omit the `process` object from events if it is empty. {pull}12700[12700] -- Fix multiline pattern in Postgres which was too permissive {issue}12078[12078] {pull}13069[13069] -- Allow path variables to be used in files loaded from modules.d. {issue}13184[13184] -- Fix filebeat autodiscover fileset hint for container input. {pull}13296[13296] -- Fix incorrect references to index patterns in AWS and CoreDNS dashboards. {pull}13303[13303] -- Fix timezone parsing of system module ingest pipelines. {pull}13308[13308] -- Fix timezone parsing of elasticsearch module ingest pipelines. {pull}13367[13367] -- Change iis url path grok pattern from URIPATH to NOTSPACE. {issue}12710[12710] {pull}13225[13225] {issue}7951[7951] {pull}13378[13378] {pull}14754[14754] -- Fix timezone parsing of nginx module ingest pipelines. {pull}13369[13369] -- Fix incorrect field references in envoyproxy dashboard {issue}13420[13420] {pull}13421[13421] -- Fixed early expiration of templates (Netflow v9 and IPFIX). {pull}13821[13821] -- Fixed bad handling of sequence numbers when multiple observation domains were exported by a single device (Netflow V9 and IPFIX). {pull}13821[13821] -- Fix timezone parsing of rabbitmq module ingest pipelines. {pull}13879[13879] -- Fix conditions and error checking of date processors in ingest pipelines that use `event.timezone` to parse dates. {pull}13883[13883] -- Fix timezone parsing of Cisco module ingest pipelines. {pull}13893[13893] -- Fix timezone parsing of logstash module ingest pipelines. {pull}13890[13890] -- cisco asa and ftd filesets: Fix parsing of message 106001. {issue}13891[13891] {pull}13903[13903] -- Fix timezone parsing of iptables, mssql and panw module ingest pipelines. {pull}13926[13926] -- Fix merging of fields specified in global scope with fields specified under an input's scope. {issue}3628[3628] {pull}13909[13909] -- Fix delay in enforcing close_renamed and close_removed options. {issue}13488[13488] {pull}13907[13907] -- Fix missing netflow fields in index template. {issue}13768[13768] {pull}13914[13914] -- Fix cisco module's asa and ftd filesets parsing of domain names where an IP address is expected. {issue}14034[14034] -- Fixed increased memory usage with large files when multiline pattern does not match. {issue}14068[14068] -- panw module: Use geo.name instead of geo.country_iso_code for free-form location. {issue}13272[13272] -- Fix azure fields names. {pull}14098[14098] -- Fix calculation of `network.bytes` and `network.packets` for bi-directional netflow events. {pull}14111[14111] -- Accept '-' as http.response.body.bytes in apache module. {pull}14137[14137] -- Fix timezone parsing of MySQL module ingest pipelines. {pull}14130[14130] -- Fix azure filesets test files. {issue}14185[14185] {pull}14235[14235] -- Improve error message in s3 input when handleSQSMessage failed. {pull}14113[14113] -- Close chan of Closer first before calling callback {pull}14231[14231] -- Fix race condition in S3 input plugin. {pull}14359[14359] -- Decode hex values in auditd module. {pull}14471[14471] -- Fix parse of remote addresses that are not IPs in nginx logs. {pull}14505[14505] -- Fix handling multiline log entries in nginx module. {issue}14349[14349] {pull}14499[14499] -- Fix parsing of Elasticsearch node name by `elasticsearch/slowlog` fileset. {pull}14547[14547] -- cisco/asa fileset: Fix parsing of 302021 message code. {pull}14519[14519] -- Fix filebeat azure dashboards, event category should be `Alert`. 
{pull}14668[14668] -- Update Logstash module's Grok patterns to support Logstash 7.4 logs. {pull}14743[14743] -- Fix a problem in Filebeat input httpjson where interval is not used as time.Duration. {issue}14752[14752] {pull}14753[14753] -- Fix SSL config in input.yml for Filebeat httpjson input in the MISP module. {pull}14767[14767] -- Check content-type when creating new reader in s3 input. {pull}15252[15252] {issue}15225[15225] -- Fix session reset detection and a crash in Netflow input. {pull}14904[14904] *Heartbeat* -- Fix NPEs / resource leaks when executing config checks. {pull}11165[11165] -- Fix duplicated IPs on `mode: all` monitors. {pull}12458[12458] -- Fix integer comparison on JSON responses. {pull}13348[13348] -- Fix storage of HTTP bodies to work when JSON/Regex body checks are enabled. {pull}14223[14223] -- Fix recording of SSL cert metadata for Expired/Unvalidated x509 certs. {pull}13687[13687] -- The heartbeat scheduler no longer drops scheduled items when under very high load causing missed deadlines. {pull}14890[14890] *Journalbeat* -- Use backoff when no new events are found. {pull}11861[11861] -- Iterate over journal correctly, so no duplicate entries are sent. {pull}12716[12716] -- Preserve host name when reading from remote journal. {pull}12714[12714] *Metricbeat* -- Change diskio metrics retrieval method (only for Windows) from wmi query to DeviceIOControl function using the IOCTL_DISK_PERFORMANCE control code {pull}11635[11635] -- Call GetMetricData api per region instead of per instance. {issue}11820[11820] {pull}11882[11882] -- Update documentation with cloudwatch:ListMetrics permission. {pull}11987[11987] -- Check permissions in system socket metricset based on capabilities. {pull}12039[12039] -- Get process information from sockets owned by current user when system socket metricset is run without privileges. {pull}12039[12039] -- Avoid generating hints-based configuration with empty hosts when no exposed port is suitable for the hosts hint. {issue}8264[8264] {pull}12086[12086] -- Fixed a socket leak in the postgresql module under Windows when SSL is disabled on the server. {pull}11393[11393] -- Change some field type from scaled_float to long in aws module. {pull}11982[11982] -- Fixed RabbitMQ `queue` metricset gathering when `consumer_utilisation` is set empty at the metrics source {pull}12089[12089] -- Fix direction of incoming IPv6 sockets. {pull}12248[12248] -- Refactored Windows perfmon metricset: replaced method to retrieve counter paths with PdhExpandWildCardPathW, separated code by responsibility, removed unused functions {pull}12212[12212] -- Validate that kibana/status metricset cannot be used when xpack is enabled. {pull}12264[12264] -- Ignore prometheus metrics when their values are NaN or Inf. {pull}12084[12084] {issue}10849[10849] -- In the kibana/stats metricset, only log error (don't also index it) if xpack is enabled. {pull}12265[12265] -- Fix an issue listing all processes when run under Windows as a non-privileged user. {issue}12301[12301] {pull}12475[12475] -- The `elasticsearch/index_summary` metricset gracefully handles an empty Elasticsearch cluster when `xpack.enabled: true` is set. {pull}12489[12489] {issue}12487[12487] -- When TLS is configured for the http metricset and a `certificate_authorities` is configured we now default to `required` for the `client_authentication`. {pull}12584[12584] -- Reuse connections in PostgreSQL metricsets. 
{issue}12504[12504] {pull}12603[12603] -- PdhExpandWildCardPathW will not expand counter paths in 32 bit windows systems, workaround will use a different function. {issue}12590[12590] {pull}12622[12622] -- In the elasticsearch/node_stats metricset, if xpack is enabled, make parsing of ES node load average optional as ES on Windows doesn't report load average. {pull}12866[12866] -- Ramdisk is not filtered out when collecting disk performance counters in diskio metricset {issue}12814[12814] {pull}12829[12829] -- Fix incoherent behaviour in redis key metricset when keyspace is specified both in host URL and key pattern {pull}12913[12913] -- Fix connections leak in redis module {pull}12914[12914] {pull}12950[12950] -- Fix wrong uptime reporting by system/uptime metricset under Windows. {pull}12915[12915] -- Print errors that were being omitted in vSphere metricsets. {pull}12816[12816] -- Fix redis key metricset dashboard references to index pattern. {pull}13303[13303] -- Check if fields in DBInstance is nil in rds metricset. {pull}13294[13294] {issue}13037[13037] -- Fix silent failures in kafka and prometheus module. {pull}13353[13353] {issue}13252[13252] -- Fix issue with aws cloudwatch module where dimensions and/or namespaces that contain space are not being parsed correctly {pull}13389[13389] -- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426] -- Fix module-level fields in Kubernetes metricsets. {pull}13433[13433] {pull}13544[13544] -- Fix reporting empty events in cloudwatch metricset. {pull}13458[13458] -- Fix `docker.cpu.system.pct` calculation by using the reported number online cpus instead of the number of metrics per cpu. {pull}13691[13691] -- Fix rds metricset dashboard. {pull}13721[13721] -- Ignore prometheus untyped metrics with NaN value. {issue}13750[13750] {pull}13790[13790] -- Change kubernetes.event.message to text. {pull}13964[13964] -- Fix performance counter values for windows/perfmon metricset. {issue}14036[14036] {pull}14039[14039] -- Add FailOnRequired when applying schema and fix metric names in mongodb metrics metricset. {pull}14143[14143] -- Change `server_status_path` default setting for nginx module {issue}13806[13806] {pull}14099[14099] -- Convert increments of 100 nanoseconds/ticks to milliseconds for WriteTime and ReadTime in diskio metricset (Windows) for consistency. {issue}14233[14233] -- Limit some of the error messages to the logs only {issue}14317[14317] {pull}14327[14327] -- Convert indexed ms-since-epoch timestamp fields in `elasticsearch/ml_job` metricset to ints from float64s. {issue}14220[14220] {pull}14222[14222] -- Fix ARN parsing function to work for ELB ARNs. {pull}14316[14316] -- Update azure configuration example. {issue}14224[14224] -- Fix cloudwatch metricset with names and dimensions in config. {issue}14376[14376] {pull}14391[14391] -- Fix marshaling of ms-since-epoch values in `elasticsearch/cluster_stats` metricset. {pull}14378[14378] -- Fix checking tagsFilter using length in cloudwatch metricset. {pull}14525[14525] -- Log bulk failures from bulk API requests to monitoring cluster. {issue}14303[14303] {pull}14356[14356] -- Fixed bug with `elasticsearch/cluster_stats` metricset not recording license expiration date correctly. {issue}14541[14541] {pull}14591[14591] -- Fix regular expression to detect instance name in perfmon metricset. {issue}14273[14273] {pull}14666[14666] -- Vshpere module splits `virtualmachine.host` into `virtualmachine.host.id` and `virtualmachine.host.hostname`. 
{issue}7187[7187] {pull}7213[7213] -- Fixed bug with `elasticsearch/cluster_stats` metricset not recording license ID in the correct field. {pull}14592[14592] -- Fix perfmon expanding counter path/adding counter to query when OS language is not english. {issue}14684[14684] {pull}14800[14800] -- Add extra check on `ignore_non_existent_counters` flag if the PdhExpandWildCardPathW returns no errors but does not expand the counter path successfully in windows/perfmon metricset. {pull}14797[14797] -- Fix rds metricset from reporting same values for different instances. {pull}14702[14702] -- Closing handler after verifying the registry key in diskio metricset. {issue}14683[14683] {pull}14759[14759] -- Fix docker network stats when multiple interfaces are configured. {issue}14586[14586] {pull}14825[14825] -- Fix ListMetrics pagination in aws module. {issue}14926[14926] {pull}14942[14942] -- Fix CPU count in docker/cpu in cases where no `online_cpus` are reported {pull}15070[15070] -- Fix mixed modules loading standard and light metricsets {pull}15011[15011] -- Fix `docker.container.size` fields values {issue}14979[14979] {pull}15224[15224] -- Make `kibana` module more resilient to Kibana unavailability. {issue}15258[15258] {pull}15270[15270] -- Fix panic exception with some unicode strings in perfmon metricset. {issue}15264[15264] -- Make `logstash` module more resilient to Logstash unavailability. {issue}15276[15276] {pull}15306[15306] -- Add username/password in Metricbeat autodiscover hints {pull}15349[15349] *Packetbeat* -- Prevent duplicate packet loss error messages in HTTP events. {pull}10709[10709] -- Fixed a memory leak when using process monitoring under Windows. {pull}12100[12100] -- Improved debug logging efficiency in PGQSL module. {issue}12150[12150] -- Limit memory usage of Redis replication sessions. {issue}12657[12657] -- Fix parsing the extended RCODE in the DNS parser. {pull}12805[12805] -- Fix parsing of the HTTP host header when it contains a port or an IPv6 address. {pull}14215[14215] *Winlogbeat* -- Fix data race affecting config validation at startup. {issue}13005[13005] -- Set host.name to computername in Windows event logs & sysmon. Requires {pull}14407[14407] in libbeat to work {issue}13706[13706] *Functionbeat* -- Fix function name reference for Kinesis streams in CloudFormation templates {pull}11646[11646] -- Fix Cloudwatch logs timestamp to use timestamp of the log record instead of when the record was processed {pull}13291[13291] -- Look for the keystore under the correct path. {pull}13332[13332] ==== Added *Affecting all Beats* -- Add a friendly log message when a request to docker has exceeded the deadline. {pull}15336[15336] -- Decouple Debug logging from fail_on_error logic for rename, copy, truncate processors {pull}12451[12451] -- Add an option to append to existing logs rather than always rotate on start. {pull}11953[11953] -- Add `network` condition to processors for matching IP addresses against CIDRs. {pull}10743[10743] -- Add if/then/else support to processors. {pull}10744[10744] -- Add `community_id` processor for computing network flow hashes. {pull}10745[10745] -- Add output test to kafka output {pull}10834[10834] -- Gracefully shut down on SIGHUP {pull}10704[10704] -- New processor: `copy_fields`. {pull}11303[11303] -- Add `error.message` to events when `fail_on_error` is set in `rename` and `copy_fields` processors. {pull}11303[11303] -- New processor: `truncate_fields`. 
{pull}11297[11297] -- Allow a beat to ship monitoring data directly to an Elasticsearch monitoring cluster. {pull}9260[9260] -- Updated go-seccomp-bpf library to v1.1.0 which updates syscall lists for Linux v5.0. {pull}11394[11394] -- Add `add_observer_metadata` processor. {pull}11394[11394] -- Add `decode_csv_fields` processor. {pull}11753[11753] -- Add `convert` processor for converting data types of fields. {issue}8124[8124] {pull}11686[11686] -- New `extract_array` processor. {pull}11761[11761] -- Add number of goroutines to reported metrics. {pull}12135[12135] -- Add `proxy_disable` output flag to explicitly ignore proxy environment variables. {issue}11713[11713] {pull}12243[12243] -- Processor `add_cloud_metadata` adds fields `cloud.account.id` and `cloud.image.id` for AWS EC2. {pull}12307[12307] -- Add configurable bulk_flush_frequency in kafka output. {pull}12254[12254] -- Add `decode_base64_field` processor for decoding base64 field. {pull}11914[11914] -- Add support for reading the `network.iana_number` field by default to the community_id processor. {pull}12701[12701] -- Add aws overview dashboard. {issue}11007[11007] {pull}12175[12175] -- Add `decompress_gzip_field` processor. {pull}12733[12733] -- Add `timestamp` processor for parsing time fields. {pull}12699[12699] -- Fail with error when autodiscover providers have no defined configs. {pull}13078[13078] -- Add a check so alias creation explicitly fails if there is an index with the same name. {pull}13070[13070] -- Update kubernetes watcher to use official client-go libraries. {pull}13051[13051] -- Add support for unix epoch time values in the `timestamp` processor. {pull}13319[13319] -- add_host_metadata is now GA. {pull}13148[13148] -- Add an `ignore_missing` configuration option to the `drop_fields` processor. {pull}13318[13318] -- Add `registered_domain` processor for deriving the registered domain from a given FQDN. {pull}13326[13326] -- Add support for RFC3339 time zone offsets in JSON output. {pull}13227[13227] -- Add autodetection mode for add_docker_metadata and enable it by default in included configuration files. {pull}13374[13374] -- Added `monitoring.cluster_uuid` setting to associate Beat data with specified ES cluster in Stack Monitoring UI. {pull}13182[13182] -- Add autodetection mode for add_kubernetes_metadata and enable it by default in included configuration files. {pull}13473[13473] -- Add `providers` setting to `add_cloud_metadata` processor. {pull}13812[13812] -- Use less restrictive API to check if template exists. {pull}13847[13847] -- Do not check for alias when setup.ilm.check_exists is false. {pull}13848[13848] -- Add support for numeric time zone offsets in timestamp processor. {pull}13902[13902] -- Add condition to the config file template for add_kubernetes_metadata {pull}14056[14056] -- Marking Central Management deprecated. {pull}14018[14018] -- Add `keep_null` setting to allow Beats to publish null values in events. {issue}5522[5522] {pull}13928[13928] -- Add shared_credential_file option in aws related config for specifying credential file directory. {issue}14157[14157] {pull}14178[14178] -- GA the `script` processor. {pull}14325[14325] -- Add `fingerprint` processor. {issue}11173[11173] {pull}14205[14205] -- Add support for API keys in Elasticsearch outputs.
{pull}14324[14324] -- Ensure that init containers are no longer tailed after they stop {pull}14394[14394] -- Add consumer_lag in Kafka consumergroup metricset {pull}14822[14822] -- Make use of consumer_lag in Kafka dashboard {pull}14863[14863] -- Refactor kubernetes autodiscover to enable different resource based discovery {pull}14738[14738] -- Add `add_id` processor. {pull}14524[14524] -- Enable TLS 1.3 in all beats. {pull}12973[12973] -- Enable DEP (Data Execution Protection) for Windows packages. {pull}15149[15149] -- Spooling to disk creates a lockfile on each platform. {pull}15338[15338] -- Fingerprint processor adds a new xxhash hashing algorithm {pull}15418[15418] *Auditbeat* -- Auditd module: Add `event.outcome` and `event.type` for ECS. {pull}11432[11432] -- Process: Add file hash of process executable. {pull}11722[11722] -- Socket: Add network.transport and network.community_id. {pull}12231[12231] -- Host: Fill top-level host fields. {pull}12259[12259] -- Socket: Add DNS enrichment. {pull}14004[14004] *Filebeat* -- Add more info to message logged when a duplicated symlink file is found {pull}10845[10845] -- Add option to configure docker input with paths {pull}10687[10687] -- Add Netflow module to enrich flow events with geoip data. {pull}10877[10877] -- Set `event.category: network_traffic` for Suricata. {pull}10882[10882] -- Allow custom default settings with autodiscover (for example, use of CRI paths for logs). {pull}12193[12193] -- Allow to disable hints based autodiscover default behavior (fetching all logs). {pull}12193[12193] -- Change Suricata module pipeline to handle `destination.domain` being set if a reverse DNS processor is used. {issue}10510[10510] -- Add the `network.community_id` flow identifier to field to the IPTables, Suricata, and Zeek modules. {pull}11005[11005] -- New Filebeat coredns module to ingest coredns logs. It supports both native coredns deployment and coredns deployment in kubernetes. {pull}11200[11200] -- New module for Cisco ASA logs. {issue}9200[9200] {pull}11171[11171] -- Added support for Cisco ASA fields to the netflow input. {pull}11201[11201] -- Configurable line terminator. {pull}11015[11015] -- Add Filebeat envoyproxy module. {pull}11700[11700] -- Add apache2(httpd) log path (`/var/log/httpd`) to make apache2 module work out of the box on Redhat-family OSes. {issue}11887[11887] {pull}11888[11888] -- Add support to new MongoDB additional diagnostic information {pull}11952[11952] -- New module `panw` for Palo Alto Networks PAN-OS logs. {pull}11999[11999] -- Add RabbitMQ module. {pull}12032[12032] -- Add new `container` input. {pull}12162[12162] -- Add timeouts on communication with docker daemon. {pull}12310[12310] -- `container` and `docker` inputs now support reading of labels and env vars written by docker JSON file logging driver. {issue}8358[8358] -- Add specific date processor to convert timezones so same pipeline can be used when convert_timezone is enabled or disabled. {pull}12253[12253] -- Add MSSQL module {pull}12079[12079] -- Add ISO8601 date parsing support for system module. {pull}12568[12568] {pull}12578[12579] -- Update Kubernetes deployment manifest to use `container` input. {pull}12632[12632] -- Use correct OS path separator in `add_kubernetes_metadata` to support Windows nodes. 
{pull}9205[9205] -- Add support for virtual host in Apache access logs {pull}12778[12778] -- Add support for client addresses with port in Apache error logs {pull}12695[12695] -- Add `google-pubsub` input type for consuming messages from a Google Cloud Pub/Sub topic subscription. {pull}12746[12746] -- Add module for ingesting Cisco IOS logs over syslog. {pull}12748[12748] -- Add module for ingesting Google Cloud VPC flow logs. {pull}12747[12747] -- Report host metadata for Filebeat logs in Kubernetes. {pull}12790[12790] -- Add netflow dashboards based on Logstash netflow. {pull}12857[12857] -- Parse more fields from Elasticsearch slowlogs. {pull}11939[11939] -- Update module pipelines to enrich events with autonomous system fields. {pull}13036[13036] -- Add module for ingesting IBM MQ logs. {pull}8782[8782] -- Add S3 input to retrieve logs from AWS S3 buckets. {pull}12640[12640] {issue}12582[12582] -- Add aws module s3access metricset. {pull}13170[13170] {issue}12880[12880] -- Update Suricata module to populate ECS DNS fields and handle EVE DNS version 2. {issue}13320[13320] {pull}13329[13329] -- Update PAN-OS fileset to use the ECS NAT fields. {issue}13320[13320] {pull}13330[13330] -- Add fields to the Zeek DNS fileset for ECS DNS. {issue}13320[13320] {pull}13324[13324] -- Add container image in Kubernetes metadata {pull}13356[13356] {issue}12688[12688] -- Add timezone information to apache error fileset. {issue}12772[12772] {pull}13304[13304] -- Add module for ingesting Cisco FTD logs over syslog. {pull}13286[13286] -- Update CoreDNS module to populate ECS DNS fields. {issue}13320[13320] {pull}13505[13505] -- Parse query steps in PostgreSQL slowlogs. {issue}13496[13496] {pull}13701[13701] -- Add filebeat azure module with activitylogs, auditlogs, signinlogs filesets. {pull}13776[13776] {pull}14033[14033] -- Add support to set the document id in the json reader. {pull}5844[5844] -- Add input httpjson. {issue}13545[13545] {pull}13546[13546] -- Filebeat Netflow input: Remove beta label. {pull}13858[13858] -- Remove `event.timezone` from events that don't need it in some modules that support log formats with and without timezones. {pull}13918[13918] -- Add ExpandEventListFromField config option in the kafka input. {pull}13965[13965] -- Add ELB fileset to AWS module. {pull}14020[14020] -- Add module for MISP (Malware Information Sharing Platform). {pull}13805[13805] -- Add `source.bytes` and `source.packets` for uni-directional netflow events. {pull}14111[14111] -- Add support for gzipped files in S3 input. {pull}13980[13980] -- Add support for all the ObjectCreated events in S3 input. {pull}14077[14077] -- Add Kibana Dashboard for MISP module. {pull}14147[14147] -- Add JSON options to autodiscover hints {pull}14208[14208] -- Add more filesets to Zeek module. {pull}14150[14150] -- Add `index` option to all inputs to directly set a per-input index value. {pull}14010[14010] -- Remove beta flag for some filebeat modules. {pull}14374[14374] -- Add support for http hostname in nginx filebeat module. {pull}14505[14505] -- Add attack_pattern_kql field to MISP threat indicators. {pull}14470[14470] -- Add fileset to the Zeek module for the intel.log. {pull}14404[14404] -- Add vpc flow log fileset to AWS module. {issue}13880[13880] {pull}14345[14345] -- New fileset googlecloud/firewall for ingesting Google Cloud Firewall logs. {pull}14553[14553] -- Add document for Filebeat input httpjson. {pull}14602[14602] -- Add more configuration options to the Netflow module. 
{pull}14628[14628] -- Add dashboards to the CEF module (ported from the Logstash ArcSight module). {pull}14342[14342] -- Fix timezone parsing in haproxy pipeline. {pull}14755[14755] -- Add module for ActiveMQ. {pull}14840[14840] -- Add dashboards for the ActiveMQ Filebeat module. {pull}14880[14880] -- Add STAN Metricbeat module. {pull}14839[14839] -- Add new fileset googlecloud/audit for ingesting Google Cloud Audit logs. {pull}15200[15200] -- Add expand_event_list_from_field support in s3 input for reading json format AWS logs. {issue}15357[15357] {pull}15370[15370] -- Add azure-eventhub input which will use the azure eventhub go sdk. {issue}14092[14092] {pull}14882[14882] -- Expose more metrics of harvesters (e.g. `read_offset`, `start_time`). {pull}13395[13395] -- Integrate the azure-eventhub with filebeat azure module (replace the kafka input). {pull}15480[15480] -- Include log.source.address for unparseable syslog messages. {issue}13268[13268] {pull}15453[15453] -- Release aws elb fileset as GA. {pull}15426[15426] {issue}15380[15380] -- Release aws s3access fileset as GA. {pull}15431[15431] {issue}15430[15430] -- Add cloudtrail fileset to AWS module. {issue}14657[14657] {pull}15227[15227] *Heartbeat* -- Add non-privileged icmp on linux and darwin (mac). {pull}13795[13795] {issue}11498[11498] -- Enable `add_observer_metadata` processor in default config. {pull}11394[11394] -- Record HTTP body metadata and optionally contents in `http.response.body.*` fields. {pull}13022[13022] -- Add `monitor.timespan` field for optimized queries in kibana. {pull}13672[13672] -- Allow `hosts` to be used to configure http monitors {pull}13703[13703] -- google-pubsub input: ACK pub/sub message when acknowledged by publisher. {issue}13346[13346] {pull}14715[14715] -- Remove Beta label from google-pubsub input. {issue}13346[13346] {pull}14715[14715] +- Allow a list of status codes for HTTP checks. {pull}15587[15587] + *Journalbeat* -- Add `index` option to all inputs to directly set a per-input index value. {issue}15063[15063] {pull}15071[15071] *Metricbeat* -- Add AWS SQS metricset. {pull}10684[10684] {issue}10053[10053] -- Add AWS s3_request metricset. {pull}10949[10949] {issue}10055[10055] -- Add s3_daily_storage metricset. {pull}10940[10940] {issue}10055[10055] -- Add `coredns` metricbeat module. {pull}10585[10585] -- Add SSL support for Metricbeat HTTP server. {pull}11482[11482] {issue}11457[11457] -- The `elasticsearch.index` metricset (with `xpack.enabled: true`) now collects `refresh.external_total_time_in_millis` fields from Elasticsearch. {pull}11616[11616] -- Allow module configurations to have variants {pull}9118[9118] -- Add `timeseries.instance` field calculation. {pull}10293[10293] -- Added new disk states and raid level to the system/raid metricset. {pull}11613[11613] -- Added `path_name` and `start_name` to service metricset on windows module {issue}8364[8364] {pull}11877[11877] -- Add check on object name in the counter path if the instance name is missing {issue}6528[6528] {pull}11878[11878] -- Add AWS cloudwatch metricset. {pull}11798[11798] {issue}11734[11734] -- Add `regions` in aws module config to specify target regions for querying cloudwatch metrics. {issue}11932[11932] {pull}11956[11956] -- Keep `etcd` follower members from reporting `leader` metricset events {pull}12004[12004] -- Add overview dashboard to Consul module {pull}10665[10665] -- New fields were added in the mysql/status metricset.
{pull}12227[12227] -- Add Kubernetes metricset `proxy`. {pull}12312[12312] -- Add Kubernetes proxy dashboard to Kubernetes module {pull}12734[12734] -- Always report Pod UID in the `pod` metricset. {pull}12345[12345] -- Add Vsphere Virtual Machine operating system to `os` field in Vsphere virtualmachine module. {pull}12391[12391] -- Add validation for elasticsearch and kibana modules' metricsets when xpack.enabled is set to true. {pull}12386[12386] -- Add CockroachDB module. {pull}12467[12467] -- Add support for metricbeat modules based on existing modules (a.k.a. light modules) {issue}12270[12270] {pull}12465[12465] -- Add a system/entropy metricset {pull}12450[12450] -- Add kubernetes metricset `controllermanager` {pull}12409[12409] -- Add Kubernetes controller manager dashboard to Kubernetes module {pull}12744[12744] -- Allow redis URL format in redis hosts config. {pull}12408[12408] -- Add tags into ec2 metricset. {issue}12263[12263] {pull}12372[12372] -- Add metrics to kubernetes apiserver metricset. {pull}12922[12922] -- Add kubernetes metricset `scheduler` {pull}12521[12521] -- Add Kubernetes scheduler dashboard to Kubernetes module {pull}12749[12749] -- Add `beat` module. {pull}12181[12181] {pull}12615[12615] -- Collect tags for cloudwatch metricset in aws module. {issue}12263[12263] {pull}12480[12480] -- Add AWS RDS metricset. {pull}11620[11620] {issue}10054[10054] -- Add Oracle Module {pull}11890[11890] -- Add Oracle Tablespaces Dashboard {pull}12736[12736] -- Collect client provided name for rabbitmq connection. {issue}12851[12851] {pull}12852[12852] -- Add support to load default aws config file to get credentials. {pull}12727[12727] {issue}12708[12708] -- Add statistic option into cloudwatch metricset. {issue}12370[12370] {pull}12840[12840] -- Add support for kubernetes cronjobs {pull}13001[13001] -- Add cgroup memory stats to docker/memory metricset {pull}12916[12916] -- Add AWS elb metricset. {pull}12952[12952] {issue}11701[11701] -- Add AWS ebs metricset. {pull}13167[13167] {issue}11699[11699] -- Add `metricset.period` field with the configured fetching period. {pull}13242[13242] {issue}12616[12616] -- Add rate metrics for ec2 metricset. {pull}13203[13203] -- Refresh the list of perf counters at every fetch {issue}13091[13091] -- Add Performance metricset to Oracle module {pull}12547[12547] -- Add proc/vmstat data to the system/memory metricset on linux {pull}13322[13322] -- Use DefaultMetaGeneratorConfig in MetadataEnrichers to initialize configurations {pull}13414[13414] -- Add module for statsd. {pull}13109[13109] -- Add support for NATS version 2. {pull}13601[13601] -- Add `docker.cpu.*.norm.pct` metrics for `cpu` metricset of Docker Metricbeat module. {pull}13695[13695] -- Add `instance` label by default when using Prometheus collector. {pull}13737[13737] -- Add azure module. {pull}13196[13196] {pull}13859[13859] {pull}13988[13988] -- Add Apache Tomcat module {pull}13491[13491] -- Add ECS `container.id` and `container.runtime` to kubernetes `state_container` metricset. {pull}13884[13884] -- Add `job` label by default when using Prometheus collector. {pull}13878[13878] -- Add `state_resourcequota` metricset for Kubernetes module. {pull}13693[13693] -- Add tags filter in ec2 metricset. {pull}13872[13872] {issue}13145[13145] -- Add cloud.account.id and cloud.account.name into events from aws module. {issue}13551[13551] {pull}13558[13558] -- Add `metrics_path` as known hint for autodiscovery {pull}13996[13996] -- Leverage KUBECONFIG when creating k8s client.
{pull}13916[13916] -- Add ability to filter by tags for cloudwatch metricset. {pull}13758[13758] {issue}13145[13145] -- Release cloudwatch, s3_daily_storage, s3_request, sqs and rds metricset as GA. {pull}14114[14114] {issue}14059[14059] -- Add Oracle overview dashboard {pull}14021[14021] -- Release CoreDNS module as GA. {pull}14308[14308] -- Release CouchDB module as GA. {pull}14300[14300] -- Add `elasticsearch/enrich` metricset. {pull}14243[14243] {issue}14221[14221] -- Add support for Application ELB and Network ELB. {pull}14123[14123] {issue}13538[13538] {issue}13539[13539] -- Release aws ebs metricset as GA. {pull}14312[14312] {issue}14060[14060] -- Add `connection.state` field for RabbitMQ module. {pull}13981[13981] -- Add more TCP states to Metricbeat system socket_summary. {pull}14347[14347] -- Add Kafka JMX metricsets. {pull}14330[14330] -- Add metrics to envoyproxy server metricset and support for envoy proxy 1.12. {pull}14416[14416] {issue}13642[13642] -- Release kubernetes modules `controllermanager`, `scheduler`, `proxy`, `state_cronjob` and `state_resourcequota` as GA. {pull}14584[14584] -- Add module for ActiveMQ. {pull}14580[14580] -- Enable script processor. {pull}14711[14711] -- Enable wildcard for cloudwatch metricset namespace. {pull}14971[14971] {issue}14965[14965] -- Add `kube-state-metrics` `state_service` metrics for kubernetes module. {pull}14794[14794] -- Add `kube-state-metrics` `state_persistentvolume` metrics for kubernetes module. {pull}14859[14859] -- Add `kube-state-metrics` `state_persistentvolumeclaim` metrics for kubernetes module. {pull}15066[15066] -- Add usage metricset in aws modules. {pull}14925[14925] {issue}14935[14935] -- Add billing metricset in aws modules. {pull}14801[14801] {issue}14934[14934] -- Add AWS SNS metricset. {pull}14946[14946] -- Add overview dashboard for AWS SNS module {pull}14977[14977] -- Add `index` option to all modules to specify a module-specific output index. {pull}15100[15100] -- Add a `system/service` metricset for systemd data. {pull}14206[14206] +- Move the windows pdh implementation from perfmon to a shared location in order for future modules/metricsets to make use of. {pull}15503[15503] +- Add lambda metricset in aws module. {pull}15260[15260] - Expand data for the `system/memory` metricset {pull}15492[15492] - Add azure `storage` metricset in order to retrieve metric values for storage accounts. {issue}14548[14548] {pull}15342[15342] - Add cost warnings for the azure module. {pull}15356[15356] - Add DynamoDB AWS Metricbeat light module {pull}15097[15097] - Release elb module as GA. {pull}15485[15485] - Add a `system/network_summary` metricset {pull}15196[15196] +- Add mesh metricset for Istio Metricbeat module{pull}15535[15535] +- Make the `system/cpu` metricset collect normalized CPU metrics by default. {issue}15618[15618] {pull}15729[15729] *Packetbeat* -- Update DNS protocol plugin to produce events with ECS fields for DNS. {issue}13320[13320] {pull}13354[13354] - *Functionbeat* -- New options to configure roles and VPC. {pull}11779[11779] -- Export automation templates used to create functions. {pull}11923[11923] -- Configurable Amazon endpoint. {pull}12369[12369] -- Add timeout option to reference configuration. {pull}13351[13351] -- Configurable tags for Lambda functions. {pull}13352[13352] -- Add input for Cloudwatch logs through Kinesis. {pull}13317[13317] -- Enable Logstash output. {pull}13345[13345] -- Make `bulk_max_size` configurable in outputs. 
{pull}13493[13493] -- Add `index` option to all functions to directly set a per-function index value. {issue}15064[15064] {pull}15101[15101] -- Add monitoring info about triggered functions. {pull}14876[14876] -- Add Google Cloud Platform support. {pull}13598[13598] *Winlogbeat* -- Add support for reading from .evtx files. {issue}4450[4450] -- Add support for event ID 4634 and 4647 to the Security module. {pull}12906[12906] -- Add `network.community_id` to Sysmon network events (event ID 3). {pull}13034[13034] -- Add `event.module` to Winlogbeat modules. {pull}13047[13047] -- Add `event.category: process` and `event.type: process_start/process_end` to Sysmon process events (event ID 1 and 5). {pull}13047[13047] -- Add support for event ID 4672 to the Security module. {pull}12975[12975] -- Add support for event ID 22 (DNS query) to the Sysmon module. {pull}12960[12960] -- Add certain winlog.event_data.* fields to the index template. {issue}13700[13700] {pull}13704[13704] -- Fill `event.provider`. {pull}13937[13937] -- Add support for user management events to the Security module. {pull}13530[13530] -- GA the Winlogbeat `sysmon` module. {pull}14326[14326] -- Add support for event ID 4688 & 4689 (Process create & exit) to the Security module. {issue}14038[14038] ==== Deprecated @@ -639,8 +110,6 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d *Filebeat* -- `docker` input is deprecated in favour `container`. {pull}12162[12162] -- `postgresql.log.timestamp` field is deprecated in favour of `@timestamp`. {pull}12338[12338] *Heartbeat* @@ -648,7 +117,6 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d *Metricbeat* -- `kubernetes.container.id` field for `state_container` is deprecated in favour of ECS `container.id` and `container.runtime`. {pull}13884[13884] *Packetbeat* diff --git a/Jenkinsfile b/Jenkinsfile index 89753eec3e25..3eb0d1ff7dda 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -1,11 +1,6 @@ #!/usr/bin/env groovy -library identifier: 'apm@master', -retriever: modernSCM( - [$class: 'GitSCMSource', - credentialsId: 'f94e9298-83ae-417e-ba91-85c279771570', - id: '37cf2c00-2cc7-482e-8c62-7bbffef475e2', - remote: 'git@github.com:elastic/apm-pipeline-library.git']) +@Library('apm@current') _ pipeline { agent { label 'ubuntu && immutable' } @@ -36,12 +31,7 @@ pipeline { stage('Checkout') { options { skipDefaultCheckout() } steps { - //TODO we need to configure the library in Jenkins to use privileged methods. - //gitCheckout(basedir: "${BASE_DIR}") - dir("${BASE_DIR}"){ - checkout scm - githubEnv() - } + gitCheckout(basedir: "${BASE_DIR}") stash allowEmpty: true, name: 'source', useDefaultExcludes: false script { env.GO_VERSION = readFile("${BASE_DIR}/.go-version").trim() diff --git a/NOTICE.txt b/NOTICE.txt index f61138aaf82d..df155af2a022 100644 --- a/NOTICE.txt +++ b/NOTICE.txt @@ -1255,6 +1255,99 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------- +Dependency: github.com/eclipse/paho.mqtt.golang +Revision: 0d940dd29fd24f905cd16b28b1209b4977b97e1a +License type (autodetected): EPL-1.0 +./vendor/github.com/eclipse/paho.mqtt.golang/LICENSE: +-------------------------------------------------------------------- +Eclipse Public License - v 1.0 + +THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). 
ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. + +1. DEFINITIONS + +"Contribution" means: + +a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and + +b) in the case of each subsequent Contributor: + +i) changes to the Program, and + +ii) additions to the Program; + +where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program. + +"Contributor" means any person or entity that distributes the Program. + +"Licensed Patents" mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. + +"Program" means the Contributions distributed in accordance with this Agreement. + +"Recipient" means anyone who receives the Program under this Agreement, including all Contributors. + +2. GRANT OF RIGHTS + +a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form. + +b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. + +c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. + +d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. + +3. 
REQUIREMENTS + +A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that: + +a) it complies with the terms and conditions of this Agreement; and + +b) its license agreement: + +i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; + +ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; + +iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and + +iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange. + +When the Program is made available in source code form: + +a) it must be made available under this Agreement; and + +b) a copy of this Agreement must be included with each copy of the Program. + +Contributors may not remove or alter any copyright notices contained within the Program. + +Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. + +4. COMMERCIAL DISTRIBUTION + +Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. + +For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. 
Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. + +5. NO WARRANTY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. + +6. DISCLAIMER OF LIABILITY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +7. GENERAL + +If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. + +If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. + +All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. + +Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. 
The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. + +This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. +-------------------------------------------------------------------- Dependency: github.com/elastic/ecs Version: v1.4.0 Revision: cc4b36eebec29975f57cd0475c3987c9bde5c15a @@ -2100,177 +2193,410 @@ NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -------------------------------------------------------------------- -Dependency: github.com/gofrs/flock -Revision: 5135e617513b1e6e205a3a89b042249dee6730c8 -License type (autodetected): BSD-3-Clause -./vendor/github.com/gofrs/flock/LICENSE: +Dependency: github.com/godror/godror +Version: v0.10.4 +Revision: 0123d49bd73e1bed106ac8b6af67f943fbbf06e2 +License type (autodetected): Apache-2.0 +./vendor/github.com/godror/godror/LICENSE.md: -------------------------------------------------------------------- -Copyright (c) 2015, Tim Heckman -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: +Apache License 2.0 -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. -* Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. +-------------------------------------------------------------------- +Dependency: github.com/godror/godror/odpi +License type (autodetected): UPL-1.0 +./vendor/github.com/godror/godror/odpi/LICENSE.md: +-------------------------------------------------------------------- +Copyright (c) 2016, 2018 Oracle and/or its affiliates. All rights reserved. -* Neither the name of linode-netint nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. +This program is free software: you can modify it and/or redistribute it under +the terms of: -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +(i) the Universal Permissive License v 1.0 or at your option, any + later version (); and/or --------------------------------------------------------------------- -Dependency: github.com/gofrs/uuid -Version: 3.1.1 -Revision: 47cd1dca1a6e7f807d5a492bd7e7f41d0855b5a1 -License type (autodetected): MIT -./vendor/github.com/gofrs/uuid/LICENSE: --------------------------------------------------------------------- -Copyright (C) 2013-2018 by Maxim Bublis +(ii) the Apache License v 2.0. () -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. +The Universal Permissive License (UPL), Version 1.0 +=================================================== -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +Subject to the condition set forth below, permission is hereby granted to any +person obtaining a copy of this software, associated documentation and/or data +(collectively the "Software"), free of charge and under any and all copyright +rights in the Software, and any and all patent rights owned or freely +licensable by each licensor hereunder covering either (i) the unmodified +Software as contributed to or provided by such licensor, or (ii) the Larger +Works (as defined below), to deal in both --------------------------------------------------------------------- -Dependency: github.com/gogo/protobuf -Revision: 4c00d2f19fb91be5fecd8681fa83450a2a979e69 -License type (autodetected): BSD-3-Clause -./vendor/github.com/gogo/protobuf/LICENSE: --------------------------------------------------------------------- -Copyright (c) 2013, The GoGo Authors. All rights reserved. 
+(a) the Software, and -Protocol Buffers for Go with Gadgets +(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if + one is included with the Software (each a "Larger Work" to which the + Software is contributed by such licensors), -Go support for Protocol Buffers - Google's data interchange format +without restriction, including without limitation the rights to copy, create +derivative works of, display, perform, and distribute the Software and make, +use, sell, offer for sale, import, export, have made, and have sold the +Software and the Larger Work(s), and to sublicense the foregoing rights on +either these or other terms. -Copyright 2010 The Go Authors. All rights reserved. -https://github.com/golang/protobuf +This license is subject to the following condition: -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: +The above copyright notice and either this complete permission notice or at a +minimum a reference to the UPL must be included in all copies or substantial +portions of the Software. - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+Apache License +============== --------------------------------------------------------------------- -Dependency: github.com/golang/glog -Revision: 23def4e6c14b4da8ac2ed8007337bc5eb5007998 -License type (autodetected): Apache-2.0 -./vendor/github.com/golang/glog/LICENSE: --------------------------------------------------------------------- -Apache License 2.0 +Version 2.0, January 2004 +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION --------------------------------------------------------------------- -Dependency: github.com/golang/protobuf -Revision: 6c65a5562fc06764971b7c5d05c76c75e84bdbf7 -License type (autodetected): BSD-3-Clause -./vendor/github.com/golang/protobuf/LICENSE: --------------------------------------------------------------------- -Copyright 2010 The Go Authors. All rights reserved. +1. **Definitions**. -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: + "License" shall mean the terms and conditions for use, reproduction, and + distribution as defined by Sections 1 through 9 of this document. - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. + "Licensor" shall mean the copyright owner or entity authorized by the + copyright owner that is granting the License. -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + "Legal Entity" shall mean the union of the acting entity and all other + entities that control, are controlled by, or are under common control with + that entity. For the purposes of this definition, "control" means (i) the + power, direct or indirect, to cause the direction or management of such + entity, whether by contract or otherwise, or (ii) ownership of fifty + percent (50%) or more of the outstanding shares, or (iii) beneficial + ownership of such entity. + "You" (or "Your") shall mean an individual or Legal Entity exercising + permissions granted by this License. --------------------------------------------------------------------- -Dependency: github.com/golang/snappy -Revision: 553a641470496b2327abcac10b36396bd98e45c9 -License type (autodetected): BSD-3-Clause -./vendor/github.com/golang/snappy/LICENSE: --------------------------------------------------------------------- -Copyright (c) 2011 The Snappy-Go Authors. All rights reserved. 
+ "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation source, + and configuration files. -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: + "Object" form shall mean any form resulting from mechanical transformation + or translation of a Source form, including but not limited to compiled + object code, generated documentation, and conversions to other media types. - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. + "Work" shall mean the work of authorship, whether in Source or Object form, + made available under the License, as indicated by a copyright notice that + is included in or attached to the work (an example is provided in the + Appendix below). -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + "Derivative Works" shall mean any work, whether in Source or Object form, + that is based on (or derived from) the Work and for which the editorial + revisions, annotations, elaborations, or other modifications represent, as + a whole, an original work of authorship. For the purposes of this License, + Derivative Works shall not include works that remain separable from, or + merely link (or bind by name) to the interfaces of, the Work and Derivative + Works thereof. + + "Contribution" shall mean any work of authorship, including the original + version of the Work and any modifications or additions to that Work or + Derivative Works thereof, that is intentionally submitted to Licensor for + inclusion in the Work by the copyright owner or by an individual or Legal + Entity authorized to submit on behalf of the copyright owner. For the + purposes of this definition, "submitted" means any form of electronic, + verbal, or written communication sent to the Licensor or its + representatives, including but not limited to communication on electronic + mailing lists, source code control systems, and issue tracking systems that + are managed by, or on behalf of, the Licensor for the purpose of discussing + and improving the Work, but excluding communication that is conspicuously + marked or otherwise designated in writing by the copyright owner as "Not a + Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity on + behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. **Grant of Copyright License.** Subject to the terms and conditions of this + License, each Contributor hereby grants to You a perpetual, worldwide, + non-exclusive, no-charge, royalty-free, irrevocable copyright license to + reproduce, prepare Derivative Works of, publicly display, publicly perform, + sublicense, and distribute the Work and such Derivative Works in Source or + Object form. + +3. 
**Grant of Patent License.** Subject to the terms and conditions of this + License, each Contributor hereby grants to You a perpetual, worldwide, + non-exclusive, no-charge, royalty-free, irrevocable (except as stated in + this section) patent license to make, have made, use, offer to sell, sell, + import, and otherwise transfer the Work, where such license applies only to + those patent claims licensable by such Contributor that are necessarily + infringed by their Contribution(s) alone or by combination of their + Contribution(s) with the Work to which such Contribution(s) was submitted. + If You institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work or a + Contribution incorporated within the Work constitutes direct or + contributory patent infringement, then any patent licenses granted to You + under this License for that Work shall terminate as of the date such + litigation is filed. + +4. **Redistribution.** You may reproduce and distribute copies of the Work or + Derivative Works thereof in any medium, with or without modifications, and + in Source or Object form, provided that You meet the following conditions: + + 1. You must give any other recipients of the Work or Derivative Works a + copy of this License; and + + 2. You must cause any modified files to carry prominent notices stating + that You changed the files; and + + 3. You must retain, in the Source form of any Derivative Works that You + distribute, all copyright, patent, trademark, and attribution notices + from the Source form of the Work, excluding those notices that do not + pertain to any part of the Derivative Works; and + + 4. If the Work includes a "NOTICE" text file as part of its distribution, + then any Derivative Works that You distribute must include a readable + copy of the attribution notices contained within such NOTICE file, + excluding those notices that do not pertain to any part of the + Derivative Works, in at least one of the following places: within a + NOTICE text file distributed as part of the Derivative Works; within + the Source form or documentation, if provided along with the Derivative + Works; or, within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents of the + NOTICE file are for informational purposes only and do not modify the + License. You may add Your own attribution notices within Derivative + Works that You distribute, alongside or as an addendum to the NOTICE + text from the Work, provided that such additional attribution notices + cannot be construed as modifying the License. + + You may add Your own copyright statement to Your modifications and may + provide additional or different license terms and conditions for use, + reproduction, or distribution of Your modifications, or for any such + Derivative Works as a whole, provided Your use, reproduction, and + distribution of the Work otherwise complies with the conditions stated + in this License. + +5. **Submission of Contributions.** Unless You explicitly state otherwise, any + Contribution intentionally submitted for inclusion in the Work by You to + the Licensor shall be under the terms and conditions of this License, + without any additional terms or conditions. Notwithstanding the above, + nothing herein shall supersede or modify the terms of any separate license + agreement you may have executed with Licensor regarding such Contributions. + +6. 
**Trademarks.** This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, except + as required for reasonable and customary use in describing the origin of + the Work and reproducing the content of the NOTICE file. + +7. **Disclaimer of Warranty.** Unless required by applicable law or agreed to + in writing, Licensor provides the Work (and each Contributor provides its + Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + KIND, either express or implied, including, without limitation, any + warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or + FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for + determining the appropriateness of using or redistributing the Work and + assume any risks associated with Your exercise of permissions under this + License. + +8. **Limitation of Liability.** In no event and under no legal theory, whether + in tort (including negligence), contract, or otherwise, unless required by + applicable law (such as deliberate and grossly negligent acts) or agreed to + in writing, shall any Contributor be liable to You for damages, including + any direct, indirect, special, incidental, or consequential damages of any + character arising as a result of this License or out of the use or + inability to use the Work (including but not limited to damages for loss of + goodwill, work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor has been + advised of the possibility of such damages. + +9. **Accepting Warranty or Additional Liability.** While redistributing the + Work or Derivative Works thereof, You may choose to offer, and charge a fee + for, acceptance of support, warranty, indemnity, or other liability + obligations and/or rights consistent with this License. However, in + accepting such obligations, You may act only on Your own behalf and on Your + sole responsibility, not on behalf of any other Contributor, and only if + You agree to indemnify, defend, and hold each Contributor harmless for any + liability incurred by, or claims asserted against, such Contributor by + reason of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +-------------------------------------------------------------------- +Dependency: github.com/gofrs/flock +Revision: 5135e617513b1e6e205a3a89b042249dee6730c8 +License type (autodetected): BSD-3-Clause +./vendor/github.com/gofrs/flock/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2015, Tim Heckman +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +* Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +* Neither the name of linode-netint nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +-------------------------------------------------------------------- +Dependency: github.com/gofrs/uuid +Version: 3.1.1 +Revision: 47cd1dca1a6e7f807d5a492bd7e7f41d0855b5a1 +License type (autodetected): MIT +./vendor/github.com/gofrs/uuid/LICENSE: +-------------------------------------------------------------------- +Copyright (C) 2013-2018 by Maxim Bublis + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +-------------------------------------------------------------------- +Dependency: github.com/gogo/protobuf +Revision: 4c00d2f19fb91be5fecd8681fa83450a2a979e69 +License type (autodetected): BSD-3-Clause +./vendor/github.com/gogo/protobuf/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2013, The GoGo Authors. All rights reserved. + +Protocol Buffers for Go with Gadgets + +Go support for Protocol Buffers - Google's data interchange format + +Copyright 2010 The Go Authors. All rights reserved. +https://github.com/golang/protobuf + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +-------------------------------------------------------------------- +Dependency: github.com/golang/glog +Revision: 23def4e6c14b4da8ac2ed8007337bc5eb5007998 +License type (autodetected): Apache-2.0 +./vendor/github.com/golang/glog/LICENSE: +-------------------------------------------------------------------- +Apache License 2.0 + + +-------------------------------------------------------------------- +Dependency: github.com/golang/protobuf +Revision: 6c65a5562fc06764971b7c5d05c76c75e84bdbf7 +License type (autodetected): BSD-3-Clause +./vendor/github.com/golang/protobuf/LICENSE: +-------------------------------------------------------------------- +Copyright 2010 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +-------------------------------------------------------------------- +Dependency: github.com/golang/snappy +Revision: 553a641470496b2327abcac10b36396bd98e45c9 +License type (autodetected): BSD-3-Clause +./vendor/github.com/golang/snappy/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2011 The Snappy-Go Authors. All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, @@ -2464,6 +2790,35 @@ License type (autodetected): Apache-2.0 Apache License 2.0 +-------------------------------------------------------------------- +Dependency: github.com/gorilla/websocket +Revision: c3e18be99d19e6b3e8f1559eea2c161a665c4b6b +License type (autodetected): BSD-2-Clause +./vendor/github.com/gorilla/websocket/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2013 The Gorilla WebSocket Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + + Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + -------------------------------------------------------------------- Dependency: github.com/grpc-ecosystem/go-grpc-prometheus Revision: ae0d8660c5f2108ca70a3776dbe0fb53cf79f1da @@ -6128,74 +6483,12 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -------------------------------------------------------------------- -Dependency: google.golang.org/api -Version: v0.14.0 -Revision: 8a410c21381766a810817fd6200fce8838ecb277 -License type (autodetected): BSD-3-Clause -./vendor/google.golang.org/api/LICENSE: --------------------------------------------------------------------- -Copyright (c) 2011 Google Inc. All rights reserved. 
- -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - --------------------------------------------------------------------- -Dependency: google.golang.org/api/googleapi/internal/uritemplates -Version: v0.7.0 -Revision: 02490b97dff7cfde1995bd77de808fd27053bc87 -License type (autodetected): MIT -./vendor/google.golang.org/api/googleapi/internal/uritemplates/LICENSE: --------------------------------------------------------------------- -Copyright (c) 2013 Joshua Tacoma - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of -the Software, and to permit persons to whom the Software is furnished to do so, -subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS -FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR -COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER -IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - --------------------------------------------------------------------- -Dependency: google.golang.org/api/internal/third_party/uritemplates -Version: v0.14.0 -Revision: 8a410c21381766a810817fd6200fce8838ecb277 +Dependency: golang.org/x/xerrors +Revision: 9bdfabe68543c54f90421aeb9a60ef8061b5b544 License type (autodetected): BSD-3-Clause -./vendor/google.golang.org/api/internal/third_party/uritemplates/LICENSE: +./vendor/golang.org/x/xerrors/LICENSE: -------------------------------------------------------------------- -Copyright (c) 2013 Joshua Tacoma. All rights reserved. +Copyright (c) 2019 The Go Authors. All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are @@ -6216,273 +6509,136 @@ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - --------------------------------------------------------------------- -Dependency: google.golang.org/appengine -Version: v1.6.1 -Revision: b2f4a3cf3c67576a2ee09e1fe62656a5086ce880 -License type (autodetected): Apache-2.0 -./vendor/google.golang.org/appengine/LICENSE: --------------------------------------------------------------------- -Apache License 2.0 - - --------------------------------------------------------------------- -Dependency: google.golang.org/genproto -Revision: 83cc0476cb11ea0da33dacd4c6354ab192de6fe6 -License type (autodetected): Apache-2.0 -./vendor/google.golang.org/genproto/LICENSE: --------------------------------------------------------------------- -Apache License 2.0 - - --------------------------------------------------------------------- -Dependency: google.golang.org/grpc -Version: v1.25.1 -Revision: 1a3960e4bd028ac0cec0a2afd27d7d8e67c11514 -License type (autodetected): Apache-2.0 -./vendor/google.golang.org/grpc/LICENSE: --------------------------------------------------------------------- -Apache License 2.0 - - --------------------------------------------------------------------- -Dependency: gopkg.in/goracle.v2 -Revision: 3222d7159b45fce95150f06a57e1bcc2868108d3 -License type (autodetected): Apache-2.0 -./vendor/gopkg.in/goracle.v2/LICENSE.md: --------------------------------------------------------------------- -Apache License 2.0 - - --------------------------------------------------------------------- -Dependency: gopkg.in/goracle.v2/odpi -License type (autodetected): UPL-1.0 -./vendor/gopkg.in/goracle.v2/odpi/LICENSE.md: --------------------------------------------------------------------- -Copyright (c) 2016, 2018 Oracle and/or its affiliates. All rights reserved. - -This program is free software: you can modify it and/or redistribute it under -the terms of: - -(i) the Universal Permissive License v 1.0 or at your option, any - later version (); and/or - -(ii) the Apache License v 2.0. 
() - - -The Universal Permissive License (UPL), Version 1.0 -=================================================== - -Subject to the condition set forth below, permission is hereby granted to any -person obtaining a copy of this software, associated documentation and/or data -(collectively the "Software"), free of charge and under any and all copyright -rights in the Software, and any and all patent rights owned or freely -licensable by each licensor hereunder covering either (i) the unmodified -Software as contributed to or provided by such licensor, or (ii) the Larger -Works (as defined below), to deal in both - -(a) the Software, and - -(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if - one is included with the Software (each a "Larger Work" to which the - Software is contributed by such licensors), - -without restriction, including without limitation the rights to copy, create -derivative works of, display, perform, and distribute the Software and make, -use, sell, offer for sale, import, export, have made, and have sold the -Software and the Larger Work(s), and to sublicense the foregoing rights on -either these or other terms. - -This license is subject to the following condition: - -The above copyright notice and either this complete permission notice or at a -minimum a reference to the UPL must be included in all copies or substantial -portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - - -Apache License -============== - -Version 2.0, January 2004 - -TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - -1. **Definitions**. - - "License" shall mean the terms and conditions for use, reproduction, and - distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by the - copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all other - entities that control, are controlled by, or are under common control with - that entity. For the purposes of this definition, "control" means (i) the - power, direct or indirect, to cause the direction or management of such - entity, whether by contract or otherwise, or (ii) ownership of fifty - percent (50%) or more of the outstanding shares, or (iii) beneficial - ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity exercising - permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation source, - and configuration files. - - "Object" form shall mean any form resulting from mechanical transformation - or translation of a Source form, including but not limited to compiled - object code, generated documentation, and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or Object form, - made available under the License, as indicated by a copyright notice that - is included in or attached to the work (an example is provided in the - Appendix below). 
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - "Derivative Works" shall mean any work, whether in Source or Object form, - that is based on (or derived from) the Work and for which the editorial - revisions, annotations, elaborations, or other modifications represent, as - a whole, an original work of authorship. For the purposes of this License, - Derivative Works shall not include works that remain separable from, or - merely link (or bind by name) to the interfaces of, the Work and Derivative - Works thereof. +-------------------------------------------------------------------- +Dependency: google.golang.org/api +Version: v0.7.0 +Revision: 02490b97dff7cfde1995bd77de808fd27053bc87 +License type (autodetected): BSD-3-Clause +./vendor/google.golang.org/api/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2011 Google Inc. All rights reserved. - "Contribution" shall mean any work of authorship, including the original - version of the Work and any modifications or additions to that Work or - Derivative Works thereof, that is intentionally submitted to Licensor for - inclusion in the Work by the copyright owner or by an individual or Legal - Entity authorized to submit on behalf of the copyright owner. For the - purposes of this definition, "submitted" means any form of electronic, - verbal, or written communication sent to the Licensor or its - representatives, including but not limited to communication on electronic - mailing lists, source code control systems, and issue tracking systems that - are managed by, or on behalf of, the Licensor for the purpose of discussing - and improving the Work, but excluding communication that is conspicuously - marked or otherwise designated in writing by the copyright owner as "Not a - Contribution." +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: - "Contributor" shall mean Licensor and any individual or Legal Entity on - behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. -2. **Grant of Copyright License.** Subject to the terms and conditions of this - License, each Contributor hereby grants to You a perpetual, worldwide, - non-exclusive, no-charge, royalty-free, irrevocable copyright license to - reproduce, prepare Derivative Works of, publicly display, publicly perform, - sublicense, and distribute the Work and such Derivative Works in Source or - Object form. 
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -3. **Grant of Patent License.** Subject to the terms and conditions of this - License, each Contributor hereby grants to You a perpetual, worldwide, - non-exclusive, no-charge, royalty-free, irrevocable (except as stated in - this section) patent license to make, have made, use, offer to sell, sell, - import, and otherwise transfer the Work, where such license applies only to - those patent claims licensable by such Contributor that are necessarily - infringed by their Contribution(s) alone or by combination of their - Contribution(s) with the Work to which such Contribution(s) was submitted. - If You institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work or a - Contribution incorporated within the Work constitutes direct or - contributory patent infringement, then any patent licenses granted to You - under this License for that Work shall terminate as of the date such - litigation is filed. +-------------------------------------------------------------------- +Dependency: google.golang.org/api/googleapi/internal/uritemplates +Version: v0.7.0 +Revision: 02490b97dff7cfde1995bd77de808fd27053bc87 +License type (autodetected): MIT +./vendor/google.golang.org/api/googleapi/internal/uritemplates/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2013 Joshua Tacoma -4. **Redistribution.** You may reproduce and distribute copies of the Work or - Derivative Works thereof in any medium, with or without modifications, and - in Source or Object form, provided that You meet the following conditions: +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software is furnished to do so, +subject to the following conditions: - 1. You must give any other recipients of the Work or Derivative Works a - copy of this License; and +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. - 2. You must cause any modified files to carry prominent notices stating - that You changed the files; and +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS +FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR +COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER +IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - 3. You must retain, in the Source form of any Derivative Works that You - distribute, all copyright, patent, trademark, and attribution notices - from the Source form of the Work, excluding those notices that do not - pertain to any part of the Derivative Works; and +-------------------------------------------------------------------- +Dependency: google.golang.org/api/internal/third_party/uritemplates +Version: v0.14.0 +Revision: 8a410c21381766a810817fd6200fce8838ecb277 +License type (autodetected): BSD-3-Clause +./vendor/google.golang.org/api/internal/third_party/uritemplates/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2013 Joshua Tacoma. All rights reserved. - 4. If the Work includes a "NOTICE" text file as part of its distribution, - then any Derivative Works that You distribute must include a readable - copy of the attribution notices contained within such NOTICE file, - excluding those notices that do not pertain to any part of the - Derivative Works, in at least one of the following places: within a - NOTICE text file distributed as part of the Derivative Works; within - the Source form or documentation, if provided along with the Derivative - Works; or, within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents of the - NOTICE file are for informational purposes only and do not modify the - License. You may add Your own attribution notices within Derivative - Works that You distribute, alongside or as an addendum to the NOTICE - text from the Work, provided that such additional attribution notices - cannot be construed as modifying the License. +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: - You may add Your own copyright statement to Your modifications and may - provide additional or different license terms and conditions for use, - reproduction, or distribution of Your modifications, or for any such - Derivative Works as a whole, provided Your use, reproduction, and - distribution of the Work otherwise complies with the conditions stated - in this License. + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. -5. **Submission of Contributions.** Unless You explicitly state otherwise, any - Contribution intentionally submitted for inclusion in the Work by You to - the Licensor shall be under the terms and conditions of this License, - without any additional terms or conditions. Notwithstanding the above, - nothing herein shall supersede or modify the terms of any separate license - agreement you may have executed with Licensor regarding such Contributions. 
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -6. **Trademarks.** This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, except - as required for reasonable and customary use in describing the origin of - the Work and reproducing the content of the NOTICE file. +-------------------------------------------------------------------- +Dependency: google.golang.org/appengine +Version: v1.6.1 +Revision: b2f4a3cf3c67576a2ee09e1fe62656a5086ce880 +License type (autodetected): Apache-2.0 +./vendor/google.golang.org/appengine/LICENSE: +-------------------------------------------------------------------- +Apache License 2.0 -7. **Disclaimer of Warranty.** Unless required by applicable law or agreed to - in writing, Licensor provides the Work (and each Contributor provides its - Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - KIND, either express or implied, including, without limitation, any - warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or - FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for - determining the appropriateness of using or redistributing the Work and - assume any risks associated with Your exercise of permissions under this - License. -8. **Limitation of Liability.** In no event and under no legal theory, whether - in tort (including negligence), contract, or otherwise, unless required by - applicable law (such as deliberate and grossly negligent acts) or agreed to - in writing, shall any Contributor be liable to You for damages, including - any direct, indirect, special, incidental, or consequential damages of any - character arising as a result of this License or out of the use or - inability to use the Work (including but not limited to damages for loss of - goodwill, work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor has been - advised of the possibility of such damages. +-------------------------------------------------------------------- +Dependency: google.golang.org/genproto +Revision: 83cc0476cb11ea0da33dacd4c6354ab192de6fe6 +License type (autodetected): Apache-2.0 +./vendor/google.golang.org/genproto/LICENSE: +-------------------------------------------------------------------- +Apache License 2.0 -9. **Accepting Warranty or Additional Liability.** While redistributing the - Work or Derivative Works thereof, You may choose to offer, and charge a fee - for, acceptance of support, warranty, indemnity, or other liability - obligations and/or rights consistent with this License. 
However, in - accepting such obligations, You may act only on Your own behalf and on Your - sole responsibility, not on behalf of any other Contributor, and only if - You agree to indemnify, defend, and hold each Contributor harmless for any - liability incurred by, or claims asserted against, such Contributor by - reason of your accepting any such warranty or additional liability. -END OF TERMS AND CONDITIONS +-------------------------------------------------------------------- +Dependency: google.golang.org/grpc +Version: v1.25.1 +Revision: 1a3960e4bd028ac0cec0a2afd27d7d8e67c11514 +License type (autodetected): Apache-2.0 +./vendor/google.golang.org/grpc/LICENSE: +-------------------------------------------------------------------- +Apache License 2.0 -------------------------------------------------------------------- @@ -6654,6 +6810,40 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +-------------------------------------------------------------------- +Dependency: gopkg.in/vmihailenco/msgpack.v2 +Revision: f4f8982de4ef0de18be76456617cc3f5d8d8141e +License type (autodetected): BSD-3-Clause +./vendor/gopkg.in/vmihailenco/msgpack.v2/LICENSE: +-------------------------------------------------------------------- +Copyright (c) 2013 The msgpack for Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + -------------------------------------------------------------------- Dependency: gopkg.in/yaml.v2 Revision: 5420a8b6744d3b0345ab293f6fcba19c978f1183 diff --git a/auditbeat/auditbeat.reference.yml b/auditbeat/auditbeat.reference.yml index 33b81cc0edfb..db88d004264f 100644 --- a/auditbeat/auditbeat.reference.yml +++ b/auditbeat/auditbeat.reference.yml @@ -1337,6 +1337,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. 
+#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/auditbeat/docs/auditbeat-options.asciidoc b/auditbeat/docs/auditbeat-options.asciidoc new file mode 100644 index 000000000000..8233f79cee1e --- /dev/null +++ b/auditbeat/docs/auditbeat-options.asciidoc @@ -0,0 +1,56 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Auditbeat modules. Make sure you keep the +//// descriptions generic enough to work for all modules. To include +//// this file, use: +//// +//// include::{docdir}/auditbeat-options.asciidoc[] +//// +////////////////////////////////////////////////////////////////////////// + +[id="module-standard-options-{modulename}"] +[float] +==== Standard configuration options + +You can specify the following options for any {beatname_uc} module. + +*`module`*:: The name of the module to run. + +ifeval::["{modulename}"=="system"] +*`datasets`*:: A list of datasets to execute. +endif::[] + +*`enabled`*:: A Boolean value that specifies whether the module is enabled. + +ifeval::["{modulename}"=="system"] +*`period`*:: The frequency at which the datasets check for changes. If a system +is not reachable, {beatname_uc} returns an error for each period. This setting +is required. For most datasets, especially `process` and `socket`, a shorter +period is recommended. +endif::[] + +*`fields`*:: A dictionary of fields that will be sent with the dataset event. This setting +is optional. + +*`tags`*:: A list of tags that will be sent with the dataset event. This setting is +optional. + +*`processors`*:: A list of processors to apply to the data generated by the dataset. ++ +See <> for information about specifying +processors in your config. + +*`index`*:: If present, this formatted string overrides the index for events from this +module (for elasticsearch outputs), or sets the `raw_index` field of the event's +metadata (for other outputs). This string can only refer to the agent name and +version and the event timestamp; for access to dynamic fields, use +`output.elasticsearch.index` or a processor. ++ +Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might +expand to +"{beatname_lc}-myindex-2019.12.13"+. + +*`keep_null`*:: If this option is set to true, fields with `null` values will be published in +the output document. By default, `keep_null` is set to `false`. + +*`service.name`*:: A name given by the user to the service the data is collected from. It can be +used for example to identify information collected from nodes of different +clusters with the same `service.type`. diff --git a/auditbeat/docs/modules/auditd.asciidoc b/auditbeat/docs/modules/auditd.asciidoc index c419c66db39d..e913160d9cbb 100644 --- a/auditbeat/docs/modules/auditd.asciidoc +++ b/auditbeat/docs/modules/auditd.asciidoc @@ -2,6 +2,8 @@ This file is generated! See scripts/docs_collector.py //// +:modulename: auditd + [id="{beatname_lc}-module-auditd"] == Auditd Module @@ -135,6 +137,10 @@ following example shows all configuration options with their default values. 
backpressure_strategy: auto ---- +This module also supports the +<> +described later. + *`socket_type`*:: This optional setting controls the type of socket that {beatname_uc} uses to receive events from the kernel. The two options are `unicast` and `multicast`. @@ -189,7 +195,8 @@ setting is primarily used for development and debugging purposes. installed to the kernel. There should be one rule per line. Comments can be embedded in the string using `#` as a prefix. The format for rules is the same used by the Linux `auditctl` utility. {beatname_uc} supports adding file watches -(`-w`) and syscall rules (`-a` or `-A`). +(`-w`) and syscall rules (`-a` or `-A`). For more information, see +<>. *`audit_rule_files`*:: A list of files to load audit rules from. This files are loaded after the rules declared in `audit_rules` are loaded. Wildcards are @@ -218,10 +225,10 @@ time. - `none`: No backpressure mitigation measures are enabled. -- -*`keep_null`*:: If this option is set to true, fields with `null` values will be -published in the output document. By default, `keep_null` is set to `false`. +include::{docdir}/auditbeat-options.asciidoc[] [float] +[[audit-rules]] === Audit rules The audit rules are where you configure the activities that are audited. These @@ -304,3 +311,6 @@ auditbeat.modules: ---- + +:modulename!: + diff --git a/auditbeat/docs/modules/file_integrity.asciidoc b/auditbeat/docs/modules/file_integrity.asciidoc index c420818cb395..42f0378de64a 100644 --- a/auditbeat/docs/modules/file_integrity.asciidoc +++ b/auditbeat/docs/modules/file_integrity.asciidoc @@ -2,6 +2,8 @@ This file is generated! See scripts/docs_collector.py //// +:modulename: file_integrity + [id="{beatname_lc}-module-file_integrity"] == File Integrity Module @@ -66,6 +68,10 @@ Linux. recursive: false ---- +This module also supports the +<> +described later. + *`paths`*:: A list of paths (directories or files) to watch. Globs are not supported. The specified paths should exist when the metricset is started. @@ -122,8 +128,7 @@ of this directories are watched. If `recursive` is set to `true`, the `file_integrity` module will watch for changes on this directories and all their subdirectories. -*`keep_null`*:: If this option is set to true, fields with `null` values will be -published in the output document. By default, `keep_null` is set to `false`. +include::{docdir}/auditbeat-options.asciidoc[] [float] @@ -146,3 +151,6 @@ auditbeat.modules: ---- + +:modulename!: + diff --git a/auditbeat/module/auditd/_meta/docs.asciidoc b/auditbeat/module/auditd/_meta/docs.asciidoc index 45e3e3de9348..4585b7179ff9 100644 --- a/auditbeat/module/auditd/_meta/docs.asciidoc +++ b/auditbeat/module/auditd/_meta/docs.asciidoc @@ -130,6 +130,10 @@ following example shows all configuration options with their default values. backpressure_strategy: auto ---- +This module also supports the +<> +described later. + *`socket_type`*:: This optional setting controls the type of socket that {beatname_uc} uses to receive events from the kernel. The two options are `unicast` and `multicast`. @@ -184,7 +188,8 @@ setting is primarily used for development and debugging purposes. installed to the kernel. There should be one rule per line. Comments can be embedded in the string using `#` as a prefix. The format for rules is the same used by the Linux `auditctl` utility. {beatname_uc} supports adding file watches -(`-w`) and syscall rules (`-a` or `-A`). +(`-w`) and syscall rules (`-a` or `-A`). For more information, see +<>. 
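The `audit_rules` option described above takes rules in `auditctl` syntax. As a minimal sketch (not part of this patch), such a block might look like the following; the `/etc/sudoers` watch and the 64-bit `execve` syscall rule are illustrative assumptions only:

[source,yaml]
----
auditbeat.modules:
- module: auditd
  audit_rules: |
    # Watch sudoers for writes and attribute changes (file watch, -w)
    -w /etc/sudoers -p wa -k sudoers_change
    # Record 64-bit execve calls (syscall rule, -a)
    -a always,exit -F arch=b64 -S execve -k exec
----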
*`audit_rule_files`*:: A list of files to load audit rules from. This files are loaded after the rules declared in `audit_rules` are loaded. Wildcards are @@ -213,10 +218,10 @@ time. - `none`: No backpressure mitigation measures are enabled. -- -*`keep_null`*:: If this option is set to true, fields with `null` values will be -published in the output document. By default, `keep_null` is set to `false`. +include::{docdir}/auditbeat-options.asciidoc[] [float] +[[audit-rules]] === Audit rules The audit rules are where you configure the activities that are audited. These diff --git a/auditbeat/module/file_integrity/_meta/docs.asciidoc b/auditbeat/module/file_integrity/_meta/docs.asciidoc index 9282b289589a..372e9fc3b47f 100644 --- a/auditbeat/module/file_integrity/_meta/docs.asciidoc +++ b/auditbeat/module/file_integrity/_meta/docs.asciidoc @@ -61,6 +61,10 @@ Linux. recursive: false ---- +This module also supports the +<> +described later. + *`paths`*:: A list of paths (directories or files) to watch. Globs are not supported. The specified paths should exist when the metricset is started. @@ -117,5 +121,4 @@ of this directories are watched. If `recursive` is set to `true`, the `file_integrity` module will watch for changes on this directories and all their subdirectories. -*`keep_null`*:: If this option is set to true, fields with `null` values will be -published in the output document. By default, `keep_null` is set to `false`. +include::{docdir}/auditbeat-options.asciidoc[] diff --git a/auditbeat/scripts/docs_collector.py b/auditbeat/scripts/docs_collector.py index 8c5d532ae8e3..5e897bde3edf 100644 --- a/auditbeat/scripts/docs_collector.py +++ b/auditbeat/scripts/docs_collector.py @@ -44,6 +44,9 @@ def collect(base_paths): os.mkdir(os.path.join(module_docs_path(module_dir), "modules", module)) module_file = generated_note + + module_file += ":modulename: " + module + "\n\n" + module_file += "[id=\"{beatname_lc}-module-" + module + "\"]\n" with open(module_doc) as f: @@ -84,6 +87,9 @@ def collect(base_paths): module_file += "----\n\n" + # Close modulename variable + module_file += "\n:modulename!:\n\n" + module_links = "" module_includes = "" diff --git a/dev-tools/generate_notice.py b/dev-tools/generate_notice.py index 645b7ad65ffc..ba6d65f0d0c9 100644 --- a/dev-tools/generate_notice.py +++ b/dev-tools/generate_notice.py @@ -309,6 +309,10 @@ def create_notice(filename, beat, copyright, vendor_dirs, csvfile, overrides=Non "Creative Commons Attribution-ShareAlike 4.0 International" ] +ECLIPSE_PUBLIC_LICENSE_TITLES = [ + "Eclipse Public License - v 1.0" +] + LGPL_3_LICENSE_TITLE = [ "GNU LESSER GENERAL PUBLIC LICENSE Version 3" ] @@ -348,16 +352,18 @@ def detect_license_summary(content): return "LGPL-3.0" if any(sentence in content[0:1500] for sentence in UNIVERSAL_PERMISSIVE_LICENSE_TITLES): return "UPL-1.0" - + if any(sentence in content[0:1500] for sentence in ECLIPSE_PUBLIC_LICENSE_TITLES): + return "EPL-1.0" return "UNKNOWN" ACCEPTED_LICENSES = [ "Apache-2.0", - "MIT", "BSD-4-Clause", "BSD-3-Clause", "BSD-2-Clause", + "EPL-1.0", + "MIT", "MPL-2.0", "UPL-1.0", ] diff --git a/filebeat/docs/fields.asciidoc b/filebeat/docs/fields.asciidoc index df71321f4f61..98dc6ff13e37 100644 --- a/filebeat/docs/fields.asciidoc +++ b/filebeat/docs/fields.asciidoc @@ -17090,7 +17090,7 @@ type: long *`netflow.class_id`*:: + -- -type: short +type: long -- diff --git a/filebeat/filebeat.reference.yml b/filebeat/filebeat.reference.yml index 80cad336d50a..0ccbe5ad6add 100644 --- a/filebeat/filebeat.reference.yml 
+++ b/filebeat/filebeat.reference.yml @@ -2034,6 +2034,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/filebeat/include/list.go b/filebeat/include/list.go index 72d70d9ae242..acaa460cfdfb 100644 --- a/filebeat/include/list.go +++ b/filebeat/include/list.go @@ -25,6 +25,7 @@ import ( _ "github.com/elastic/beats/filebeat/input/docker" _ "github.com/elastic/beats/filebeat/input/kafka" _ "github.com/elastic/beats/filebeat/input/log" + _ "github.com/elastic/beats/filebeat/input/mqtt" _ "github.com/elastic/beats/filebeat/input/redis" _ "github.com/elastic/beats/filebeat/input/stdin" _ "github.com/elastic/beats/filebeat/input/syslog" diff --git a/filebeat/input/mqtt/client.go b/filebeat/input/mqtt/client.go index 9f09c1ce0a4d..0079dbf78f48 100644 --- a/filebeat/input/mqtt/client.go +++ b/filebeat/input/mqtt/client.go @@ -25,10 +25,11 @@ import ( "strings" "time" + "gopkg.in/vmihailenco/msgpack.v2" + "github.com/elastic/beats/libbeat/beat" "github.com/elastic/beats/libbeat/common" "github.com/elastic/beats/libbeat/logp" - "gopkg.in/vmihailenco/msgpack.v2" MQTT "github.com/eclipse/paho.mqtt.golang" ) @@ -157,9 +158,9 @@ func (input *mqttInput) onMessage(client MQTT.Client, msg MQTT.Message) { // Finally sending the message to elasticsearch beatEvent.Fields = eventFields - input.outlet.OnEvent(beatEvent) + isSent := input.outlet.OnEvent(beatEvent) - logp.Debug("MQTT", "Event sent: %t") + logp.Debug("MQTT", "Event sent: %t", isSent) } // connectionLostHandler will try to reconnect when connection is lost diff --git a/heartbeat/docs/heartbeat-options.asciidoc b/heartbeat/docs/heartbeat-options.asciidoc index f72e631781d2..6becc27a7a99 100644 --- a/heartbeat/docs/heartbeat-options.asciidoc +++ b/heartbeat/docs/heartbeat-options.asciidoc @@ -34,7 +34,7 @@ heartbeat.monitors: - type: http schedule: '@every 5s' hosts: ["http://localhost:80/service/status"] - check.response.status: 200 + check.response.status: [200] heartbeat.scheduler: limit: 10 ---------------------------------------------------------------------- @@ -69,7 +69,7 @@ monitor definitions only, e.g. what is normally under the `heartbeat.monitors` s - type: http schedule: '@every 5s' hosts: ["http://localhost:80/service/status"] - check.response.status: 200 + check.response.status: [200] ---------------------------------------------------------------------- [float] @@ -429,7 +429,7 @@ The username for authenticating with the server. The credentials are passed with the request. This setting is optional. You need to specify credentials when your `check.response` settings require it. -For example, you can check for a 403 response (`check.response.status: 403`) +For example, you can check for a 403 response (`check.response.status: [403]`) without setting credentials. 
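As a rough sketch of the case described above, a monitor can accept an unauthenticated 403 without configuring credentials; the host and schedule values below are illustrative placeholders, not taken from this patch:

[source,yaml]
----
heartbeat.monitors:
- type: http
  schedule: '@every 30s'
  hosts: ["http://localhost:8080/admin"]
  # No username/password configured; a 403 from the protected endpoint
  # still satisfies the check, so the monitor reports "up".
  check.response.status: [403]
----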
[float] @@ -489,7 +489,7 @@ Example configuration: schedule: '@every 5s' hosts: ["http://myhost:80"] check.request.method: HEAD - check.response.status: 200 + check.response.status: [200] ------------------------------------------------------------------------------- @@ -517,7 +517,7 @@ to the endpoint `/demo/add` # urlencode the body: body: "name=first&email=someemail%40someemailprovider.com" check.response: - status: 200 + status: [200] body: - Saved - saved @@ -525,14 +525,14 @@ to the endpoint `/demo/add` Under `check.response`, specify these options: -*`status`*:: The expected status code. 4xx and 5xx codes are considered `down` by default. Other codes are considered `up`. +*`status`*:: A list of expected status codes. 4xx and 5xx codes are considered `down` by default. Other codes are considered `up`. *`headers`*:: The required response headers. *`body`*:: A list of regular expressions to match the the body output. Only a single expression needs to match. HTTP response bodies of up to 100MiB are supported. Example configuration: This monitor examines the -response body for the strings `saved` or `Saved` +response body for the strings `saved` or `Saved` and expects 200 or 201 status codes [source,yaml] ------------------------------------------------------------------------------- @@ -546,7 +546,7 @@ response body for the strings `saved` or `Saved` # urlencode the body: body: "name=first&email=someemail%40someemailprovider.com" check.response: - status: 200 + status: [200, 201] body: - Saved - saved @@ -568,7 +568,7 @@ contains JSON: headers: 'X-API-Key': '12345-mykey-67890' check.response: - status: 200 + status: [200] json: - description: check status condition: @@ -589,7 +589,7 @@ patterns: headers: 'X-API-Key': '12345-mykey-67890' check.response: - status: 200 + status: [200] body: - hello - world @@ -608,7 +608,7 @@ regex: headers: 'X-API-Key': '12345-mykey-67890' check.response: - status: 200 + status: [200] body: '(?s)first.*second.*third' ------------------------------------------------------------------------------- diff --git a/heartbeat/heartbeat.reference.yml b/heartbeat/heartbeat.reference.yml index 8bff01bfe247..be68211d4615 100644 --- a/heartbeat/heartbeat.reference.yml +++ b/heartbeat/heartbeat.reference.yml @@ -1481,6 +1481,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. 
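For reference, a hedged sketch of the two `monitoring.cloud.*` settings added in this hunk, with dummy values filled in; the cloud ID string below is a placeholder, and `monitoring.cloud.auth` is assumed to take a `username:password` pair matching the username/password settings it overrides:

[source,yaml]
----
# Placeholder values only; copy the real cloud ID from the Elastic Cloud web UI.
monitoring.cloud.id: "my-deployment:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyQxMjM0JDU2Nzg="
monitoring.cloud.auth: "elastic:changeme"
----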
diff --git a/heartbeat/monitors/active/http/check.go b/heartbeat/monitors/active/http/check.go index 8976cd56304c..4ff5ab4a4702 100644 --- a/heartbeat/monitors/active/http/check.go +++ b/heartbeat/monitors/active/http/check.go @@ -77,7 +77,7 @@ func makeValidateResponse(config *responseParameters) (multiValidator, error) { var respValidators []respValidator var bodyValidators []bodyValidator - if config.Status > 0 { + if len(config.Status) > 0 { respValidators = append(respValidators, checkStatus(config.Status)) } else { respValidators = append(respValidators, checkStatusOK) @@ -102,10 +102,12 @@ func makeValidateResponse(config *responseParameters) (multiValidator, error) { return multiValidator{respValidators, bodyValidators}, nil } -func checkStatus(status uint16) respValidator { +func checkStatus(status []uint16) respValidator { return func(r *http.Response) error { - if r.StatusCode == int(status) { - return nil + for _, v := range status { + if r.StatusCode == int(v) { + return nil + } } return fmt.Errorf("received status code %v expecting %v", r.StatusCode, status) } diff --git a/heartbeat/monitors/active/http/check_test.go b/heartbeat/monitors/active/http/check_test.go index a705ca344541..3c3e42caa060 100644 --- a/heartbeat/monitors/active/http/check_test.go +++ b/heartbeat/monitors/active/http/check_test.go @@ -268,3 +268,62 @@ func TestCheckJsonWithIntegerComparison(t *testing.T) { } } + +func TestCheckStatus(t *testing.T) { + + var matchTests = []struct { + description string + status []uint16 + statusRec int + result bool + }{ + { + "not match multiple values", + []uint16{200, 301, 302}, + 500, + false, + }, + { + "match multiple values", + []uint16{200, 301, 302}, + 200, + true, + }, + { + "not match single value", + []uint16{200}, + 201, + false, + }, + { + "match single value", + []uint16{200}, + 200, + true, + }, + } + + for _, test := range matchTests { + t.Run(test.description, func(t *testing.T) { + ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(test.statusRec) + })) + defer ts.Close() + + res, err := http.Get(ts.URL) + if err != nil { + log.Fatal(err) + } + + check := checkStatus(test.status)(res) + + if result := (check == nil); result != test.result { + if test.result { + t.Fatalf("Expected at least one of status: %d to match status: %d", test.status, test.statusRec) + } else { + t.Fatalf("Did not expect status: %d to match status: %d", test.status, test.statusRec) + } + } + }) + } +} diff --git a/heartbeat/monitors/active/http/config.go b/heartbeat/monitors/active/http/config.go index 033d8fab6285..e195f4209e3e 100644 --- a/heartbeat/monitors/active/http/config.go +++ b/heartbeat/monitors/active/http/config.go @@ -74,7 +74,7 @@ type requestParameters struct { type responseParameters struct { // expected HTTP response configuration - Status uint16 `config:"status" verify:"min=0, max=699"` + Status []uint16 `config:"status"` RecvHeaders map[string]string `config:"headers"` RecvBody []match.Matcher `config:"body"` RecvJSON []*jsonResponseCheck `config:"json"` @@ -105,7 +105,6 @@ var defaultConfig = Config{ SendBody: "", }, Response: responseParameters{ - Status: 0, RecvHeaders: nil, RecvBody: []match.Matcher{}, RecvJSON: nil, diff --git a/journalbeat/journalbeat.reference.yml b/journalbeat/journalbeat.reference.yml index 424a38c419bf..4f5ddb75c0ee 100644 --- a/journalbeat/journalbeat.reference.yml +++ b/journalbeat/journalbeat.reference.yml @@ -1275,6 +1275,14 @@ logging.files: #metrics.period: 10s #state.period: 
1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/libbeat/_meta/config.reference.yml.tmpl b/libbeat/_meta/config.reference.yml.tmpl index 685adc5dd65d..9f7a76ffe0ff 100644 --- a/libbeat/_meta/config.reference.yml.tmpl +++ b/libbeat/_meta/config.reference.yml.tmpl @@ -1218,6 +1218,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/libbeat/cmd/instance/imports_common.go b/libbeat/cmd/instance/imports_common.go index 6b78ffc7d567..d8fa4b9f0cfa 100644 --- a/libbeat/cmd/instance/imports_common.go +++ b/libbeat/cmd/instance/imports_common.go @@ -24,6 +24,7 @@ import ( _ "github.com/elastic/beats/libbeat/processors/actions" // Register default processors. 
_ "github.com/elastic/beats/libbeat/processors/add_cloud_metadata" _ "github.com/elastic/beats/libbeat/processors/add_host_metadata" + _ "github.com/elastic/beats/libbeat/processors/add_id" _ "github.com/elastic/beats/libbeat/processors/add_locale" _ "github.com/elastic/beats/libbeat/processors/add_observer_metadata" _ "github.com/elastic/beats/libbeat/processors/add_process_metadata" @@ -32,6 +33,7 @@ import ( _ "github.com/elastic/beats/libbeat/processors/dissect" _ "github.com/elastic/beats/libbeat/processors/dns" _ "github.com/elastic/beats/libbeat/processors/extract_array" + _ "github.com/elastic/beats/libbeat/processors/fingerprint" _ "github.com/elastic/beats/libbeat/processors/registered_domain" _ "github.com/elastic/beats/libbeat/publisher/includes" // Register publisher pipeline modules ) diff --git a/libbeat/common/config.go b/libbeat/common/config.go index 7277d364f06b..2f779d996bbf 100644 --- a/libbeat/common/config.go +++ b/libbeat/common/config.go @@ -66,18 +66,6 @@ const ( selectorConfigWithPassword = "config-with-passwords" ) -var debugBlacklist = MakeStringSet( - "password", - "passphrase", - "key_passphrase", - "pass", - "proxy_url", - "url", - "urls", - "host", - "hosts", -) - // make hasSelector and configDebugf available for unit testing var hasSelector = logp.HasSelector var configDebugf = logp.Debug @@ -369,7 +357,7 @@ func DebugString(c *Config, filterPrivate bool) string { return fmt.Sprintf(" %v", err) } if filterPrivate { - filterDebugObject(content) + applyLoggingMask(content) } j, _ := json.MarshalIndent(content, "", " ") bufs = append(bufs, string(j)) @@ -380,7 +368,7 @@ func DebugString(c *Config, filterPrivate bool) string { return fmt.Sprintf(" %v", err) } if filterPrivate { - filterDebugObject(content) + applyLoggingMask(content) } j, _ := json.MarshalIndent(content, "", " ") bufs = append(bufs, string(j)) @@ -392,30 +380,6 @@ func DebugString(c *Config, filterPrivate bool) string { return strings.Join(bufs, "\n") } -func filterDebugObject(c interface{}) { - switch cfg := c.(type) { - case map[string]interface{}: - for k, v := range cfg { - if debugBlacklist.Has(k) { - if arr, ok := v.([]interface{}); ok { - for i := range arr { - arr[i] = "xxxxx" - } - } else { - cfg[k] = "xxxxx" - } - } else { - filterDebugObject(v) - } - } - - case []interface{}: - for _, elem := range cfg { - filterDebugObject(elem) - } - } -} - // OwnerHasExclusiveWritePerms asserts that the current user or root is the // owner of the config file and that the config file is (at most) writable by // the owner or root (e.g. group and other cannot have write access). diff --git a/libbeat/common/logging.go b/libbeat/common/logging.go new file mode 100644 index 000000000000..2c5f656abd48 --- /dev/null +++ b/libbeat/common/logging.go @@ -0,0 +1,54 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. 
See the License for the +// specific language governing permissions and limitations +// under the License. + +package common + +var maskList = MakeStringSet( + "password", + "passphrase", + "key_passphrase", + "pass", + "proxy_url", + "url", + "urls", + "host", + "hosts", +) + +func applyLoggingMask(c interface{}) { + switch cfg := c.(type) { + case map[string]interface{}: + for k, v := range cfg { + if maskList.Has(k) { + if arr, ok := v.([]interface{}); ok { + for i := range arr { + arr[i] = "xxxxx" + } + } else { + cfg[k] = "xxxxx" + } + } else { + applyLoggingMask(v) + } + } + + case []interface{}: + for _, elem := range cfg { + applyLoggingMask(elem) + } + } +} diff --git a/libbeat/common/mapstr.go b/libbeat/common/mapstr.go index ca2a31d50b39..8cf589b0f04a 100644 --- a/libbeat/common/mapstr.go +++ b/libbeat/common/mapstr.go @@ -20,6 +20,7 @@ package common import ( "encoding/json" "fmt" + "io" "sort" "strings" @@ -218,13 +219,16 @@ func (m MapStr) MarshalLogObject(enc zapcore.ObjectEncoder) error { return nil } - keys := make([]string, 0, len(m)) - for k := range m { + debugM := m.Clone() + applyLoggingMask(map[string]interface{}(debugM)) + + keys := make([]string, 0, len(debugM)) + for k := range debugM { keys = append(keys, k) } sort.Strings(keys) for _, k := range keys { - v := m[k] + v := debugM[k] if inner, ok := tryToMapStr(v); ok { enc.AddObject(k, inner) continue @@ -234,6 +238,19 @@ func (m MapStr) MarshalLogObject(enc zapcore.ObjectEncoder) error { return nil } +// Format implements fmt.Formatter +func (m MapStr) Format(f fmt.State, c rune) { + if f.Flag('+') || f.Flag('#') { + io.WriteString(f, m.String()) + return + } + + debugM := m.Clone() + applyLoggingMask(map[string]interface{}(debugM)) + + io.WriteString(f, debugM.String()) +} + // Flatten flattens the given MapStr and returns a flat MapStr. // // Example: diff --git a/libbeat/common/mapstr_test.go b/libbeat/common/mapstr_test.go index 34223344d80c..784814bdac38 100644 --- a/libbeat/common/mapstr_test.go +++ b/libbeat/common/mapstr_test.go @@ -1002,3 +1002,26 @@ func BenchmarkWalkMap(b *testing.B) { } }) } + +func TestFormat(t *testing.T) { + input := MapStr{ + "foo": "bar", + "password": "SUPER_SECURE", + } + + tests := map[string]string{ + "%v": `{"foo":"bar","password":"xxxxx"}`, + "%+v": `{"foo":"bar","password":"SUPER_SECURE"}`, + "%#v": `{"foo":"bar","password":"SUPER_SECURE"}`, + "%s": `{"foo":"bar","password":"xxxxx"}`, + "%+s": `{"foo":"bar","password":"SUPER_SECURE"}`, + "%#s": `{"foo":"bar","password":"SUPER_SECURE"}`, + } + + for verb, expected := range tests { + t.Run(verb, func(t *testing.T) { + actual := fmt.Sprintf(verb, input) + assert.Equal(t, expected, actual) + }) + } +} diff --git a/libbeat/docs/release.asciidoc b/libbeat/docs/release.asciidoc index 290369a6363b..19d710515e9a 100644 --- a/libbeat/docs/release.asciidoc +++ b/libbeat/docs/release.asciidoc @@ -8,6 +8,15 @@ This section summarizes the changes in each release. Also read <> for more detail about changes that affect upgrade. +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> * <> * <> * <> @@ -18,6 +27,9 @@ upgrade. 
* <> * <> * <> +* <> +* <> +* <> * <> * <> * <> diff --git a/libbeat/docs/security/users.asciidoc b/libbeat/docs/security/users.asciidoc index d8dff16a15f3..f69ac1658a69 100644 --- a/libbeat/docs/security/users.asciidoc +++ b/libbeat/docs/security/users.asciidoc @@ -214,8 +214,9 @@ endif::serverless[] ==== Grant privileges and roles needed for publishing Users who publish events to {es} need to create and write to {beatname_uc} -indices. To minimize the privileges required by the writer role, you can use the -<> to pre-load dependencies. +indices. To minimize the privileges required by the writer role, use the +<> to pre-load dependencies. This section +assumes that you've pre-loaded dependencies. ifndef::no_ilm[] When using ILM, turn off the ILM setup check in the {beatname_uc} config file before diff --git a/libbeat/docs/shared-ilm.asciidoc b/libbeat/docs/shared-ilm.asciidoc index 12069381c2c1..4b96dcdbf276 100644 --- a/libbeat/docs/shared-ilm.asciidoc +++ b/libbeat/docs/shared-ilm.asciidoc @@ -66,7 +66,7 @@ Date math is supported in this setting. For example: [source,yaml] ---- -setup.ilm.pattern: "{now/M{YYYY.MM}}-000001" +setup.ilm.pattern: "{now/M{yyyy.MM}}-000001" ---- For more information, see diff --git a/libbeat/outputs/console/docs/console.asciidoc b/libbeat/outputs/console/docs/console.asciidoc index 102eb13c7c83..71f03f0f740a 100644 --- a/libbeat/outputs/console/docs/console.asciidoc +++ b/libbeat/outputs/console/docs/console.asciidoc @@ -7,6 +7,11 @@ The Console output writes events in JSON format to stdout. +To use this output, edit the {beatname_uc} configuration file to disable the {es} +output by commenting it out, and enable the console output by adding `output.console`. + +Example configuration: + [source,yaml] ------------------------------------------------------------------------------ output.console: @@ -15,7 +20,14 @@ output.console: ==== Configuration options -You can specify the following options in the `console` section of the +{beatname_lc}.yml+ config file: +You can specify the following `output.console` options in the +{beatname_lc}.yml+ config file: + +===== `enabled` + +The enabled config is a boolean setting to enable or disable the output. If set +to false, the output is disabled. + +The default value is `true`. ===== `pretty` @@ -27,14 +39,6 @@ Output codec configuration. If the `codec` section is missing, events will be js See <> for more information. - -===== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -The default value is true. - ===== `bulk_max_size` The maximum number of events to buffer internally during publishing. The default is 2048. diff --git a/libbeat/outputs/fileout/docs/fileout.asciidoc b/libbeat/outputs/fileout/docs/fileout.asciidoc index aa0af27bb66c..6922f8f71426 100644 --- a/libbeat/outputs/fileout/docs/fileout.asciidoc +++ b/libbeat/outputs/fileout/docs/fileout.asciidoc @@ -9,6 +9,11 @@ The File output dumps the transactions into a file where each transaction is in Currently, this output is used for testing, but it can be used as input for Logstash. +To use this output, edit the {beatname_uc} configuration file to disable the {es} +output by commenting it out, and enable the file output by adding `output.file`. 
+ +Example configuration: + ["source","yaml",subs="attributes"] ------------------------------------------------------------------------------ output.file: @@ -21,14 +26,14 @@ output.file: ==== Configuration options -You can specify the following options in the `file` section of the +{beatname_lc}.yml+ config file: +You can specify the following `output.file` options in the +{beatname_lc}.yml+ config file: ===== `enabled` The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. -The default value is true. +The default value is `true`. [[path]] ===== `path` diff --git a/libbeat/outputs/kafka/docs/kafka.asciidoc b/libbeat/outputs/kafka/docs/kafka.asciidoc index ce09e019ad8d..48e79a4fd518 100644 --- a/libbeat/outputs/kafka/docs/kafka.asciidoc +++ b/libbeat/outputs/kafka/docs/kafka.asciidoc @@ -5,7 +5,11 @@ Kafka ++++ -The Kafka output sends the events to Apache Kafka. +The Kafka output sends events to Apache Kafka. + +To use this output, edit the {beatname_uc} configuration file to disable the {es} +output by commenting it out, and enable the Kafka output by uncommenting the +Kafka section. Example configuration: @@ -42,7 +46,12 @@ You can specify the following options in the `kafka` section of the +{beatname_l The `enabled` config is a boolean setting to enable or disable the output. If set to false, the output is disabled. -The default value is true. +ifndef::apm-server[] +The default value is `true`. +endif::[] +ifdef::apm-server[] +The default value is `false`. +endif::[] ===== `hosts` diff --git a/libbeat/outputs/logstash/async.go b/libbeat/outputs/logstash/async.go index 96374e192d08..967ae7d0f6c1 100644 --- a/libbeat/outputs/logstash/async.go +++ b/libbeat/outputs/logstash/async.go @@ -18,7 +18,9 @@ package logstash import ( + "errors" "net" + "sync" "time" "github.com/elastic/beats/libbeat/beat" @@ -37,6 +39,8 @@ type asyncClient struct { win *window connect func() error + + mutex sync.Mutex } type msgRef struct { @@ -113,7 +117,11 @@ func (c *asyncClient) Connect() error { } func (c *asyncClient) Close() error { + c.mutex.Lock() + defer c.mutex.Unlock() + logp.Debug("logstash", "close connection") + if c.client != nil { err := c.client.Close() c.client = nil @@ -197,12 +205,23 @@ func (c *asyncClient) publishWindowed( } func (c *asyncClient) sendEvents(ref *msgRef, events []publisher.Event) error { + client := c.getClient() + if client == nil { + return errors.New("connection closed") + } window := make([]interface{}, len(events)) for i := range events { window[i] = &events[i].Content } ref.count.Inc() - return c.client.Send(ref.callback, window) + return client.Send(ref.callback, window) +} + +func (c *asyncClient) getClient() *v2.AsyncClient { + c.mutex.Lock() + client := c.client + c.mutex.Unlock() + return client } func (r *msgRef) callback(seq uint32, err error) { diff --git a/libbeat/outputs/logstash/docs/logstash.asciidoc b/libbeat/outputs/logstash/docs/logstash.asciidoc index be90bb4e0295..30ed5c2d1055 100644 --- a/libbeat/outputs/logstash/docs/logstash.asciidoc +++ b/libbeat/outputs/logstash/docs/logstash.asciidoc @@ -24,7 +24,7 @@ the {stack} getting started tutorial. Also see the documentation for the If you want to use {ls} to perform additional processing on the data collected by {beatname_uc}, you need to configure {beatname_uc} to use {ls}. 
-To do this, you edit the {beatname_uc} configuration file to disable the {es} +To do this, edit the {beatname_uc} configuration file to disable the {es} output by commenting it out and enable the {ls} output by uncommenting the logstash section: @@ -224,7 +224,12 @@ You can specify the following options in the `logstash` section of the The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. +ifndef::apm-server[] The default value is `true`. +endif::[] +ifdef::apm-server[] +The default value is `false`. +endif::[] [[hosts]] ===== `hosts` diff --git a/libbeat/outputs/redis/docs/redis.asciidoc b/libbeat/outputs/redis/docs/redis.asciidoc index 0d7dde88bfce..e5fdd29eb6b1 100644 --- a/libbeat/outputs/redis/docs/redis.asciidoc +++ b/libbeat/outputs/redis/docs/redis.asciidoc @@ -11,6 +11,9 @@ The Redis output inserts the events into a Redis list or a Redis channel. This output plugin is compatible with the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html[Redis input plugin] for Logstash. +To use this output, edit the {beatname_uc} configuration file to disable the {es} +output by commenting it out, and enable the Redis output by adding `output.redis`. + Example configuration: ["source","yaml",subs="attributes"] @@ -29,14 +32,14 @@ This output works with Redis 3.2.4. ==== Configuration options -You can specify the following options in the `redis` section of the +{beatname_lc}.yml+ config file: +You can specify the following `output.redis` options in the +{beatname_lc}.yml+ config file: ===== `enabled` The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. -The default value is true. +The default value is `true`. ===== `hosts` @@ -143,24 +146,6 @@ Output codec configuration. If the `codec` section is missing, events will be js See <> for more information. -===== `host_topology` - -deprecated:[5.0.0] - -The Redis host to connect to when using topology map support. Topology map support is disabled if this option is not set. - -===== `password_topology` - -deprecated:[5.0.0] - -The password to use for authenticating with the Redis topology server. The default is no authentication. - -===== `db_topology` - -deprecated:[5.0.0] - -The Redis database number where the topology information is stored. The default is 1. - ===== `worker` The number of workers to use for each host configured to publish events to Redis. Use this setting along with the diff --git a/libbeat/processors/convert/convert.go b/libbeat/processors/convert/convert.go index fbc6b5b30146..887fbdd02a9f 100644 --- a/libbeat/processors/convert/convert.go +++ b/libbeat/processors/convert/convert.go @@ -205,7 +205,7 @@ func toString(value interface{}) (string, error) { func toLong(value interface{}) (int64, error) { switch v := value.(type) { case string: - return strconv.ParseInt(v, 0, 64) + return strToInt(v, 64) case int: return int64(v), nil case int8: @@ -238,7 +238,7 @@ func toLong(value interface{}) (int64, error) { func toInteger(value interface{}) (int32, error) { switch v := value.(type) { case string: - i, err := strconv.ParseInt(v, 0, 32) + i, err := strToInt(v, 32) return int32(i), err case int: return int32(v), nil @@ -403,3 +403,24 @@ func cloneValue(value interface{}) interface{} { return value } } + +// strToInt is a helper to interpret a string as either base 10 or base 16. 
+func strToInt(s string, bitSize int) (int64, error) { + base := 10 + if hasHexPrefix(s) { + // strconv.ParseInt will accept the '0x' or '0X` prefix only when base is 0. + base = 0 + } + return strconv.ParseInt(s, base, bitSize) +} + +func hasHexPrefix(s string) bool { + if len(s) < 3 { + return false + } + a, b := s[0], s[1] + if a == '+' || a == '-' { + a, b = b, s[2] + } + return a == '0' && (b == 'x' || b == 'X') +} diff --git a/libbeat/processors/convert/convert_test.go b/libbeat/processors/convert/convert_test.go index 2469fe1848c9..141fafc0f8fe 100644 --- a/libbeat/processors/convert/convert_test.go +++ b/libbeat/processors/convert/convert_test.go @@ -276,8 +276,16 @@ var testCases = []testCase{ {Long, nil, nil, true}, {Long, "x", nil, true}, + {Long, "0x", nil, true}, + {Long, "0b1", nil, true}, + {Long, "1x2", nil, true}, {Long, true, nil, true}, {Long, "1", int64(1), false}, + {Long, "-1", int64(-1), false}, + {Long, "017", int64(17), false}, + {Long, "08", int64(8), false}, + {Long, "0X0A", int64(10), false}, + {Long, "-0x12", int64(-18), false}, {Long, int(1), int64(1), false}, {Long, int8(1), int64(1), false}, {Long, int16(1), int64(1), false}, @@ -294,6 +302,17 @@ var testCases = []testCase{ {Integer, nil, nil, true}, {Integer, "x", nil, true}, {Integer, true, nil, true}, + {Integer, "x", nil, true}, + {Integer, "0x", nil, true}, + {Integer, "0b1", nil, true}, + {Integer, "1x2", nil, true}, + {Integer, true, nil, true}, + {Integer, "1", int32(1), false}, + {Integer, "-1", int32(-1), false}, + {Integer, "017", int32(17), false}, + {Integer, "08", int32(8), false}, + {Integer, "0X0A", int32(10), false}, + {Integer, "-0x12", int32(-18), false}, {Integer, "1", int32(1), false}, {Integer, int(1), int32(1), false}, {Integer, int8(1), int32(1), false}, diff --git a/libbeat/processors/decode_csv_fields/docs/decode_csv_fields.asciidoc b/libbeat/processors/decode_csv_fields/docs/decode_csv_fields.asciidoc index c402bae48c59..718f9551bd2a 100644 --- a/libbeat/processors/decode_csv_fields/docs/decode_csv_fields.asciidoc +++ b/libbeat/processors/decode_csv_fields/docs/decode_csv_fields.asciidoc @@ -13,7 +13,7 @@ processors: - decode_csv_fields: fields: message: decoded.csv - separator: , + separator: "," ignore_missing: false overwrite_keys: true trim_leading_space: false diff --git a/libbeat/publisher/queue/spool/codec_test.go b/libbeat/publisher/queue/spool/codec_test.go new file mode 100644 index 000000000000..3588d3e2f21c --- /dev/null +++ b/libbeat/publisher/queue/spool/codec_test.go @@ -0,0 +1,76 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+ +package spool + +import ( + "testing" + "time" + + "github.com/stretchr/testify/assert" + + "github.com/elastic/beats/libbeat/beat" + "github.com/elastic/beats/libbeat/common" + "github.com/elastic/beats/libbeat/publisher" +) + +func TestEncodeDecode(t *testing.T) { + tests := map[string]codecID{ + "json": codecJSON, + "ubjson": codecUBJSON, + "cborl": codecCBORL, + } + + fieldTimeStr := "2020-01-14T20:33:23.779Z" + fieldTime, _ := time.Parse(time.RFC3339Nano, fieldTimeStr) + event := publisher.Event{ + Content: beat.Event{ + Timestamp: time.Now().Round(0), + Fields: common.MapStr{ + "time": fieldTime, + "commontime": common.Time(fieldTime), + }, + }, + } + expected := publisher.Event{ + Content: beat.Event{ + Timestamp: event.Content.Timestamp, + Fields: common.MapStr{ + "time": fieldTime.Format(time.RFC3339Nano), + "commontime": common.Time(fieldTime).String(), + }, + }, + } + + for name, codec := range tests { + t.Run(name, func(t *testing.T) { + encoder, err := newEncoder(codec) + assert.NoError(t, err) + + encoded, err := encoder.encode(&event) + assert.NoError(t, err) + + decoder := newDecoder() + decoder.buf = encoded + + observed, err := decoder.Decode() + assert.NoError(t, err) + + assert.Equal(t, expected, observed) + }) + } +} diff --git a/metricbeat/autodiscover/builder/hints/metrics.go b/metricbeat/autodiscover/builder/hints/metrics.go index c10b39967c4a..247d30468604 100644 --- a/metricbeat/autodiscover/builder/hints/metrics.go +++ b/metricbeat/autodiscover/builder/hints/metrics.go @@ -141,14 +141,14 @@ func (m *metricHints) CreateConfig(event bus.Event) []*common.Config { moduleConfig["password"] = password } - logp.Debug("hints.builder", "generated config: %v", moduleConfig.String()) + logp.Debug("hints.builder", "generated config: %v", moduleConfig) // Create config object cfg, err := common.NewConfigFrom(moduleConfig) if err != nil { logp.Debug("hints.builder", "config merge failed with error: %v", err) } - logp.Debug("hints.builder", "generated config: +%v", *cfg) + logp.Debug("hints.builder", "generated config: %+v", common.DebugString(cfg, true)) config = append(config, cfg) // Apply information in event to the template to generate the final config diff --git a/metricbeat/docs/fields.asciidoc b/metricbeat/docs/fields.asciidoc index e8ba36e7e765..e86c0975b174 100644 --- a/metricbeat/docs/fields.asciidoc +++ b/metricbeat/docs/fields.asciidoc @@ -41,6 +41,8 @@ grouped in the following categories: * <> * <> * <> +* <> +* <> * <> * <> * <> @@ -1737,6 +1739,12 @@ Metrics that returned from Cloudwatch API query. - name: billing `elb` contains the metrics that were scraped from AWS CloudWatch which contains monitoring metrics sent by AWS ELB. release: ga fields: +- name: lambda + type: group + description: > + `lambda` contains the metrics that were scraped from AWS CloudWatch which contains monitoring metrics sent by AWS Lambda. 
+ release: beta + fields: - name: rds type: group description: > @@ -15846,6 +15854,327 @@ json metricset server +[[exported-fields-ibmmq]] +== IBM MQ fields + +IBM MQ module + + + + +[[exported-fields-istio]] +== istio fields + +istio Module + + + +[float] +=== istio + +`istio` contains statistics that were read from Istio + + + +[float] +=== mesh + +Contains statistics related to the Istio mesh service + + + +*`istio.mesh.instance`*:: ++ +-- +The prometheus instance + + +type: text + +-- + +*`istio.mesh.job`*:: ++ +-- +The prometheus job + + +type: keyword + +-- + +*`istio.mesh.requests`*:: ++ +-- +Total requests handled by an Istio proxy + + +type: long + +-- + +*`istio.mesh.request.duration.ms.bucket.*`*:: ++ +-- +Request duration histogram buckets in milliseconds + + +type: object + +-- + +*`istio.mesh.request.duration.ms.sum`*:: ++ +-- +Requests duration, sum of durations in milliseconds + + +type: long + +format: duration + +-- + +*`istio.mesh.request.duration.ms.count`*:: ++ +-- +Requests duration, number of requests + + +type: long + +-- + +*`istio.mesh.request.size.bytes.bucket.*`*:: ++ +-- +Request Size histogram buckets + + +type: object + +-- + +*`istio.mesh.request.size.bytes.sum`*:: ++ +-- +Request Size histogram sum + + +type: long + +-- + +*`istio.mesh.request.size.bytes.count`*:: ++ +-- +Request Size histogram count + + +type: long + +-- + +*`istio.mesh.response.size.bytes.bucket.*`*:: ++ +-- +Request Size histogram buckets + + +type: object + +-- + +*`istio.mesh.response.size.bytes.sum`*:: ++ +-- +Request Size histogram sum + + +type: long + +-- + +*`istio.mesh.response.size.bytes.count`*:: ++ +-- +Request Size histogram count + + +type: long + +-- + +*`istio.mesh.reporter`*:: ++ +-- +Reporter identifies the reporter of the request. It is set to destination if report is from a server Istio proxy and source if report is from a client Istio proxy. + + +type: keyword + +-- + +*`istio.mesh.source.workload.name`*:: ++ +-- +This identifies the name of source workload which controls the source. + + +type: keyword + +-- + +*`istio.mesh.source.workload.namespace`*:: ++ +-- +This identifies the namespace of the source workload. + + +type: keyword + +-- + +*`istio.mesh.source.principal`*:: ++ +-- +This identifies the peer principal of the traffic source. It is set when peer authentication is used. + + +type: keyword + +-- + +*`istio.mesh.source.app`*:: ++ +-- +This identifies the source app based on app label of the source workload. + + +type: keyword + +-- + +*`istio.mesh.source.version`*:: ++ +-- +This identifies the version of the source workload. + + +type: keyword + +-- + +*`istio.mesh.destination.workload.name`*:: ++ +-- +This identifies the name of destination workload. + + +type: keyword + +-- + +*`istio.mesh.destination.workload.namespace`*:: ++ +-- +This identifies the namespace of the destination workload. + + +type: keyword + +-- + +*`istio.mesh.destination.principal`*:: ++ +-- +This identifies the peer principal of the traffic destination. It is set when peer authentication is used. + + +type: keyword + +-- + +*`istio.mesh.destination.app`*:: ++ +-- +This identifies the destination app based on app label of the destination workload.. + + +type: keyword + +-- + +*`istio.mesh.destination.version`*:: ++ +-- +This identifies the version of the destination workload. + + +type: keyword + +-- + +*`istio.mesh.destination.service.host`*:: ++ +-- +This identifies destination service host responsible for an incoming request. 
+ + +type: keyword + +-- + +*`istio.mesh.destination.service.name`*:: ++ +-- +This identifies the destination service name. + + +type: keyword + +-- + +*`istio.mesh.destination.service.namespace`*:: ++ +-- +This identifies the namespace of destination service. + + +type: keyword + +-- + +*`istio.mesh.request.protocol`*:: ++ +-- +This identifies the protocol of the request. It is set to API protocol if provided, otherwise request or connection protocol. + + +type: keyword + +-- + +*`istio.mesh.response.code`*:: ++ +-- +This identifies the response code of the request. This label is present only on HTTP metrics. + + +type: long + +-- + +*`istio.mesh.connection.security.policy`*:: ++ +-- +This identifies the service authentication policy of the request. It is set to mutual_tls when Istio is used to make communication secure and report is from destination. It is set to unknown when report is from source since security policy cannot be properly populated. + + +type: keyword + +-- + [[exported-fields-jolokia]] == Jolokia fields @@ -20240,15 +20569,6 @@ type: ip -- -*`kubernetes.service.labels.*`*:: -+ --- -Labels for service - -type: object - --- - *`kubernetes.service.created`*:: + -- diff --git a/metricbeat/docs/images/metricbeat-aws-lambda-overview.png b/metricbeat/docs/images/metricbeat-aws-lambda-overview.png new file mode 100644 index 000000000000..84a228b51e39 Binary files /dev/null and b/metricbeat/docs/images/metricbeat-aws-lambda-overview.png differ diff --git a/metricbeat/docs/images/metricbeat-ibmmq-calls.png b/metricbeat/docs/images/metricbeat-ibmmq-calls.png new file mode 100644 index 000000000000..27e09c4c6ea3 Binary files /dev/null and b/metricbeat/docs/images/metricbeat-ibmmq-calls.png differ diff --git a/metricbeat/docs/images/metricbeat-ibmmq-messages.png b/metricbeat/docs/images/metricbeat-ibmmq-messages.png new file mode 100644 index 000000000000..b20360674ae0 Binary files /dev/null and b/metricbeat/docs/images/metricbeat-ibmmq-messages.png differ diff --git a/metricbeat/docs/images/metricbeat-ibmmq-subscriptions.png b/metricbeat/docs/images/metricbeat-ibmmq-subscriptions.png new file mode 100644 index 000000000000..44c8f14a9003 Binary files /dev/null and b/metricbeat/docs/images/metricbeat-ibmmq-subscriptions.png differ diff --git a/metricbeat/docs/modules/aws.asciidoc b/metricbeat/docs/modules/aws.asciidoc index d03f2ce4a557..5f25ab0e1df2 100644 --- a/metricbeat/docs/modules/aws.asciidoc +++ b/metricbeat/docs/modules/aws.asciidoc @@ -12,7 +12,7 @@ This module periodically fetches monitoring metrics from AWS CloudWatch using https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html[GetMetricData API] for AWS services. Note: extra AWS charges on GetMetricData API requests will be generated by this module. -The default metricsets are `ec2`, `sqs`, `s3_request`, `s3_daily_storage`, `cloudwatch` and `rds`. +All metrics are enabled by default. [float] == Module-specific configuration notes @@ -34,8 +34,12 @@ image::./images/metricbeat-aws-overview.png[] [float] == Metricsets -Currently, we have `ec2`, `sqs`, `s3_request`, `s3_daily_storage`, `cloudwatch`, `billing`,`ebs`, `elb`, `rds`, `sns`, `sqs` and `usage` metricset in `aws` module. -Collecting `tags` for `ec2`, `cloudwatch`, `ebs` and `elb` metricset is supported. +Currently, we have `billing`, `cloudwatch`, `dynamodb`, `ebs`, `ec2`, `elb`, +`lambda`, `rds`, `s3_daily_storage`, `s3_request`, `sns`, `sqs` and `usage` +metricset in `aws` module. 
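A minimal sketch of enabling the new `lambda` metricset mentioned above; the `period` and `credential_profile_name` values are assumptions for illustration, not defaults asserted by this patch:

[source,yaml]
----
metricbeat.modules:
- module: aws
  period: 300s
  metricsets:
    - lambda
  credential_profile_name: default
----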
+ +Collecting `tags` for `ec2`, `cloudwatch`, and metricset created based on +`cloudwatch` using light module is supported. * *tags.*: Tag key value pairs from aws resources. A tag is a label that user assigns to an AWS resource. @@ -113,7 +117,7 @@ GetMetricData max page size: 100, based on https://docs.aws.amazon.com/AmazonClo | CloudWatch ListMetrics | Total number of results / ListMetrics max page size | Per region per namespace per collection period | CloudWatch GetMetricData | Total number of results / GetMetricData max page size | Per region per namespace per collection period |=== -`billing`, `ebs`, `elb`, `sns` and `usage` are the same as `cloudwatch` metricset. +`billing`, `ebs`, `elb`, `sns`, `usage` and `lambda` are the same as `cloudwatch` metricset. [float] === `ec2` @@ -232,6 +236,8 @@ The following metricsets are available: * <> +* <> + * <> * <> @@ -256,6 +262,8 @@ include::aws/ec2.asciidoc[] include::aws/elb.asciidoc[] +include::aws/lambda.asciidoc[] + include::aws/rds.asciidoc[] include::aws/s3_daily_storage.asciidoc[] diff --git a/metricbeat/docs/modules/aws/lambda.asciidoc b/metricbeat/docs/modules/aws/lambda.asciidoc new file mode 100644 index 000000000000..9afe68262cae --- /dev/null +++ b/metricbeat/docs/modules/aws/lambda.asciidoc @@ -0,0 +1,24 @@ +//// +This file is generated! See scripts/mage/docs_collector.go +//// + +[[metricbeat-metricset-aws-lambda]] +=== aws lambda metricset + +beta[] + +include::../../../../x-pack/metricbeat/module/aws/lambda/_meta/docs.asciidoc[] + +This is a default metricset. If the host module is unconfigured, this metricset is enabled by default. + +==== Fields + +For a description of each field in the metricset, see the +<> section. + +Here is an example document generated by this metricset: + +[source,json] +---- +include::../../../../x-pack/metricbeat/module/aws/lambda/_meta/data.json[] +---- diff --git a/metricbeat/docs/modules/ibmmq.asciidoc b/metricbeat/docs/modules/ibmmq.asciidoc new file mode 100644 index 000000000000..4912139f9dc1 --- /dev/null +++ b/metricbeat/docs/modules/ibmmq.asciidoc @@ -0,0 +1,90 @@ +//// +This file is generated! See scripts/mage/docs_collector.go +//// + +[[metricbeat-module-ibmmq]] +[role="xpack"] +== IBM MQ module + +beta[] + +This module periodically fetches metrics from a containerized distribution of IBM MQ. + +[float] +=== Compatibility + +The ibmmq `qmgr` metricset is compatible with a containerized distribution of IBM MQ (since version 9.1.0). +The Docker image starts the `runmqserver` process, which spawns the HTTP server exposing metrics in Prometheus +format ([source code](https://github.com/ibm-messaging/mq-container/blob/9.1.0/internal/metrics/metrics.go)). + +The Docker container lifecycle, including metrics collection, has been described in the [Internals](https://github.com/ibm-messaging/mq-container/blob/9.1.0/docs/internals.md) +document. + +The image provides an option to easily enable metrics exporter using an environment +variable: + +`MQ_ENABLE_METRICS` - Set this to `true` to generate Prometheus metrics for the Queue Manager. + +[float] +=== Dashboard + +The ibmmq module includes predefined dashboards with overview information +of the monitored Queue Manager, including subscriptions, calls and messages. 
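As a sketch of how the `MQ_ENABLE_METRICS` switch mentioned above might be applied (not part of this patch), a docker-compose service definition could enable the metrics endpoint that the ibmmq module scrapes; the image tag, queue manager name, and port mapping are illustrative assumptions:

[source,yaml]
----
services:
  ibmmq:
    image: ibmcom/mq:9.1.0.0
    environment:
      LICENSE: accept
      MQ_QMGR_NAME: QM1
      MQ_ENABLE_METRICS: "true"
    ports:
      - "9157:9157"   # Prometheus-format metrics consumed by the ibmmq module
----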
+ +image::./images/metricbeat-ibmmq-calls.png[] + +image::./images/metricbeat-ibmmq-messages.png[] + +image::./images/metricbeat-ibmmq-subscriptions.png[] + + +[float] +=== Example configuration + +The IBM MQ module supports the standard configuration options that are described +in <>. Here is an example configuration: + +[source,yaml] +---- +metricbeat.modules: +- module: ibmmq + metricsets: ['qmgr'] + period: 10s + hosts: ['localhost:9157'] + + # This module uses the Prometheus collector metricset, all + # the options for this metricset are also available here. + metrics_path: /metrics + + # The custom processor is responsible for filtering Prometheus metrics + # not stricly related to the IBM MQ domain, e.g. system load, process, + # metrics HTTP server. + processors: + - script: + lang: javascript + source: > + function process(event) { + var metrics = event.Get("prometheus.metrics"); + Object.keys(metrics).forEach(function(key) { + if (!(key.match(/^ibmmq_.*$/))) { + event.Delete("prometheus.metrics." + key); + } + }); + metrics = event.Get("prometheus.metrics"); + if (Object.keys(metrics).length == 0) { + event.Cancel(); + } + } +---- + +It also supports the options described in <>. + +[float] +=== Metricsets + +The following metricsets are available: + +* <> + +include::ibmmq/qmgr.asciidoc[] + diff --git a/metricbeat/docs/modules/ibmmq/qmgr.asciidoc b/metricbeat/docs/modules/ibmmq/qmgr.asciidoc new file mode 100644 index 000000000000..7617b660ad6f --- /dev/null +++ b/metricbeat/docs/modules/ibmmq/qmgr.asciidoc @@ -0,0 +1,24 @@ +//// +This file is generated! See scripts/mage/docs_collector.go +//// + +[[metricbeat-metricset-ibmmq-qmgr]] +=== IBM MQ qmgr metricset + +beta[] + +include::../../../../x-pack/metricbeat/module/ibmmq/qmgr/_meta/docs.asciidoc[] + +This is a default metricset. If the host module is unconfigured, this metricset is enabled by default. + +==== Fields + +For a description of each field in the metricset, see the +<> section. + +Here is an example document generated by this metricset: + +[source,json] +---- +include::../../../../x-pack/metricbeat/module/ibmmq/qmgr/_meta/data.json[] +---- diff --git a/metricbeat/docs/modules/istio.asciidoc b/metricbeat/docs/modules/istio.asciidoc new file mode 100644 index 000000000000..8d9cd5fe0066 --- /dev/null +++ b/metricbeat/docs/modules/istio.asciidoc @@ -0,0 +1,45 @@ +//// +This file is generated! See scripts/mage/docs_collector.go +//// + +[[metricbeat-module-istio]] +[role="xpack"] +== istio module + +beta[] + +This is the Istio module. The Istio module collects metrics from the +Istio https://istio.io/docs/tasks/observability/metrics/querying-metrics/#about-the-prometheus-add-on[prometheus exporters endpoints]. + +The default metricset is `mesh`. + +[float] +=== Compatibility + +The Istio module is tested with Istio 1.4 + + +[float] +=== Example configuration + +The istio module supports the standard configuration options that are described +in <>. Here is an example configuration: + +[source,yaml] +---- +metricbeat.modules: +- module: istio + metricsets: ["mesh"] + period: 10s + hosts: ["localhost:42422"] +---- + +[float] +=== Metricsets + +The following metricsets are available: + +* <> + +include::istio/mesh.asciidoc[] + diff --git a/metricbeat/docs/modules/istio/mesh.asciidoc b/metricbeat/docs/modules/istio/mesh.asciidoc new file mode 100644 index 000000000000..33ca6d199656 --- /dev/null +++ b/metricbeat/docs/modules/istio/mesh.asciidoc @@ -0,0 +1,23 @@ +//// +This file is generated! 
See scripts/mage/docs_collector.go +//// + +[[metricbeat-metricset-istio-mesh]] +=== istio mesh metricset + +beta[] + +include::../../../../x-pack/metricbeat/module/istio/mesh/_meta/docs.asciidoc[] + + +==== Fields + +For a description of each field in the metricset, see the +<> section. + +Here is an example document generated by this metricset: + +[source,json] +---- +include::../../../../x-pack/metricbeat/module/istio/mesh/_meta/data.json[] +---- diff --git a/metricbeat/docs/modules/sql.asciidoc b/metricbeat/docs/modules/sql.asciidoc index 30e5baf89da6..c3f3d412ea77 100644 --- a/metricbeat/docs/modules/sql.asciidoc +++ b/metricbeat/docs/modules/sql.asciidoc @@ -8,7 +8,7 @@ This file is generated! See scripts/mage/docs_collector.go beta[] -This is the sql module that fetches metrics from a SQL database. You can define driver, datasource and SQL query. +This is the sql module that fetches metrics from a SQL database. You can define driver and SQL query. @@ -26,10 +26,9 @@ metricbeat.modules: metricsets: - query period: 10s - hosts: ["localhost"] + hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"] driver: "postgres" - datasource: "user=myuser password=mypassword dbname=mydb sslmode=disable" sql_query: "select now()" ---- diff --git a/metricbeat/docs/modules/system.asciidoc b/metricbeat/docs/modules/system.asciidoc index 5a9da32636d5..74c9c06a7940 100644 --- a/metricbeat/docs/modules/system.asciidoc +++ b/metricbeat/docs/modules/system.asciidoc @@ -172,12 +172,13 @@ metricbeat.modules: #- fsstat # File system summary metrics #- raid # Raid #- socket # Sockets and connection info (linux only) + #- service # systemd service information enabled: true period: 10s processes: ['.*'] # Configure the metric types that are included by these metricsets. - cpu.metrics: ["percentages"] # The other available options are normalized_percentages and ticks. + cpu.metrics: ["percentages","normalized_percentages"] # The other available option is ticks. core.metrics: ["percentages"] # The other available option is ticks. # A list of filesystem types to ignore. The filesystem metricset will not @@ -229,6 +230,9 @@ metricbeat.modules: # Diskio configurations #diskio.include_devices: [] + + # Filter systemd services by status or sub-status + #service.state_filter: [] ---- [float] diff --git a/metricbeat/docs/modules_list.asciidoc b/metricbeat/docs/modules_list.asciidoc index e3ba332daa0d..c07155249a33 100644 --- a/metricbeat/docs/modules_list.asciidoc +++ b/metricbeat/docs/modules_list.asciidoc @@ -16,12 +16,13 @@ This file is generated! See scripts/mage/docs_collector.go |<> beta[] |image:./images/icon-no.png[No prebuilt dashboards] | .1+| .1+| |<> beta[] |<> |image:./images/icon-yes.png[Prebuilt dashboards are available] | -.12+| .12+| |<> beta[] +.13+| .13+| |<> beta[] |<> |<> beta[] |<> |<> |<> +|<> beta[] |<> |<> |<> @@ -100,6 +101,10 @@ This file is generated! 
See scripts/mage/docs_collector.go |<> |image:./images/icon-no.png[No prebuilt dashboards] | .2+| .2+| |<> |<> +|<> beta[] |image:./images/icon-yes.png[Prebuilt dashboards are available] | +.1+| .1+| |<> beta[] +|<> beta[] |image:./images/icon-no.png[No prebuilt dashboards] | +.1+| .1+| |<> beta[] |<> |image:./images/icon-no.png[No prebuilt dashboards] | .1+| .1+| |<> |<> |image:./images/icon-yes.png[Prebuilt dashboards are available] | @@ -255,6 +260,8 @@ include::modules/googlecloud.asciidoc[] include::modules/graphite.asciidoc[] include::modules/haproxy.asciidoc[] include::modules/http.asciidoc[] +include::modules/ibmmq.asciidoc[] +include::modules/istio.asciidoc[] include::modules/jolokia.asciidoc[] include::modules/kafka.asciidoc[] include::modules/kibana.asciidoc[] diff --git a/metricbeat/module/windows/perfmon/defs_pdh_windows.go b/metricbeat/helper/windows/pdh/defs_pdh_windows.go similarity index 99% rename from metricbeat/module/windows/perfmon/defs_pdh_windows.go rename to metricbeat/helper/windows/pdh/defs_pdh_windows.go index 97f070a600a8..bcc62c4ffd1d 100644 --- a/metricbeat/module/windows/perfmon/defs_pdh_windows.go +++ b/metricbeat/helper/windows/pdh/defs_pdh_windows.go @@ -20,7 +20,7 @@ // +build ignore -package perfmon +package pdh /* #include diff --git a/metricbeat/module/windows/perfmon/defs_pdh_windows_386.go b/metricbeat/helper/windows/pdh/defs_pdh_windows_386.go similarity index 99% rename from metricbeat/module/windows/perfmon/defs_pdh_windows_386.go rename to metricbeat/helper/windows/pdh/defs_pdh_windows_386.go index 3995b8b9b011..e794050dcf55 100644 --- a/metricbeat/module/windows/perfmon/defs_pdh_windows_386.go +++ b/metricbeat/helper/windows/pdh/defs_pdh_windows_386.go @@ -18,7 +18,7 @@ // Code generated by cmd/cgo -godefs; DO NOT EDIT. // cgo.exe -godefs defs_pdh_windows.go -package perfmon +package pdh type PdhErrno uintptr diff --git a/metricbeat/module/windows/perfmon/defs_pdh_windows_amd64.go b/metricbeat/helper/windows/pdh/defs_pdh_windows_amd64.go similarity index 99% rename from metricbeat/module/windows/perfmon/defs_pdh_windows_amd64.go rename to metricbeat/helper/windows/pdh/defs_pdh_windows_amd64.go index 3995b8b9b011..e794050dcf55 100644 --- a/metricbeat/module/windows/perfmon/defs_pdh_windows_amd64.go +++ b/metricbeat/helper/windows/pdh/defs_pdh_windows_amd64.go @@ -18,7 +18,7 @@ // Code generated by cmd/cgo -godefs; DO NOT EDIT. // cgo.exe -godefs defs_pdh_windows.go -package perfmon +package pdh type PdhErrno uintptr diff --git a/metricbeat/module/windows/doc.go b/metricbeat/helper/windows/pdh/doc.go similarity index 62% rename from metricbeat/module/windows/doc.go rename to metricbeat/helper/windows/pdh/doc.go index 7068d9f2ef98..fc6ec0cd1326 100644 --- a/metricbeat/module/windows/doc.go +++ b/metricbeat/helper/windows/pdh/doc.go @@ -15,7 +15,10 @@ // specific language governing permissions and limitations // under the License. -/* -Package windows is a Metricbeat module that contains MetricSets. 
-*/ -package windows +package pdh + +//go:generate go run mkpdh_defs.go +//go:generate go run ../run.go -cmd "go tool cgo -godefs defs_pdh_windows.go" -goarch amd64 -output defs_pdh_windows_amd64.go +//go:generate go run ../run.go -cmd "go tool cgo -godefs defs_pdh_windows.go" -goarch 386 -output defs_pdh_windows_386.go +//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zpdh_windows.go pdh_windows.go +//go:generate gofmt -w defs_pdh_windows_amd64.go defs_pdh_windows_386.go zpdh_windows.go diff --git a/metricbeat/module/windows/perfmon/mkpdh_defs.go b/metricbeat/helper/windows/pdh/mkpdh_defs.go similarity index 99% rename from metricbeat/module/windows/perfmon/mkpdh_defs.go rename to metricbeat/helper/windows/pdh/mkpdh_defs.go index 1f30a1161fb4..fd71b98bb27f 100644 --- a/metricbeat/module/windows/perfmon/mkpdh_defs.go +++ b/metricbeat/helper/windows/pdh/mkpdh_defs.go @@ -48,7 +48,7 @@ const fileTemplate = ` // +build ignore -package perfmon +package pdh /* #include diff --git a/metricbeat/module/windows/perfmon/pdh_query_windows.go b/metricbeat/helper/windows/pdh/pdh_query_windows.go similarity index 89% rename from metricbeat/module/windows/perfmon/pdh_query_windows.go rename to metricbeat/helper/windows/pdh/pdh_query_windows.go index 5c5a7149d3ba..c7d8f0d82bf0 100644 --- a/metricbeat/module/windows/perfmon/pdh_query_windows.go +++ b/metricbeat/helper/windows/pdh/pdh_query_windows.go @@ -17,7 +17,7 @@ // +build windows -package perfmon +package pdh import ( "regexp" @@ -43,8 +43,8 @@ type Counter struct { // Query contains the pdh. type Query struct { - handle PdhQueryHandle - counters map[string]*Counter + Handle PdhQueryHandle + Counters map[string]*Counter } // CounterValue contains the performance counter values. @@ -60,42 +60,42 @@ func (q *Query) Open() error { if err != nil { return err } - q.handle = h - q.counters = make(map[string]*Counter) + q.Handle = h + q.Counters = make(map[string]*Counter) return nil } // AddEnglishCounter adds the specified counter to the query. func (q *Query) AddEnglishCounter(counterPath string) (PdhCounterHandle, error) { - h, err := PdhAddEnglishCounter(q.handle, counterPath, 0) + h, err := PdhAddEnglishCounter(q.Handle, counterPath, 0) return h, err } // AddCounter adds the specified counter to the query. -func (q *Query) AddCounter(counterPath string, counter CounterConfig, wildcard bool) error { - if _, found := q.counters[counterPath]; found { +func (q *Query) AddCounter(counterPath string, instance string, format string, wildcard bool) error { + if _, found := q.Counters[counterPath]; found { return nil } var err error var instanceName string // Extract the instance name from the counterPath. 
- if counter.InstanceName == "" || wildcard { + if instance == "" || wildcard { instanceName, err = matchInstanceName(counterPath) if err != nil { return err } } else { - instanceName = counter.InstanceName + instanceName = instance } - h, err := PdhAddCounter(q.handle, counterPath, 0) + h, err := PdhAddCounter(q.Handle, counterPath, 0) if err != nil { return err } - q.counters[counterPath] = &Counter{ + q.Counters[counterPath] = &Counter{ handle: h, instanceName: instanceName, - format: getPDHFormat(counter.Format), + format: getPDHFormat(format), } return nil } @@ -134,7 +134,7 @@ func (q *Query) RemoveUnusedCounters(counters []string) error { } } unused := make(map[string]*Counter) - for counterPath, counter := range q.counters { + for counterPath, counter := range q.Counters { if !matchCounter(counterPath, counters) { unused[counterPath] = counter } @@ -147,7 +147,7 @@ func (q *Query) RemoveUnusedCounters(counters []string) error { if err != nil { return err } - delete(q.counters, counterPath) + delete(q.Counters, counterPath) } return nil } @@ -163,16 +163,16 @@ func matchCounter(counterPath string, counterList []string) bool { // CollectData collects the value for all counters in the query. func (q *Query) CollectData() error { - return PdhCollectQueryData(q.handle) + return PdhCollectQueryData(q.Handle) } // GetFormattedCounterValues returns an array of formatted values for a query. func (q *Query) GetFormattedCounterValues() (map[string][]CounterValue, error) { - if q.counters == nil || len(q.counters) == 0 { + if q.Counters == nil || len(q.Counters) == 0 { return nil, errors.New("no counter list found") } - rtn := make(map[string][]CounterValue, len(q.counters)) - for path, counter := range q.counters { + rtn := make(map[string][]CounterValue, len(q.Counters)) + for path, counter := range q.Counters { rtn[path] = append(rtn[path], getCounterValue(counter)) } return rtn, nil @@ -206,7 +206,7 @@ func (q *Query) ExpandWildCardPath(wildCardPath string) ([]string, error) { // Close closes the query and all of its counters. func (q *Query) Close() error { - return PdhCloseQuery(q.handle) + return PdhCloseQuery(q.Handle) } // matchInstanceName will check first for instance and then for any objects names. diff --git a/metricbeat/module/windows/perfmon/pdh_query_windows_test.go b/metricbeat/helper/windows/pdh/pdh_query_windows_test.go similarity index 94% rename from metricbeat/module/windows/perfmon/pdh_query_windows_test.go rename to metricbeat/helper/windows/pdh/pdh_query_windows_test.go index 4e45ec827180..2b5038e42c4b 100644 --- a/metricbeat/module/windows/perfmon/pdh_query_windows_test.go +++ b/metricbeat/helper/windows/pdh/pdh_query_windows_test.go @@ -15,7 +15,7 @@ // specific language governing permissions and limitations // under the License. 
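With the PDH code promoted from the perfmon metricset into the shared `metricbeat/helper/windows/pdh` package, and with `Handle` and `Counters` exported, other Windows metricsets can drive a query directly. The following is a minimal, Windows-only sketch of the surface shown in this diff, namely `Open`, the flattened `AddCounter(counterPath, instanceName, format, wildcard)` signature, `CollectData`, `GetFormattedCounterValues` and `Close`; the counter path, instance name and sampling interval are illustrative values taken from the tests, not a recommended configuration.

[source,go]
----
// Minimal sketch of using the relocated pdh.Query API (Windows only).
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/elastic/beats/metricbeat/helper/windows/pdh"
)

func main() {
	var q pdh.Query
	if err := q.Open(); err != nil {
		log.Fatal(err)
	}
	defer q.Close()

	// AddCounter now receives the instance name and format directly instead of
	// a perfmon CounterConfig value.
	path := `\Processor Information(_Total)\% Processor Time`
	if err := q.AddCounter(path, "TestInstanceName", "float", false); err != nil {
		log.Fatal(err)
	}

	// Rate-based counters generally need two collections before a formatted
	// value is available, so collect twice with a short pause in between.
	for i := 0; i < 2; i++ {
		if err := q.CollectData(); err != nil {
			log.Fatal(err)
		}
		time.Sleep(time.Second)
	}

	values, err := q.GetFormattedCounterValues()
	if err != nil {
		log.Fatal(err)
	}
	for counterPath, vals := range values {
		fmt.Println(counterPath, vals)
	}
}
----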
-package perfmon +package pdh import ( "syscall" @@ -38,8 +38,7 @@ func TestAddCounterInvalidArgWhenQueryClosed(t *testing.T) { queryPath, err := q.GetCounterPaths(validQuery) // if windows os language is ENG then err will be nil, else the GetCounterPaths will execute the AddCounter if assert.NoError(t, err) { - counter := CounterConfig{Format: "float", InstanceName: "TestInstanceName"} - err = q.AddCounter(queryPath[0], counter, false) + err = q.AddCounter(queryPath[0], "TestInstanceName", "float", false) assert.Error(t, err, PDH_INVALID_HANDLE) } else { assert.Error(t, err, PDH_INVALID_ARGUMENT) @@ -70,12 +69,11 @@ func TestSuccessfulQuery(t *testing.T) { t.Fatal(err) } defer q.Close() - counter := CounterConfig{Format: "float", InstanceName: "TestInstanceName"} queryPath, err := q.GetCounterPaths(validQuery) if err != nil { t.Fatal(err) } - err = q.AddCounter(queryPath[0], counter, false) + err = q.AddCounter(queryPath[0], "TestInstanceName", "floar", false) if err != nil { t.Fatal(err) } diff --git a/metricbeat/module/windows/perfmon/pdh_windows.go b/metricbeat/helper/windows/pdh/pdh_windows.go similarity index 99% rename from metricbeat/module/windows/perfmon/pdh_windows.go rename to metricbeat/helper/windows/pdh/pdh_windows.go index b817ddaec161..bccca2c5784d 100644 --- a/metricbeat/module/windows/perfmon/pdh_windows.go +++ b/metricbeat/helper/windows/pdh/pdh_windows.go @@ -17,7 +17,7 @@ // +build windows -package perfmon +package pdh import ( "strconv" diff --git a/metricbeat/module/windows/perfmon/pdh_windows_test.go b/metricbeat/helper/windows/pdh/pdh_windows_test.go similarity index 99% rename from metricbeat/module/windows/perfmon/pdh_windows_test.go rename to metricbeat/helper/windows/pdh/pdh_windows_test.go index be08eac32d21..c17a68c31c3c 100644 --- a/metricbeat/module/windows/perfmon/pdh_windows_test.go +++ b/metricbeat/helper/windows/pdh/pdh_windows_test.go @@ -15,7 +15,7 @@ // specific language governing permissions and limitations // under the License. -package perfmon +package pdh import ( "syscall" diff --git a/metricbeat/module/windows/perfmon/zpdh_windows.go b/metricbeat/helper/windows/pdh/zpdh_windows.go similarity index 99% rename from metricbeat/module/windows/perfmon/zpdh_windows.go rename to metricbeat/helper/windows/pdh/zpdh_windows.go index 85cb93dcc758..8d2891b73cba 100644 --- a/metricbeat/module/windows/perfmon/zpdh_windows.go +++ b/metricbeat/helper/windows/pdh/zpdh_windows.go @@ -17,7 +17,7 @@ // Code generated by 'go generate'; DO NOT EDIT. 
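The generated bindings surface PDH status codes such as `PDH_CSTATUS_NO_COUNTER` as typed error values on the new package. As a hedged illustration of how a consumer can treat the "counter does not exist" family as non-fatal, in the same spirit as the `ignore_non_existent_counters` handling in the perfmon reader further down in this diff, a small helper might look like this (the package name is hypothetical):

[source,go]
----
// Illustrative helper only, in a hypothetical consumer package.
package example

import "github.com/elastic/beats/metricbeat/helper/windows/pdh"

// isMissingCounter reports whether a PDH error means the requested counter,
// instance or object simply does not exist on this host, so the caller can
// skip that counter instead of failing the whole query.
func isMissingCounter(err error) bool {
	switch err {
	case pdh.PDH_CSTATUS_NO_COUNTER,
		pdh.PDH_CSTATUS_NO_COUNTERNAME,
		pdh.PDH_CSTATUS_NO_INSTANCE,
		pdh.PDH_CSTATUS_NO_OBJECT:
		return true
	}
	return false
}
----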
-package perfmon +package pdh import ( "syscall" diff --git a/metricbeat/module/windows/run.go b/metricbeat/helper/windows/run.go similarity index 100% rename from metricbeat/module/windows/run.go rename to metricbeat/helper/windows/run.go diff --git a/metricbeat/mb/testing/fetcher.go b/metricbeat/mb/testing/fetcher.go index be8264b5cbb5..5b01e1b81382 100644 --- a/metricbeat/mb/testing/fetcher.go +++ b/metricbeat/mb/testing/fetcher.go @@ -20,6 +20,7 @@ package testing import ( "testing" + "github.com/elastic/beats/libbeat/beat" "github.com/elastic/beats/libbeat/common" "github.com/elastic/beats/metricbeat/mb" ) @@ -32,6 +33,7 @@ type Fetcher interface { FetchEvents() ([]mb.Event, []error) WriteEvents(testing.TB, string) WriteEventsCond(testing.TB, string, func(common.MapStr) bool) + StandardizeEvent(mb.Event, ...mb.EventModifier) beat.Event } // NewFetcher returns a test fetcher from a Metricset configuration @@ -73,6 +75,10 @@ func (f *reportingMetricSetV2Fetcher) WriteEventsCond(t testing.TB, path string, } } +func (f *reportingMetricSetV2Fetcher) StandardizeEvent(event mb.Event, modifiers ...mb.EventModifier) beat.Event { + return StandardizeEvent(f, event, modifiers...) +} + type reportingMetricSetV2FetcherError struct { mb.ReportingMetricSetV2Error } @@ -96,6 +102,10 @@ func (f *reportingMetricSetV2FetcherError) WriteEventsCond(t testing.TB, path st } } +func (f *reportingMetricSetV2FetcherError) StandardizeEvent(event mb.Event, modifiers ...mb.EventModifier) beat.Event { + return StandardizeEvent(f, event, modifiers...) +} + type reportingMetricSetV2FetcherWithContext struct { mb.ReportingMetricSetV2WithContext } @@ -118,3 +128,7 @@ func (f *reportingMetricSetV2FetcherWithContext) WriteEventsCond(t testing.TB, p t.Fatal("writing events", err) } } + +func (f *reportingMetricSetV2FetcherWithContext) StandardizeEvent(event mb.Event, modifiers ...mb.EventModifier) beat.Event { + return StandardizeEvent(f, event, modifiers...) +} diff --git a/metricbeat/metricbeat.reference.yml b/metricbeat/metricbeat.reference.yml index 0ced4f40cd61..4a8dea2095bd 100644 --- a/metricbeat/metricbeat.reference.yml +++ b/metricbeat/metricbeat.reference.yml @@ -73,12 +73,13 @@ metricbeat.modules: #- fsstat # File system summary metrics #- raid # Raid #- socket # Sockets and connection info (linux only) + #- service # systemd service information enabled: true period: 10s processes: ['.*'] # Configure the metric types that are included by these metricsets. - cpu.metrics: ["percentages"] # The other available options are normalized_percentages and ticks. + cpu.metrics: ["percentages","normalized_percentages"] # The other available option is ticks. core.metrics: ["percentages"] # The other available option is ticks. # A list of filesystem types to ignore. The filesystem metricset will not @@ -131,6 +132,9 @@ metricbeat.modules: # Diskio configurations #diskio.include_devices: [] + # Filter systemd services by status or sub-status + #service.state_filter: [] + #------------------------------ Aerospike Module ------------------------------ - module: aerospike metricsets: ["namespace"] @@ -2020,6 +2024,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. 
+#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/metricbeat/module/docker/diskio/helper.go b/metricbeat/module/docker/diskio/helper.go index 17895944e28e..e75c21b886fb 100644 --- a/metricbeat/module/docker/diskio/helper.go +++ b/metricbeat/module/docker/diskio/helper.go @@ -174,5 +174,11 @@ func calculatePerSecond(duration time.Duration, old uint64, new uint64) float64 if value < 0 { value = 0 } - return value / duration.Seconds() + + timeSec := duration.Seconds() + if timeSec == 0 { + return 0 + } + + return value / timeSec } diff --git a/metricbeat/module/kubernetes/fields.go b/metricbeat/module/kubernetes/fields.go index 6feb3a433208..7e772a7534b1 100644 --- a/metricbeat/module/kubernetes/fields.go +++ b/metricbeat/module/kubernetes/fields.go @@ -32,5 +32,5 @@ func init() { // AssetKubernetes returns asset data. // This is the base64 encoded gzipped contents of ../metricbeat/module/kubernetes. func AssetKubernetes() string { - return "eJzsXU9z47aSv8+nQM1psuXosLW1hzlsVeK8V881k3le25MctrYUmGxJiEmAAUB79D79KwD8JxIAQRGSPTZ1SGVsq/uH7gbQ3Wg0fkQPsP+IHsp74BQkiHcISSIz+Ijef2p++P4dQimIhJNCEkY/ov95hxBC7R+gHCQnifo2hwywgI9oi98hJEBKQrfiI/q/90Jk7y/Q+52Uxfv/V7/bMS7XCaMbsv2INjgT8A6hDYEsFR81gx8RxTn04KmP3BeKA2dlUf3EAk99ruiG8RyrHyNMUyQklkRIkgjENqhgqUA5pngLKbrfd/isKgpdNF1EuCAC+CPw5jc2UB5gPfn9dH2FDMGOKOvPoUjrTx9aFx6Hv0oQcpVkBKg8+JMa5wPsnxhPe7/zoFWfS00PwTdISqXXmpHwouAgWMkTiIfjxlCGFFlp9wGI8v6UGFzkBzASVsQHgDRZ9CHJSiGBX2imosAJXDTS+cGL6xH4fTxY/7i7u0YDkgPLZGlEUWieA5JDnlQClWvFKL4aKgyaBRqw6GNJ+X7NSxoPxu8gd8CR3EHNA5UCBEr5HvUZ9cE8ENrnNgPJJ0JTtbpW1EdUkheMxl2japJoh2maqVWqIxQvmv7aPROJWtQ1SbRhtWYClolH4IKwiKZREWxQDIfZh6Ald7C5zYRQTxIb4T7zHOSORbRHPTEtRAeDZiKiGTYj7lOt2RacJSCElaPNEG37fZdeUpQrAcng9zXNlJX3WX/dGwzk8vorEpAwmvaRtZxyyBnfq22dpEDl6n7femZDvhmjW8svjV/2Ebm+fIDqZ/VHiFBU86wwjEF8JFyWODsnworlGMBNKlasALpKWDlY/UahHbD+Uub3wNWKqwiiDcmg+QPG3WoUEnMJaQSjuTUGgwShCeglpjLumod1AqhAIJr1N/tqybW3vyrFqgCeAJUkg9V/OEfI7v+ExKYA84v1FDnUc74GgXKScFZNJ9TCcevENgxR5jP148eVlHmZYUkeAdlY+aDNN94amqakd6ia/igQQf4FZmbH1PQU0ArBJLV2IPu0GmNBOsA4UcUdmKfQsCLvwSAKRgU8q3oNhCn6HYI+vYK7KIM1PAQaQ8UVFDupodMf36bqgVl3GpMGWfn4O3k7tto68YGwQJYsS2/I8Zy8iN6CNXfTZZZhCTTZH2PJNm2JmuCFMlGFwPybGMepuyeNQopnQg0mOl0w92XyAPKsW07FGu2IkGzLcY4MCDfYUFdiCoqaptFkqPJO4zm0WGjXETY/DAPzDHpsUYdrMik5V+vYfNld0U1GtjsZYOqMbnlJKaHbqKFKu34metNS30YVI39WGWSSrozco6zkbdK/0qZAWGouVva4TIlcwaNLEVPZa3pI07OP1zDkoKBBGpFnTbLPvN1rqMSEzjvj6Ei3oRfliENHlmtJcnsqN8Wy/4uRhM2tIogGBDvpleBdfCxDef0VlQJvwSII17C7UPR3nfPQBshH9WCQjNsIjxMfY9BlYlmU+2wca0n9GZFv93PZGJ2S+iXjUImeYurcsA7QYsqUWFygRwEHgjVGAekIwwYWS2FVWPekFpVIcAbpepMx7PrDOuSoopwYY1DSxQLhmqb6N9votJBkEmcaO8JZxhIs8X0G6nvewWYkJ/L7G20KG0IhNfCb7Hu7DH5QP3FKBJENKqn+LqT2A7yMbcPzxyOj+sy2yg3fsImLEX7EJMP2JNT8BckVCaOQmTcWTqNwXWvpNENFCS5wQuReub526s2KWv3l65eOseRwyajF7vVLRS/p4UIhaiVwn1TM29vt3juKuIndaRto54lzOJ2DEA5+lyMWKsUoBJDDLuMD0qZhAXR4hhUtdfQ2Fuq+BY4cw53OlX5ZAjFicA73hfuVv3bQT3QtHfpHL967DBnzDAezMgi3j9mVEB+UKaBXNUdubm/9M6QG/MT4A6FbAe402GuQx+9mmEiADJNLgbewwWVmSSROSQ/aEbV5K8UGOfg0uyb+k/Ez4dG8nKia2cOY3ESs83kLEcUNY1JXsoi9kJBPDi7ehrNjl1LX/X7rMZhdQpXn/Xyx2BlijK+W6KKb2ecsy4Cbyw+zMvyXDbHqKkWc/P6zlKCesyr93GWuZy5vVf+Nx+4LziGsivpfjEbke0U3HAvJy0SWHIbEl2JeM5ylmHcp5l2KeQOGsRTz2oEsxbzBGJdi3qWYdynmnV/Ma/Eyp5b3PjH+8FcJpd3jPGbrU6BBOZym5G7+dv7ZEGxq66rN3OdLlHRDKBG7KO7E14ZYCGucpjFs+PdaL4rgiCGnUMhdVJ6a4uj0kZxEma8t324Fs6ZuD8xYCqtEBeyJZPb4+hjDhUeSaE8ipg+sjy1qyj6D3QHO5C5G
XXjLvKGK7ImgU9Tk+zkZPI6jqnB21wcHSe5BNmsS4BT4ioh1joV05GTuGcsA9x29sUvru/bWutY1EajH410fja5WfddnPyFhdbeDbusNU/1a56xA7UN6bjS/kTssEeaAtkCBY2l6hdS1wtW6esCBUBXYKuF+6ncuQROSYW4Dc+jaK+1Ls70qLohDwngqjNwb45MkB/OzAnNJkjLD3AgB7bBALNEF6KkFof6mxHlhQTlcTHxpvw3hQq4rVtTRr2N6ce9dDVCNU/NALQ/1s75VdS97nByQYjGCp82FiMFJnMEg4ZsMt4ZfDZ3KEiBtmwOQR6AWcSSs2K8lsyFo9zQseqGeO/XmRXejKYWCa6yw33TjSO53+6I5YvdzzEHiFEscZPYj+jCUEBaCJUSvMk9E7rw68U0k+5ScvsM3ixAH3E/+IN8ECDipOJgEmgFh1C/5kyaYK85+nrq1TlzGmiQiFD3tSLKrltwnLNodx4qmzoOvo3cM+a3qGNIViD/tXpKIRxlfKfmrBKSTw2RDlIPAOkAsyYEmDQrZZp0R+hARzM1nxKHgIBSaqpuMa0Eg9JFlj5CuLRhPtS7UPG1y8a0QuCDxLeen66um30xlPR51xW08pHg/VM2HRhjHXTxoZ/HwMD3dfK0pTxB93An79eqXEd7d8HOO9965UKYjhuUu2XKXzPGJfZfsi7K37/sa2VJbbvssteW9T7za8qWEuAd4KSG2A19KiD0lxBSkspto6zX/9qqN7wYSII86T+ui1WSTObedRwViDsXzzcWnyda8boXccUxFTqR8OTq5s+qkSUMv9frmEyjNvy+l+hMFtFTpt5+BcN5CgX7noNlxEbgP6hw3uFtUL+PudovHdX+78WlK6szgHLNuk1x5gCe6i+/eE8YZjDFBgTMchaZIQmY6mpZKucq1xzt910CBOwd6y2IM2FvQlMXuDYrQvgM1werBHZs5OeyCpd9lCnuJSM1niUjbz/ekkO8uIn0TZ0Yv5JRkAOslNkWZ0mzvTTXYU1tq0wNF9JughHXWi3w+thwF9WC/yBm1tBmKN82O7jX0NtKBB9PFPeTeoeH6tZ8aGrE8Dc4O3WHDKz9WNgJpLsoriegbgiNiKfAW1ic7vTSggk9S1+dA4z5H7bRo+LafE7V37oxoWvMfbG3u9VhakhxdOe/qctJmkdMoVfK27iaduvh+G5I5XAbkGsH1u4/MldohPV93jymXW8b7enhvVAb29JjW0cMz7fzr1zG9PCZ18oiMzNvDI7CDhwfSjO4dIb07wg1jSt8OZ9eO46x6cr8O7yX/kF4dUTp1TO3TEQuR93r/9A4docYZ3J3j2N4c4VoNBzvStWFiT444S0t4N47JvTiO16WlD8fRXTjiKjKs/8bU7huxVBncd2N6143JIrKRCeu3cZTd2JzDseYax9w8Dmyr0WyHe5oEbUpepg/lPRg3vXLW9zSx5rtHtrYyAxG4M4yL/3ZPk2sF50aR7T2jxjbND8YexHOjm2ceTnwBT6u5MTmfV4u5zjihj72v1jvRLLj+45zQbTS1fzGkUYf2pCf0AiHO9F29ICcYwAjKs1iDfzBukxhkDUSyg7TM5rVI7WQOGnpL2mDI45WlDQaXTI9kM9b8tOOZlFmUgd1WVoqwlJAXcki65tmsBhHZqslqo7ukY5Z0zBikJR2zpGMmIlrSMUs6ZknHLOmYJR1jxeDt/mf423r/eSFM6fs3iMX63faO2yThP+H8YenfaIokQ0DTzmDs21Ig7DlpiQloPBOwj2jejLBj8s3EgqWrgoMKUxQC3Sw0nwvjmqWoJYoqoh4EVaAUg29NyjvqRuKVgs7p4N1ajGV8JxkgnufT2UAEbRgDHDNTpi4rfddn/LLf0z+6HdRAPO2T68Tet05ILMt4F6yLHRbuakH7APqD8FUhN8PRjNCHquPrBXrCROr/kcBzQrH/DUXAqfsOuL17biDKFqFmYpfvgcekIlB3LRahEraDNr9HgDF8RjthD7qGdsHM0t/vRkPoQ4PqUneZVEq75FjsPjNW/IyTB7bZXKC/ca5vg12XWXaBmv+tfj9Urfow3mhfrUAfLlleZCAhvWglcYkpZfKmpJoF4xfon//89RPJMkh/qIa/sk6UKXc+RhvL6/Jj110HQ9dVdTxJ7ZfXX3XvL2FYevReO7VngVSxgxTZGR7KyXcvZKRgseCQqKXgI/rv1X/FQN5gCRSoD/s4vLnlmC6pn7UfmVHi6R+KGhNBVeBtCudH+xnUCnx+3K3a6tp9103YhDP6J7uP5dIYalEcmsHpS7hLgy4rHAMa/WPBuQysdDoOY9US3D4zQvi0JFDBMtKj1Fy6SJTTPOMtlTanYEipmEi0L1IPjKTjd4q1KEUBNB1cQve5Rgfcu+mE2oSIilltdFvL1Q2tLWl+TxByGKsWLNkhMUj01xCesLC2zW5WKSzkuraAaDiU0HVD+RoGL6l9gsC3E7FXlEfZp4DTjFA35zGb+6Ui0LDGGwm8mVIaScL0UwxcOYEbTLKOJkL+x/9Pd6iXQpGxfT7zoYrOwtgSjLI2FtjS8yF4ug13j09WpIaLLRxpd7wiIwkOjwWPwlFzQYRu2ERPIgVBuKf90qxY6ZcWY1tfU3FsUX8QBSRzLszFwti2FXHorXPhlZ4PVodXALAitT5jEB2U4TME1L0BG2lxiNkHO2Zixp/ymBXY61bM3WwH+iB5CRdogzMBKiov6QNlT9Q9b0pa7RReI52VmNEoD/j4FsOY0X7n+u3pAuymH3b3sq8/uq4bPI2AmtFZtcbUtJI6XxPsjsyfK4T74rp7PRZ7Nop5VuQVWn8bsM6xy0l0p2+xn8o0u7pRMdK4Qk4KR1/i7zejawQMXBAhgcpHlpV5rO2qJYsM3XrvQhvOcv2XP6plEn707GnwrQBO1FZ7IJxTJQR+M0AVCUeq1jd/wuKZioe1N6DvrGPqIMwxBk4SxlP9ng3raMfhFzCOt7BOMux44z6A+60hgjSRJjUwsCwUEnC5LDTJMMlPZqZJhr8LY73+7dJjqWYw6zkMfiY0hbQWi5tVlUZcV/YzY27ctNn7eqLFnx9KbpqAnTZOEhBinfcL4Sdw+EmTQIqEnccJZ9r1b5cr18Syb6mzZk+kvobE/n7d4MfhiQGF7OraymzHhFyfhqMi7WI7MeyaxrgKj47r9HbCo/UezOps/aY+W78Gqjan1Wp17JF6THTzIs06I+nOOsTE2nCz4b0You3n4iBWzrIiWLXxmb8UnDBZ2IXqzlrGeAdpRp+b3eEL31V2sACObsw/bi3doULzmM+Fyz+H46FS83cqNnavO/ycSmjV4576ybyKE7rf6726BaervDjL+vf80ME50j34VpdYUtyUWbavuY1Ks1NupC+s/VWyg4dy5y0tHZpRFpfTnQbeVFj/V2MdOxPsS2kKAsOB0A3jOaToww7zVG9QAtIffBcI44QdhwN1Hp3L/ivNE1h0R2hmjvrqBfpDDfUPNdY/1GD/cOwfloEfMT5NTovSmB8uioyAQJINA1X/P92BrVoOSBIr41JR802UM4eotxU
iT0IlK4UEfpw7fkUlcIozdHXd2H0lBDs3+Ga+MCssrgdVE0O/fLl1z4OGpWOExzB0BBgZw+n6HmeYJm6JBvD7zHCKfq7oNFblYDpnntcDG9BowkK65SoYP34sV4aCC33NQMVtTpsYM0LD4R82Egd7txiU7HsK9g/K9UOUpjmYS2mDVWWwAdq3npHXA2p1NU+4W75wxBoosYRNmcWLSGqK0UISn9DGUlpDj+tu1xFh834/+gDKszAb+G01gr7beoYY6UB4jfN3VJh0Yse606uk9qsPnFWXENEzxEuDag8fwBpcGzmcWs+dGKXjdb0sdTdK7oB9GWqulRsArFn09CM0sdY786JNp0t3lEXPco8JTdl6LS9SNJXH7orb5Y32/scNyEf1YJCv7o325Xl2C7nl+Ymui/TKW8YvL5EffpaXyMPwjHfQj1oFdFj6M8shmZOzGUqlU+pj5bY8DV19Auff8jT0VAEtT0O3nzf5NPTXwAehz/D+8t8dry73oZzjbWrj5FVg/h0AAP//kXovWQ==" + return "eJzsXU9z27iSv+dToHLKbHl02NraQw5bNeN5r54rmTyv7cwctrY0MNmSMCYBDgDa0fv0rwD+g0gABEVIdmzykIolsfuH7gbQ3QAaP6IH2H9ED+U9cAoSxDuEJJEZfETvP7Ufvn+HUAoi4aSQhNGP6H/eIYRQ9wOUg+QkUW9zyAAL+Ii2+B1CAqQkdCs+ov97L0T2/gK930lZvP9/9d2OcblOGN2Q7Ue0wZmAdwhtCGSp+KgZ/IgozqEHTz1yXygOnJVF/YkFnnqu6IbxHKuPEaYpEhJLIiRJBGIbVLBUoBxTvIUU3e8NPquagonGRIQLIoA/Am+/sYHyAOvJ76frK1QRNETZPIcibZ4+NBMeh79KEHKVZASoPPhJg/MB9k+Mp73vPGjVc6npIfgGSan02jASXhQcBCt5AvFw3FSUIUVW2n0Aorw/JQYX+QGMhBXxASBNFn1IslJI4BeaqShwAhetdH7w4noEfh8P1j/u7q7RgOTAMlkaURSa54DkkCeVQOVaMYqvhhqDZoEGLPpYUr5f85LGg/E7yB1wJHfQ8EClAIFSvkd9Rn0wD4T2uc1A8onQVI2uNfURleQFo3HHqIYk2mGaZmqUMoTiRdMfu2ciUYO6Jok2rNFMwDDxCFwQFtE0aoItimEz+xC05A4mt5kQmk5iI9xnnoPcsYj2qDumheig0UxENMO2xX2qDduCswSEsHK0GaJtvjfpJUW5EpAMvm9opqy8z/rj3qAhl9dfkYCE0bSPrOOUQ874Xk3rJAUqV/f7zjMb8s0Y3Vq+rPyyj8j18gGqn9WPEKGo4VljGIP4SLgscXZOhDXLMYCbVKxYAXSVsHIw+o1CO2D9pczvgasRVxFEG5JB+wPG3WoUEnMJaQSjua0MBglCE9BDTG3cDQ9rB1CBQDTrb+fVkmtvf1WKVQE8ASpJBqv/cLaQ3f8JiU0B1RfrKXJo+nwDAuUk4azuTqiD49aJrRmizGfqx48rKfMyw5I8ArKx8kGbb7wNNE1Jz1AN/VEggvwLqp4dU9NTQCsEk9RqQPZpNcaAdIBxoooNmKfQsCLvwSAKRgU8q3orCFP0OwR9egWbKIM1PAQaQ8U1FDupodMf36aahllnmioNsvLxd/J2TLVN4gNhgSxZll6T4zl5Eb0Fa+7GZJZhCTTZH2PJNm2JhuCFMlGFoPqbVI6TOSeNQopnQi0mOl0w92XyAPKsU07NGu2IkGzLcY4qEG6woa7EFBQNzUqToco7jefQYaGmI1x9GAbmGfTYoQ7XZFJyrsax+bK7opuMbHcywNQZ3fKSUkK3UUOVbvxM9KSl3kY1I39WGWSSriq5RxnJu6R/rU2BsNRcrOxxmRK5gkeXIqay1/SQpmdvb8WQg4IGaUSeDck+826uoRITOm+Nw5BuSy/KEoeOLNeS5PZUbopl/4uRhM2tIogGBI30SvAsPpahvP6KSoG3YBGEq9kmFP2usx/aAPmoHjSScRvhceJjDEwmlkG5z8YxljTPiHzN57I1OiX1S8ahFj3F1DlhHaDFlCmxuECPAg4EWxkFpCMMW1gshVVhnZM6VCLBGaTrTcaw64dNyFFHOTHaoKSLBcINTfU32+i0kGQSZxo7wlnGEizxfQbqPW9jM5IT+f21NoUNoZBW8NvsezcMflCfOCWCyAaVVL8LqX0BL2Pb8PzxSKs+s61ywzds4mCEHzHJsD0JNX9AckXCKKTnjYXTKFzXWjptU1GCC5wQuVeur516O6LWv3z90qksOVwyarB7/VLRQ3q4UIgaCdwrFfPmdrv3jiJOYnfaBrp+4myOsRDCwe9yxEKlGIUActhlfEDaNCyADtewoqWO3sZA3bfAkWW407nSL0sglRiczX3hfuWvBvqJrqVD/+jFe5chbZ7hYNYG4fYxTQnxwTYF9Kr6yM3trb+HNICfGH8gdCvAnQZ7DfL4vWomEiDD5FLgLWxwmVkSiVPSg3ZEXd5KsUEOPu2sif9k/Ex4NC8nqrb3MCY3Eff5vIWI4oYxqXeyiL2QkE8OLt6Gs2OXkul+v/UYzC6h2vN+vljsDDHGV0t0YWb2Ocsy4NXhh1kZ/suWWH2UIk5+/1m2oJ5zV/q5t7meeXur+jceuy84h7Bd1P9iNCLfK7rhWEheJrLkMCS+bOatmrNs5l028y6beQOasWzmtQNZNvMGY1w28y6beZfNvPM381q8zKnbe58Yf/irhNLucR4z9SnQoBzOasvd/On8c0Ww3VtXT+Y+X6KkG0KJ2EVxJ762xEJY4zSNYcO/N3pRBEcMOYVC7qLy1BRHu4/kJEp/7fiaO5g1dXtgxlJYJSpgTySzx9fHGC48kkR7EjF9YL1s0VD2GewOcCZ3MfaFd8xbqsieCDrFnnw/pwqPY6kqnN31wUKSu5HtmAQ4Bb4iYp1jIR05mXvGMsB9R2/s0PquO7WudU0E6vF410ejd6u+67OfkLC624FZeqPa/drkrEDNQ7pvtN/IHZYIc0BboMCxrGqFNHuF63H1gAOhKrBVwv3Ur1yCJiTD3Abm0LVX2pfV9Kq4IA4J46mo5N4anyQ5VJ8VmEuSlBnmlRDQDgvEEr0BPbUg1G9KnBcWlMPBxJf22xAu5LpmRR31OqZv7r1rAKp2ah6o46E+61uVedjj5IAUixE8XS5EDFbiKgwSvslwa/i1olNbAqRdcQDyCNQijoQV+7VkNgTdnIZFL9Rzp9686G40pVBwrRX2i24cyf1uX7RL7H6OOUicYomDzH5EHxUlhIVgCdGjzBORO69OfB3J3iWnz/DtIMQB95M/yNcBAlYqDjqBZkAY9Uv+pAnmmrOfpy6tE5exJokIRU87kuzqIfcJi27GsaJp8uDr6BVDfqsrhpgC8afdSxJxKeMrJX+VgHRymGyIchCYAcSSHGjToJBt1hmhDxHB3HxGHAoOQqGpq8m4BgRCH1n2COnagvFU40LD0yYX3wiBCxLfcn66vmrrzdTW41FX3MJDivdDXXxohHHcwYMag4eH6en6a0N5gu
jjdtivV7+M8DbDzzneu3GgTEcMy1my5SyZ44l9luyLsrfv+xjZsrfc9ix7y3tPvL3lyxbiHuBlC7Ed+LKF2LOFmIJUdhNtvObfXrXx3UAC5FHnaV202mwy57b1qEDMoXi+ufi02ZrXrZA7jqnIiZQvRyd3Vp20aehlv371BErz78tW/YkCWnbpd89AOG9hg76x0Ow4CNwHdY4T3B2ql3F2u8PjOr/d+jQldWZwjhm3Sa48wBOdxXfPCeMMxpigwB6OQlMkIT0dTUulXOXa450+a6DAmQO9ZTEGzC1oymD3BkVon4HaYPXgjM2cHHbB0u8yhb1EpNWzRKTd8z0p5LuLSN/EmtELWSUZwHqJRVGmFNt7UwX21JTa1kAR/SIoYZX1Iq+PLUtBPdgvskctZYbidbOjaw29jXTgQXdxN7m3aLh+7auGlVieBmuH7rDhlS8rVwJpD8oriegTgiNiKfAW1idbvaxABa+krs+Bxr2OapRo+LafE7UbZ0Y0rfkXtrbneiwlSY7eOe+qctJlkdMou+Rt1U2MffH9MiRzuAzItYLrVx+ZK7VDer7qHlMOt4zX9fCeqAys6TGtooen2/nHr2NqeUyq5BEZmbeGR2AFDw+kGdU7Qmp3hBvGlLodzqodx1n15Hod3kP+IbU6olTqmFqnIxYi7/H+6RU6Qo0zuDrHsbU5wrUaDnakasPEmhxxhpbwahyTa3Ecr0tLHY6jq3DEVWRY/Y2p1TdiqTK47sb0qhuTRWQjE1Zv4yi7sTmHY8U1jjl5HFhWo50O9zQJmpS8TB/Ke6jc9NpZ39PEmu8emdrKDETgzDAu/ts9Ta4VnBtFtneNGtu0H4xdiOdGN888nPgCrlZzY3JerxZznHFCH7tfrbeiWXD945zQbTS1f6lII4P2pCv0AiHO9F29ICcYwAjKs1iDvzFukxhkDUSyg7TM5pVINTIHLb0lbTDk8crSBoNDpkeyGSt+angmZRalYbe1lSIsJeSFHJJueLajQUS2qrPa6C7pmCUdMwZpSccs6ZiJiJZ0zJKOWdIxSzpmScdYMXir/1X8bbX/vBCm1P0bxGL9anvHTZLwn3D+sPRvNEWSIaCp0Rj7tBQIe05aYgIaTwfsI5rXI+yYfD2xYOmq4KDCFIVAFwvN58K4ZinqiKKaqAdBHSjF4NuQ8ra6lXitoHM6eLcWYxmfSQaI5/l0NhBBE8YAx8yUqctK3/UZv+z79I8uBzUQT3flOrHXrRMSyzLeAetih4V7t6C9Af1G+HYht83RjNCHuuLrBXrCROr/SOA5odh/hyLg1H0G3F49NxBlh1Azscv3wGNSEah7LxahEraDMr9HgKn4jFbCHlQNNcHM0t/vlYbQhxbVpa4yqZR2ybHYfWas+BknD2yzuUB/41yfBrsus+wCtf+tvx+qVj2Mt9pXI9CHS5YXGUhILzpJXGJKmbwpqWbB+AX65z9//USyDNIf6uavrB1lypmP0cLyevux66xDRde163iS2i+vv+raX6Ji6dF749SeBVLNDlJkZ3goJ9+5kJENiwWHRA0FH9F/r/4rBvIWS6BAfdjH4c3djumS+lnrkVVKPP1FUWMiqDd4VxvnR+sZNAp8ftyd2pq9+66TsAln9E92H8ulqahFcWgGqy/hLg26rHEMaPSXBecysNIxHMa6JLi9Z4Tw6UiggmWkR6k9dJEop3nGXSpdTqEipWIi0d1IPTASw+8Ua1GKAmg6OITuc40OuJvphMaEiIpZbXQ7y9UFrS1pfk8QchirFizZITFI9DcQnrCwls1uRyks5LqxgGg4lNB1QfkGBi+pvYPAtxOxV5RH2aeA04xQN+cxm/ulJtCyxhsJvO1SGknC9FUMXDmBG0wyQxMh//H/6Q71Uigyts9nXlRhDIwdwShjY4EtNR+Cu9tw9vhkRVpxsYUj3YxXZCTB4bHgUTgaLojQDZvoSaQgCPeUX5oVK/3SYez219QcO9QfRAHJnANzsTB2ZUUcejMOvNLzwTJ4BQArUus1BtFBVXyGgMwTsJEGh5h1sGMmZvwpj1mBvS7FbGY70AfJS7hAG5wJUFF5SR8oe6LuflPSeqbwGumsxIxGecDHNxjGjPaN47enC7DbetjmYV9/dN0UeBoBNaOyaoOpLSV1viLYhsyfK4T74jp7PRZ7top5VuQ1Wn8ZMGPZ5SS606fYT2Wapm5UjDSukJPC0Yf4+8XoWgEDF0RIoPKRZWUea7rqyKKKbjN3oQ1nuf7lj2qYhB89cxp8K4ATNdUeCOdUCYHfKqCKhCNV6+s/YfFMzcNaG9C31jG1EdUyBk4SxlN9nw0ztOPwCxjHW1gnGXbccR/A/bYigjSRNjUwsCwUEnC5LDTJMMlPZqZJhr8LY73+7dJjqVVj1nMY/ExoCmkjFjerOo24ru1nRt+46bL3TUeL3z+U3DQBO22cJCDEOu9vhJ/A4SdNAikSdh4n7GnXv12uXB3LPqXO6j2R6hoS+/11g4/DEwMK2dW1ldmOCbk+DUdF2sV2Ytg1jXEdHh1X6e2ES+s9mPXa+k2ztn4NVE1Oq9Xq2CX1mOjmRZpNRtKddYiJteVmw3sxRNvPxUGsnGVNsC7jM38oOGGy0ITqzlrGuAdpRp2b3eEN33V2sACObqo/bi3VoULzmM+Fy9+H46FS/XcqNnavK/ycSmj15Z76yryaE7rf67m6A6d3eXGW9c/5oYN1pHvwjS6xpLgps2zfcBuVprHdSB9Y+6tkBxflzhtaDJpRBpfTrQbe1Fj/V2MdWxPsS2kKgooDoRvGc0jRhx3mqZ6gBKQ/+A4Qxgk7DhvqXDqX/VuaJ7AwW1j1HPXqBfpDNfUP1dY/VGP/cMwfloYf0T5NTouyMj9cFBkBgSQbBqr+P92BrRoOSBIr41JT83WUM4eotzUiT0IlK4UEfpw7fkUlcIozdHXd2n0tBDs3+Fa9MCssbhrVEEO/fLl194OWpaOFxzB0BBgZw+n6HmeYJm6JBvD7zHCKfq7ptFblYDqnnzcNG9Bow0K65SoYP74tVxUFF/qGgYrbnDYxZoQVh3/YSPTmHfuIP1K0v5FSe3O65YUjhh6JJWzKLF4g0FCMFgn4hDaWSRo6Onc7Q4TttfnoA6gJvZo3b+sW9L3FM4QmB8Jrfa6jopMT+7NGiZDGnT3wEV1CRM8Qpgw2WfgANuA6h/3UejZCA8PZeVnqbpVsgH0Zam6UGwCsHfT03S+xxrvqIhmjOHaUQc9yfAhNmfEsF0G0G37dG12Xq9H7jxuQj+pBI1/d1ejLregWcsutD6aL9MortS8XgB8+ywXgYXjGC9dH3XxzuONmlkMyJ1UylIqxw8bKbbmRuX4C+99yI/NUAS03MnfPm7yR+WvgPcxnuPb4747LjvtQznEldOXk1WD+HQAA///hGQsL" } diff --git a/metricbeat/module/kubernetes/state_service/_meta/fields.yml b/metricbeat/module/kubernetes/state_service/_meta/fields.yml index 
405c846acdd9..4a32c52dc771 100644 --- a/metricbeat/module/kubernetes/state_service/_meta/fields.yml +++ b/metricbeat/module/kubernetes/state_service/_meta/fields.yml @@ -28,10 +28,6 @@ - name: ingress_hostname type: ip description: Ingress Hostname - - name: labels.* - type: object - object_type: keyword - description: Labels for service - name: created type: date description: Service creation date diff --git a/metricbeat/module/system/_meta/config.reference.yml b/metricbeat/module/system/_meta/config.reference.yml index c22fc6ac2794..a6023d23ddc7 100644 --- a/metricbeat/module/system/_meta/config.reference.yml +++ b/metricbeat/module/system/_meta/config.reference.yml @@ -14,12 +14,13 @@ #- fsstat # File system summary metrics #- raid # Raid #- socket # Sockets and connection info (linux only) + #- service # systemd service information enabled: true period: 10s processes: ['.*'] # Configure the metric types that are included by these metricsets. - cpu.metrics: ["percentages"] # The other available options are normalized_percentages and ticks. + cpu.metrics: ["percentages","normalized_percentages"] # The other available option is ticks. core.metrics: ["percentages"] # The other available option is ticks. # A list of filesystem types to ignore. The filesystem metricset will not @@ -71,3 +72,6 @@ # Diskio configurations #diskio.include_devices: [] + + # Filter systemd services by status or sub-status + #service.state_filter: [] diff --git a/metricbeat/module/system/cpu/config.go b/metricbeat/module/system/cpu/config.go index 169d59ca1f9f..291ee7963ef0 100644 --- a/metricbeat/module/system/cpu/config.go +++ b/metricbeat/module/system/cpu/config.go @@ -62,5 +62,5 @@ func (c Config) Validate() error { } var defaultConfig = Config{ - Metrics: []string{percentages}, + Metrics: []string{percentages, normalizedPercentages}, } diff --git a/metricbeat/module/windows/perfmon/doc.go b/metricbeat/module/windows/perfmon/doc.go index 66e0df7b5da5..bab02b5f1285 100644 --- a/metricbeat/module/windows/perfmon/doc.go +++ b/metricbeat/module/windows/perfmon/doc.go @@ -15,12 +15,4 @@ // specific language governing permissions and limitations // under the License. -// Package perfmon implements a Metricbeat metricset for reading Windows -// performance counters. 
package perfmon - -//go:generate go run mkpdh_defs.go -//go:generate go run ../run.go -cmd "go tool cgo -godefs defs_pdh_windows.go" -goarch amd64 -output defs_pdh_windows_amd64.go -//go:generate go run ../run.go -cmd "go tool cgo -godefs defs_pdh_windows.go" -goarch 386 -output defs_pdh_windows_386.go -//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zpdh_windows.go pdh_windows.go -//go:generate gofmt -w defs_pdh_windows_amd64.go defs_pdh_windows_386.go zpdh_windows.go diff --git a/metricbeat/module/windows/perfmon/perfmon.go b/metricbeat/module/windows/perfmon/perfmon.go index 05210599854e..a3d048ebcf73 100644 --- a/metricbeat/module/windows/perfmon/perfmon.go +++ b/metricbeat/module/windows/perfmon/perfmon.go @@ -90,7 +90,7 @@ func New(base mb.BaseMetricSet) (mb.MetricSet, error) { // Fetch fetches events and reports them upstream func (m *MetricSet) Fetch(report mb.ReporterV2) error { // if the ignore_non_existent_counters flag is set and no valid counter paths are found the Read func will still execute, a check is done before - if len(m.reader.query.counters) == 0 { + if len(m.reader.query.Counters) == 0 { return errors.New("no counters to read") } diff --git a/metricbeat/module/windows/perfmon/pdh_integration_windows_test.go b/metricbeat/module/windows/perfmon/perfmon_test.go similarity index 94% rename from metricbeat/module/windows/perfmon/pdh_integration_windows_test.go rename to metricbeat/module/windows/perfmon/perfmon_test.go index 1fe12ab20d1f..77599ce82c4d 100644 --- a/metricbeat/module/windows/perfmon/pdh_integration_windows_test.go +++ b/metricbeat/module/windows/perfmon/perfmon_test.go @@ -24,6 +24,8 @@ import ( "testing" "time" + "github.com/elastic/beats/metricbeat/helper/windows/pdh" + "github.com/elastic/beats/libbeat/common" "github.com/pkg/errors" @@ -106,7 +108,7 @@ func TestCounterWithNoInstanceName(t *testing.T) { } func TestQuery(t *testing.T) { - var q Query + var q pdh.Query err := q.Open() if err != nil { t.Fatal(err) @@ -117,7 +119,7 @@ func TestQuery(t *testing.T) { if err != nil { t.Fatal(err) } - err = q.AddCounter(path[0], counter, false) + err = q.AddCounter(path[0], counter.InstanceName, counter.Format, false) if err != nil { t.Fatal(err) } @@ -177,7 +179,7 @@ func TestNonExistingCounter(t *testing.T) { config.CounterConfig[0].Format = "float" handle, err := NewReader(config) if assert.Error(t, err) { - assert.EqualValues(t, PDH_CSTATUS_NO_COUNTER, errors.Cause(err)) + assert.EqualValues(t, pdh.PDH_CSTATUS_NO_COUNTER, errors.Cause(err)) } if handle != nil { @@ -200,7 +202,7 @@ func TestIgnoreNonExistentCounter(t *testing.T) { values, err := handle.Read() if assert.Error(t, err) { - assert.EqualValues(t, PDH_NO_DATA, errors.Cause(err)) + assert.EqualValues(t, pdh.PDH_NO_DATA, errors.Cause(err)) } if handle != nil { @@ -221,7 +223,7 @@ func TestNonExistingObject(t *testing.T) { config.CounterConfig[0].Format = "float" handle, err := NewReader(config) if assert.Error(t, err) { - assert.EqualValues(t, PDH_CSTATUS_NO_OBJECT, errors.Cause(err)) + assert.EqualValues(t, pdh.PDH_CSTATUS_NO_OBJECT, errors.Cause(err)) } if handle != nil { @@ -231,7 +233,7 @@ func TestNonExistingObject(t *testing.T) { } func TestLongOutputFormat(t *testing.T) { - var query Query + var query pdh.Query err := query.Open() if err != nil { t.Fatal(err) @@ -243,8 +245,8 @@ func TestLongOutputFormat(t *testing.T) { t.Fatal(err) } assert.NotZero(t, len(path)) - err = query.AddCounter(path[0], counter, false) - if err != nil && err != PDH_NO_MORE_DATA { + err = 
query.AddCounter(path[0], counter.InstanceName, counter.Format, false) + if err != nil && err != pdh.PDH_NO_MORE_DATA { t.Fatal(err) } @@ -271,7 +273,7 @@ func TestLongOutputFormat(t *testing.T) { } func TestFloatOutputFormat(t *testing.T) { - var query Query + var query pdh.Query err := query.Open() if err != nil { t.Fatal(err) @@ -283,8 +285,8 @@ func TestFloatOutputFormat(t *testing.T) { t.Fatal(err) } assert.NotZero(t, len(path)) - err = query.AddCounter(path[0], counter, false) - if err != nil && err != PDH_NO_MORE_DATA { + err = query.AddCounter(path[0], counter.InstanceName, counter.Format, false) + if err != nil && err != pdh.PDH_NO_MORE_DATA { t.Fatal(err) } diff --git a/metricbeat/module/windows/perfmon/reader.go b/metricbeat/module/windows/perfmon/reader.go index b7837862ffca..51eef6b7e957 100644 --- a/metricbeat/module/windows/perfmon/reader.go +++ b/metricbeat/module/windows/perfmon/reader.go @@ -24,6 +24,8 @@ import ( "strconv" "strings" + "github.com/elastic/beats/metricbeat/helper/windows/pdh" + "github.com/pkg/errors" "github.com/elastic/beats/libbeat/common" @@ -37,7 +39,7 @@ var ( // Reader will contain the config options type Reader struct { - query Query // PDH Query + query pdh.Query // PDH Query instanceLabel map[string]string // Mapping of counter path to key used for the label (e.g. processor.name) measurement map[string]string // Mapping of counter path to key used for the value (e.g. processor.cpu_time). executed bool // Indicates if the query has been executed. @@ -47,7 +49,7 @@ type Reader struct { // NewReader creates a new instance of Reader. func NewReader(config Config) (*Reader, error) { - var query Query + var query pdh.Query if err := query.Open(); err != nil { return nil, err } @@ -63,8 +65,8 @@ func NewReader(config Config) (*Reader, error) { if err != nil { if config.IgnoreNECounters { switch err { - case PDH_CSTATUS_NO_COUNTER, PDH_CSTATUS_NO_COUNTERNAME, - PDH_CSTATUS_NO_INSTANCE, PDH_CSTATUS_NO_OBJECT: + case pdh.PDH_CSTATUS_NO_COUNTER, pdh.PDH_CSTATUS_NO_COUNTERNAME, + pdh.PDH_CSTATUS_NO_INSTANCE, pdh.PDH_CSTATUS_NO_OBJECT: r.log.Infow("Ignoring non existent counter", "error", err, logp.Namespace("perfmon"), "query", counter.Query) continue @@ -85,7 +87,7 @@ func NewReader(config Config) (*Reader, error) { return nil, errors.Errorf(`failed to expand counter (query="%v")`, counter.Query) } for _, v := range childQueries { - if err := query.AddCounter(v, counter, len(childQueries) > 1); err != nil { + if err := query.AddCounter(v, counter.InstanceName, counter.Format, len(childQueries) > 1); err != nil { return nil, errors.Wrapf(err, `failed to add counter (query="%v")`, counter.Query) } r.instanceLabel[v] = counter.InstanceLabel @@ -104,8 +106,8 @@ func (r *Reader) RefreshCounterPaths() error { if err != nil { if r.config.IgnoreNECounters { switch err { - case PDH_CSTATUS_NO_COUNTER, PDH_CSTATUS_NO_COUNTERNAME, - PDH_CSTATUS_NO_INSTANCE, PDH_CSTATUS_NO_OBJECT: + case pdh.PDH_CSTATUS_NO_COUNTER, pdh.PDH_CSTATUS_NO_COUNTERNAME, + pdh.PDH_CSTATUS_NO_INSTANCE, pdh.PDH_CSTATUS_NO_OBJECT: r.log.Infow("Ignoring non existent counter", "error", err, logp.Namespace("perfmon"), "query", counter.Query) continue @@ -118,7 +120,7 @@ func (r *Reader) RefreshCounterPaths() error { // there are cases when the ExpandWildCardPath will retrieve a successful status but not an expanded query so we need to check for the size of the list if err == nil && len(childQueries) >= 1 && !strings.Contains(childQueries[0], "*") { for _, v := range childQueries { - if err := 
r.query.AddCounter(v, counter, len(childQueries) > 1); err != nil { + if err := r.query.AddCounter(v, counter.InstanceName, counter.Format, len(childQueries) > 1); err != nil { return errors.Wrapf(err, "failed to add counter (query='%v')", counter.Query) } r.instanceLabel[v] = counter.InstanceLabel diff --git a/metricbeat/module/windows/perfmon/reader_test.go b/metricbeat/module/windows/perfmon/reader_test.go index 8b4a921b3c37..dd5a58aa5225 100644 --- a/metricbeat/module/windows/perfmon/reader_test.go +++ b/metricbeat/module/windows/perfmon/reader_test.go @@ -25,6 +25,8 @@ import ( "github.com/stretchr/testify/assert" ) +var validQuery = `\Processor Information(_Total)\% Processor Time` + // TestNewReaderWhenQueryPathNotProvided will check for invalid/no query. func TestNewReaderWhenQueryPathNotProvided(t *testing.T) { counter := CounterConfig{Format: "float", InstanceName: "TestInstanceName"} @@ -51,9 +53,9 @@ func TestNewReaderWithValidQueryPath(t *testing.T) { assert.Nil(t, err) assert.NotNil(t, reader) assert.NotNil(t, reader.query) - assert.NotNil(t, reader.query.handle) - assert.NotNil(t, reader.query.counters) - assert.NotZero(t, len(reader.query.counters)) + assert.NotNil(t, reader.query.Handle) + assert.NotNil(t, reader.query.Counters) + assert.NotZero(t, len(reader.query.Counters)) defer reader.Close() } diff --git a/metricbeat/module/windows/service/doc.go b/metricbeat/module/windows/service/doc.go index 608c063433b7..1bcbc98e250c 100644 --- a/metricbeat/module/windows/service/doc.go +++ b/metricbeat/module/windows/service/doc.go @@ -18,7 +18,7 @@ // Package service implements a Metricbeat metricset for reading Windows Services package service -//go:generate go run ../run.go -cmd "go tool cgo -godefs defs_service_windows.go" -goarch amd64 -output defs_service_windows_amd64.go -//go:generate go run ../run.go -cmd "go tool cgo -godefs defs_service_windows.go" -goarch 386 -output defs_service_windows_386.go +//go:generate go run ../../../helper/windows/run.go -cmd "go tool cgo -godefs defs_service_windows.go" -goarch amd64 -output defs_service_windows_amd64.go +//go:generate go run ../../../helper/windows/run.go -cmd "go tool cgo -godefs defs_service_windows.go" -goarch 386 -output defs_service_windows_386.go //go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zservice_windows.go service_windows.go //go:generate gofmt -w defs_service_windows_amd64.go defs_service_windows_386.go diff --git a/metricbeat/tests/system/test_base.py b/metricbeat/tests/system/test_base.py index 59fb16c7c869..5e52872bb7ba 100644 --- a/metricbeat/tests/system/test_base.py +++ b/metricbeat/tests/system/test_base.py @@ -71,7 +71,7 @@ def test_dashboards(self): ) exit_code = self.run_beat(extra_args=["setup", "--dashboards"]) - assert exit_code == 0 + assert exit_code == 0, 'Error output: ' + self.get_log() assert self.log_contains("Kibana dashboards successfully loaded.") @unittest.skipUnless(INTEGRATION_TESTS, "integration test") diff --git a/packetbeat/docs/fields.asciidoc b/packetbeat/docs/fields.asciidoc index b36866472621..ccb26248d48a 100644 --- a/packetbeat/docs/fields.asciidoc +++ b/packetbeat/docs/fields.asciidoc @@ -9407,7 +9407,7 @@ type: keyword The list of compression methods the client supports. See https://www.iana.org/assignments/comp-meth-ids/comp-meth-ids.xhtml -type: array +type: keyword -- @@ -9538,7 +9538,7 @@ The hello extensions provided by the server. 
-- Negotiated application layer protocol -type: array +type: keyword -- @@ -9658,7 +9658,7 @@ type: keyword -- Subject Alternative Names for this certificate. -type: array +type: keyword -- @@ -9858,7 +9858,7 @@ type: keyword -- Subject Alternative Names for this certificate. -type: array +type: keyword -- diff --git a/packetbeat/packetbeat.reference.yml b/packetbeat/packetbeat.reference.yml index 86ceb94d6c4b..bffddad50c75 100644 --- a/packetbeat/packetbeat.reference.yml +++ b/packetbeat/packetbeat.reference.yml @@ -1757,6 +1757,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/packetbeat/protos/tls/_meta/fields.yml b/packetbeat/protos/tls/_meta/fields.yml index 1537beed8387..89f0c0c7b6f2 100644 --- a/packetbeat/protos/tls/_meta/fields.yml +++ b/packetbeat/protos/tls/_meta/fields.yml @@ -44,7 +44,7 @@ connection with the client. - name: supported_compression_methods - type: array + type: keyword description: > The list of compression methods the client supports. See https://www.iana.org/assignments/comp-meth-ids/comp-meth-ids.xhtml @@ -123,7 +123,7 @@ description: The hello extensions provided by the server. fields: - name: application_layer_protocol_negotiation - type: array + type: keyword description: Negotiated application layer protocol - name: session_ticket @@ -185,7 +185,7 @@ The algorithm used for the certificate's signature. - name: alternative_names - type: array + type: keyword description: Subject Alternative Names for this certificate. - name: subject @@ -281,7 +281,7 @@ The algorithm used for the certificate's signature. - name: alternative_names - type: array + type: keyword description: Subject Alternative Names for this certificate. - name: subject diff --git a/packetbeat/protos/tls/fields.go b/packetbeat/protos/tls/fields.go index a750dfdaf939..f25aabd7bf5e 100644 --- a/packetbeat/protos/tls/fields.go +++ b/packetbeat/protos/tls/fields.go @@ -32,5 +32,5 @@ func init() { // AssetTls returns asset data. // This is the base64 encoded gzipped contents of protos/tls. 
func AssetTls() string { - return "eJzsWktv20YQvvtXDHxpAthyi6KH+lDAkH0wECRBnaC9ESvuSJx6ucvsDiUzv77YJSmt+JDlZ5NUOlkiOfPN+5s1T+EWq3Ng5RKJLEihPAJgYoXncHzZ/ASf3t0cHwFIdKmlgsnoc/jjCAAgvuXUFZjSnFLAJWqGOaGSbnIEzV/n4YlT0CLHoDN8B+CqwHNYWFMWzS/x/fEzEcb203/YfyTORak4aQTBXCiH0fWugo2KJVpHRkdXWh23WK2MlVtXBjyy+XzKsBUHZg6cofcSFNawSY2C0qGcbD2DdyIvguv9jb9Mfj0+GoBo0ZV50JnkyJmRzwH2ugbo0AW8mXAwQ9S1MpQn4WqpJVpVkV5Arbm2AT5oBDPvSDwmeQxzY4PRrdzrSzAWjpnSW+TN5fo74B2j9vdNhuxOFaHmJEXLPssEY2LxS4mOccgFM2MUCr2/C/7KkDO0jR/sEm1ww1pHuFCDADYgSs5QcwACxA5V1wOl854S7TMR8F32ZaiUGbCnm+NDWQz3ZPLuBLnXQ7BHVs8qWGWUZrG3VuQydMBmQF5q8rzUtRdlab3HOCPXZsyWp2LzmusJ9W14ooWfNX0pEXSZz3w2GCDp4zyvtirEp24w0ViLrjBakl4M2qc1pl4brIhjt4ybVhaFsYwySU1e2MbQuuLciLXCWlE9LpqKHPtQRsqa8nZxEBtUbjIg5gYRMubCnZ+drVarCQktJsYuzoRztNA5anZnXv6pF3xKsvNtcpdxrsb8sW4LY8b3S6NnvDc0VFYkzSftkiRKn7RxXLqShgttK2ShXyT+S0Ja+nQeqj24Nzt7wN810cmMYy/edb0UwxBFoRrdiRIV2qSty0TjwjA9G6yhZPKfFm6E5DQgWXeIraQiBytSKlS98dOkVxQwUPP1uHhRM1AvOGsbXFvztd4ToPk6dU78PBMaMC+4Asd2uAsEaAaEXPoh4LCtprqNBLFut+nrntA0334tPKf5TRR9Z2/1AWeCnxC7tQGhWl8F/pVS/tYUpqVdIkxtVbBZWFFkFby5mk7fQhou7EAEG+DdLrHLWFpowaXFRKiFscRZ/ioGr/XCRm8dt1xUMEMfJSANkhbEQo1Z3ErZnZKYJoUhzS6ZG5sL/i9C+uZq+hYCCmhATOC67r8YGs2YiRikbT3ZS/BUaCiEvSexk1KHm2TyGvZH0yvAXaFFUDhnaGH4NP0ofEeZoeBBptlMq++caQb7yYUdpKFjMW/sC7wOt3s5GS0ydLwW36vxhv9r45eSFHGE3HEEcbMnkI6zKHh5B41VmA5TvZfwZ5/jrRF0TJ9bk4fvnh4OSFtzp4dZ+8OS9lcjqXV8HkNSn8wOh5eMnhnvG2EoY5WwTQO/DZ732fcOT8y0NqVOsW4qosP4As/x47ONxYi07QjBhVqJynXZ4TdE86JARUzP+6OmCvJ/Ov+bFvjDzv/+SdpeLGAL6HTz8MgeHbptdErWm8ptx3oUrVBmoHNvIfz7t59/b2LbytoxmCwJldTj4ymzKQzZjWt+co3oZjKNAtCGkxnOje1nbK1ddqPUU33pQ1HLiI/gojiRC4xmKRT1SztGIuY86oY9gQQRIzjwrqCh7aKFUJQzRWlyi9Vmf3oJNrQW3uGQ2wGs0XhNQ6SyPvuGP28uTuDy5gKMhavp5c3FPsY5+joW7vvT+4a+YsuLY4hjGd5fSV/Npb2SaLGMohWK0WrBtMSkPvUaxrrHsedNOfsHU4aLjUh470UOxnvHoWwQ83hy1+Kw6Bk46jXl7gLoSrmf2KWm1GyHmdkD58+0FgWpkbvHnrELoenrsx0ofojkBRV7axcqKTU9C038rIkD4ye9pWInljD9dDpMNR4I4GMjy7cRi4t2ASHdxngnktTkudGhYJ4DjK8SDyQzjoOGdqmiaNTvKJ0YmjKpUMTPkqPvGlmjtUrOlaPza49SvdJMXLUHDK70i4yWoW2FfwMeKvZQsYeKfWrFdk4jX2wbaQ+0DtvIYRs5bCPjxh22kcM2cuA2B25z4DaHbeTR3jhU7KFiv8uKHd9GkjQTNPRGeH/cb5eCfyxwIls6XpOP9p9L+70N/HI49nwrWSi0nHhFbkD1g98zv9A1XEiNZkG6ft+3frMjqAqCA0hcoq2aHy2mSEuUk6N/AwAA//9yUhuG" + return 
"eJzsWs1u20YQvvspBr40AWy5RdFDfShgyD4YCJKgTtDeiBV3JE693GV2h5KZpy92SUor/shKLLtJKp0skZz55v+bNc/hHqtLYOUSiSxIoTwBYGKFl3B63fwEH97cnZ4ASHSppYLJ6Ev44wQAIL7l3BWY0pxSwCVqhjmhkm5yAs1fl+GJc9Aix6AzfAfgqsBLWFhTFs0v8f3xMxHG9tN/2H8kzkWpOGkEwVwoh9H1roKNiiVaR0ZHV1od91itjJVbVwY8svl8yLAVB2YOnKH3EhTWsEmNgtKhnGw9gw8iL4Lr/Y2/TH49PRmAaNGVedCZ5MiZkYcAe1sDdOgC3kw4mCHqWhnKs3C11BKtqkgvoNZc2wDvNIKZdySekjyFubHB6Fbu7TUYC6dM6T3y5nL9HfCBUfv7JkN2p4pQc5KiZZ9lgjGx+KlExzjkgpkxCoXe3wV/ZcgZ2sYPdok2uGGtI1yoQQAbECVnqDkAAWKHquuB0nlPifaZCPgu+zJUygzY083xoSyGRzJ5d4I86iHYI6tnFawySrPYWytyGTpgMyAvNXle6tqLsrTeY5yRazNmy1Oxec31hPo2PNHCj5o+lQi6zGc+GwyQ9HGeV1sV4lM3mGisRVcYLUkvBu3TGlOvDVbEsVvGTSuLwlhGmaQmL2xjaF1x7jniqcixD2akrilwF4exweUmA2LuECFjLtzlxcVqtZqQ0GJi7OJCOEcLnaNmd+Hln3vB5yQ73yYPGedqzCPrxjBmfr84esZ7Q0NtRdJ82i5JovRpG0emK2m41LaCFjpG4r8kpKVP6KHqezxiPeBvmuhkxrEX77peimGIolCN7kSJCm3SVmaicWGYDgZrKJn8p4UbITkPSNY9YiupyMGKlAp1b/w86ZUFDFR9PTCe1QzUC87aFtdWfa33DGi+Tp0zP9GEBswLrsCxHe4DAZoBIZd+DDhsq6luJEGs2236uis07bdfC4c0v4mi7+2tPuBM8BNitzYgVOuLwL9Ryt+awrS0S4SprQo2CyuKrIJXN9Ppa0jDhR2IYAO82yV2GUsLLbi0mAi1MJY4y1/E4LVe2Oit45aLCmboowSkQdKCWKgxi1spu1MS06QwpNklc2Nzwf9FSF/dTF9DQAENiAnc1v0XQ6MZMxGDtK0newmeCg2FsI8kdlLqcJNMXsL+aHoFuCu0CArnDC0Mn6bvhe8oMxQ8yDWbafWdc81gP7mwhTSELGaOfYG34XYvJ6NFho7X4ns13mwA2vi1JEUcoXccQdxsCqTjLApe3kFkFabDZO85/NnneGsEHdPn1uThu6eHA9LW3OnLrP1hafuLkdQ6Pl9DUl+OHb5txKGMlcI2Efw2mN5H3z08NdPalDrFuq2IDucLTMcP0DYaI9K2YwRXaiUq1+WH3xDRiwIVcT3vj5osyP8pA2ia4A/LAPqnaXvxgC2g083DI5t06LfRSVlvLrc966uIhTIDvXsL4d+//fx7E9tW1o7RZEmopB4gT5lOYcxuXPOTa0Q3s2kUgDaczHBubD9ja+2yG6We6msfilpGfAwXxYlc4DRLoahf2jESMedRN+wJJIgYwYEPBQ3tFy2EopwpSpN7rDYb1HPwobXwDovcDmCNxmsaopX1+Tf8eXd1Btd3V2As3Eyv7672Mc7R57FwP57ed/QZW2YcQxzL8P5S+mIu7ZVEi2UUrVCMVgumJSb1udcTsN6Vs38wZbjaCIW3XuhgxHcczQYxX0/wWhwWPQtHvabdXQBdKY+Tu9SUmm11iAk0rUVBauTuwWfsQmj6fDDa+C6SF1TsrV2opNR0EKL4URMH1k96S8VOLGH+6XSYbHwhgPeNLN9ILC7aJYR0G+OdSFKT50aHkjkIkxd5AJIZx0FDu1hRNOx3lE4MTZlUKOKD5OibRtZorZJz5egE26NUbzQTV+0hgyv9KqNlaFzhn4HHij1W7LFin1qxnRPJZ9tH2kOt4z5y3EeO+8i4ccd95LiPHPeRI7s5spvjPvIEbxwr9lix32XFju8jSZoJGnozXFgrqh1biX8ssCJbOl7Tj/YfTPu9Ffx8OPZ8O1kotJx4RW5A9Re/b36la7iQGs2CdP3eb/1+R1AVBAeQuERbNT9aTJGWKCcn/wYAAP//nnkfHg==" } diff --git a/testing/environments/snapshot.yml b/testing/environments/snapshot.yml index 3ee259b0400a..16c20266a6e6 100644 --- a/testing/environments/snapshot.yml +++ b/testing/environments/snapshot.yml @@ -17,7 +17,7 @@ services: - "indices.id_field_data.enabled=true" logstash: - image: docker.elastic.co/logstash/logstash:8.0.0-SNAPSHOT + image: docker.elastic.co/logstash/logstash@sha256:e01cf165142edf8d67485115b938c94deeda66153e9516aa2ce69ee417c5fc33 healthcheck: test: ["CMD", "curl", "-f", "http://localhost:9600/_node/stats"] retries: 600 diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/CONTRIBUTING.md b/vendor/github.com/eclipse/paho.mqtt.golang/CONTRIBUTING.md new file mode 100644 index 000000000000..9791dc60318d --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/CONTRIBUTING.md @@ -0,0 +1,56 @@ +Contributing to Paho +==================== + +Thanks for your interest in this project. + +Project description: +-------------------- + +The Paho project has been created to provide scalable open-source implementations of open and standard messaging protocols aimed at new, existing, and emerging applications for Machine-to-Machine (M2M) and Internet of Things (IoT). +Paho reflects the inherent physical and cost constraints of device connectivity. 
Its objectives include effective levels of decoupling between devices and applications, designed to keep markets open and encourage the rapid growth of scalable Web and Enterprise middleware and applications. Paho is being kicked off with MQTT publish/subscribe client implementations for use on embedded platforms, along with corresponding server support as determined by the community. + +- https://projects.eclipse.org/projects/technology.paho + +Developer resources: +-------------------- + +Information regarding source code management, builds, coding standards, and more. + +- https://projects.eclipse.org/projects/technology.paho/developer + +Contributor License Agreement: +------------------------------ + +Before your contribution can be accepted by the project, you need to create and electronically sign the Eclipse Foundation Contributor License Agreement (CLA). + +- http://www.eclipse.org/legal/CLA.php + +Contributing Code: +------------------ + +The Go client is developed in Github, see their documentation on the process of forking and pull requests; https://help.github.com/categories/collaborating-on-projects-using-pull-requests/ + +Git commit messages should follow the style described here; + +http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html + +Contact: +-------- + +Contact the project developers via the project's "dev" list. + +- https://dev.eclipse.org/mailman/listinfo/paho-dev + +Search for bugs: +---------------- + +This project uses Github issues to track ongoing development and issues. + +- https://github.com/eclipse/paho.mqtt.golang/issues + +Create a new bug: +----------------- + +Be sure to search for existing bugs before you create another one. Remember that contributions are always welcome! + +- https://github.com/eclipse/paho.mqtt.golang/issues diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/DISTRIBUTION b/vendor/github.com/eclipse/paho.mqtt.golang/DISTRIBUTION new file mode 100644 index 000000000000..34e49731daa6 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/DISTRIBUTION @@ -0,0 +1,15 @@ + + +Eclipse Distribution License - v 1.0 + +Copyright (c) 2007, Eclipse Foundation, Inc. and its licensors. + +All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. + Neither the name of the Eclipse Foundation, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/LICENSE b/vendor/github.com/eclipse/paho.mqtt.golang/LICENSE new file mode 100644 index 000000000000..aa7cc810fa19 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/LICENSE @@ -0,0 +1,87 @@ +Eclipse Public License - v 1.0 + +THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. + +1. DEFINITIONS + +"Contribution" means: + +a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and + +b) in the case of each subsequent Contributor: + +i) changes to the Program, and + +ii) additions to the Program; + +where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program. + +"Contributor" means any person or entity that distributes the Program. + +"Licensed Patents" mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. + +"Program" means the Contributions distributed in accordance with this Agreement. + +"Recipient" means anyone who receives the Program under this Agreement, including all Contributors. + +2. GRANT OF RIGHTS + +a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form. + +b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. 
+ +c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. + +d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. + +3. REQUIREMENTS + +A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that: + +a) it complies with the terms and conditions of this Agreement; and + +b) its license agreement: + +i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; + +ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; + +iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and + +iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange. + +When the Program is made available in source code form: + +a) it must be made available under this Agreement; and + +b) a copy of this Agreement must be included with each copy of the Program. + +Contributors may not remove or alter any copyright notices contained within the Program. + +Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. + +4. COMMERCIAL DISTRIBUTION + +Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. 
The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. + +For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. + +5. NO WARRANTY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. + +6. DISCLAIMER OF LIABILITY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +7. GENERAL + +If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. + +If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. + +All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. 
If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. + +Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. + +This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. \ No newline at end of file diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/README.md b/vendor/github.com/eclipse/paho.mqtt.golang/README.md new file mode 100644 index 000000000000..8ba3f2039f81 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/README.md @@ -0,0 +1,71 @@ + +[![GoDoc](https://godoc.org/github.com/eclipse/paho.mqtt.golang?status.svg)](https://godoc.org/github.com/eclipse/paho.mqtt.golang) +[![Go Report Card](https://goreportcard.com/badge/github.com/eclipse/paho.mqtt.golang)](https://goreportcard.com/report/github.com/eclipse/paho.mqtt.golang) + +Eclipse Paho MQTT Go client +=========================== + + +This repository contains the source code for the [Eclipse Paho](http://eclipse.org/paho) MQTT Go client library. + +This code builds a library which enables applications to connect to an [MQTT](http://mqtt.org) broker to publish messages, and to subscribe to topics and receive published messages. + +This library supports a fully asynchronous mode of operation.
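A minimal connect, subscribe, and publish sketch is shown below, ahead of the installation notes. Treat it as an illustration rather than a verbatim sample from the repository: the broker URL, client ID, and topic are placeholders, and the options constructors (`NewClientOptions`, `AddBroker`, `SetClientID`) and the `Token`/`Message` helpers live in files of this package that are not part of this hunk.

```
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Optional: route client error tracing to the standard log package
	// (see the "Runtime tracing" note further down in this README).
	mqtt.ERROR = log.New(os.Stderr, "[mqtt] ", 0)

	// Broker URL, client ID, and topic are placeholders.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://localhost:1883").
		SetClientID("paho-quickstart")

	c := mqtt.NewClient(opts)
	if token := c.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Print every message received on the example topic.
	if token := c.Subscribe("example/topic", 0, func(_ mqtt.Client, msg mqtt.Message) {
		fmt.Printf("%s: %s\n", msg.Topic(), msg.Payload())
	}); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	if token := c.Publish("example/topic", 0, false, "hello"); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	time.Sleep(time.Second) // give the subscription callback time to run
	c.Disconnect(250)       // allow up to 250 ms for in-flight work to finish
}
```

The `Disconnect(250)` call gives in-flight work up to 250 ms to complete, matching the `Disconnect(quiesce uint)` contract in `client.go` later in this diff; assigning a `*log.Logger` to `mqtt.ERROR` follows the runtime tracing note below.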
+ + +Installation and Build +---------------------- + +This client is designed to work with the standard Go tools, so installation is as easy as: + +``` +go get github.com/eclipse/paho.mqtt.golang +``` + +The client depends on Google's [proxy](https://godoc.org/golang.org/x/net/proxy) package and the [websockets](https://godoc.org/github.com/gorilla/websocket) package, +both easily installed with the commands: + +``` +go get github.com/gorilla/websocket +go get golang.org/x/net/proxy +``` + + +Usage and API +------------- + +Detailed API documentation is available by using the godoc tool, or can be browsed online +using the [godoc.org](http://godoc.org/github.com/eclipse/paho.mqtt.golang) service. + +Make use of the library by importing it in your Go client source code. For example, +``` +import "github.com/eclipse/paho.mqtt.golang" +``` + +Samples are available in the `cmd` directory for reference. + +Note: + +The library also supports using MQTT over websockets by using the `ws://` (insecure) or `wss://` (secure) prefix in the URI. If the client is running behind a corporate http/https proxy, then the environment variables `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` are taken into account when establishing the connection. + + +Runtime tracing +--------------- + +Tracing is enabled by assigning loggers (from the Go log package) to the logging endpoints ERROR, CRITICAL, WARN and DEBUG. + + +Reporting bugs +-------------- + +Please report bugs by raising issues for this project on GitHub: https://github.com/eclipse/paho.mqtt.golang/issues + + +More information +---------------- + +Discussion of the Paho clients takes place on the [Eclipse paho-dev mailing list](https://dev.eclipse.org/mailman/listinfo/paho-dev). + +General questions about the MQTT protocol are discussed in the [MQTT Google Group](https://groups.google.com/forum/?hl=en-US&fromgroups#!forum/mqtt). + +There is much more information available via the [MQTT community site](http://mqtt.org). diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/about.html b/vendor/github.com/eclipse/paho.mqtt.golang/about.html new file mode 100644 index 000000000000..b183f417abb5 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/about.html @@ -0,0 +1,41 @@ + + + +About + + +

About This Content

+ +

December 9, 2013

+

License

+ +

The Eclipse Foundation makes available all content in this plug-in ("Content"). Unless otherwise +indicated below, the Content is provided to you under the terms and conditions of the +Eclipse Public License Version 1.0 ("EPL") and Eclipse Distribution License Version 1.0 ("EDL"). +A copy of the EPL is available at +http://www.eclipse.org/legal/epl-v10.html +and a copy of the EDL is available at +http://www.eclipse.org/org/documents/edl-v10.php. +For purposes of the EPL, "Program" will mean the Content.

+ +

If you did not receive this Content directly from the Eclipse Foundation, the Content is +being redistributed by another party ("Redistributor") and different terms and conditions may +apply to your use of any object code in the Content. Check the Redistributor's license that was +provided with the Content. If no such license exists, contact the Redistributor. Unless otherwise +indicated below, the terms and conditions of the EPL still apply to any source code in the Content +and such source code may be obtained at http://www.eclipse.org.

+ + +

Third Party Content

+

The Content includes items that have been sourced from third parties as set out below. If you + did not receive this Content directly from the Eclipse Foundation, the following is provided + for informational purposes only, and you should look to the Redistributor's license for + terms and conditions of use.

+

+ None

+

+

+ + + + diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/client.go b/vendor/github.com/eclipse/paho.mqtt.golang/client.go new file mode 100644 index 000000000000..a8743a238450 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/client.go @@ -0,0 +1,965 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +// Portions copyright © 2018 TIBCO Software Inc. + +// Package mqtt provides an MQTT v3.1.1 client library. +package mqtt + +import ( + "bytes" + "errors" + "fmt" + "net" + "strings" + "sync" + "sync/atomic" + "time" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +const ( + disconnected uint32 = iota + connecting + reconnecting + connected +) + +// Client is the interface definition for a Client as used by this +// library, the interface is primarily to allow mocking tests. +// +// It is an MQTT v3.1.1 client for communicating +// with an MQTT server using non-blocking methods that allow work +// to be done in the background. +// An application may connect to an MQTT server using: +// A plain TCP socket +// A secure SSL/TLS socket +// A websocket +// To enable ensured message delivery at Quality of Service (QoS) levels +// described in the MQTT spec, a message persistence mechanism must be +// used. This is done by providing a type which implements the Store +// interface. For convenience, FileStore and MemoryStore are provided +// implementations that should be sufficient for most use cases. More +// information can be found in their respective documentation. +// Numerous connection options may be specified by configuring a +// and then supplying a ClientOptions type. +type Client interface { + // IsConnected returns a bool signifying whether + // the client is connected or not. + IsConnected() bool + // IsConnectionOpen return a bool signifying whether the client has an active + // connection to mqtt broker, i.e not in disconnected or reconnect mode + IsConnectionOpen() bool + // Connect will create a connection to the message broker, by default + // it will attempt to connect at v3.1.1 and auto retry at v3.1 if that + // fails + Connect() Token + // Disconnect will end the connection with the server, but not before waiting + // the specified number of milliseconds to wait for existing work to be + // completed. + Disconnect(quiesce uint) + // Publish will publish a message with the specified QoS and content + // to the specified topic. + // Returns a token to track delivery of the message to the broker + Publish(topic string, qos byte, retained bool, payload interface{}) Token + // Subscribe starts a new subscription. Provide a MessageHandler to be executed when + // a message is published on the topic provided, or nil for the default handler + Subscribe(topic string, qos byte, callback MessageHandler) Token + // SubscribeMultiple starts a new subscription for multiple topics. Provide a MessageHandler to + // be executed when a message is published on one of the topics provided, or nil for the + // default handler + SubscribeMultiple(filters map[string]byte, callback MessageHandler) Token + // Unsubscribe will end the subscription from each of the topics provided. 
+ // Messages published to those topics from other clients will no longer be + // received. + Unsubscribe(topics ...string) Token + // AddRoute allows you to add a handler for messages on a specific topic + // without making a subscription. For example having a different handler + // for parts of a wildcard subscription + AddRoute(topic string, callback MessageHandler) + // OptionsReader returns a ClientOptionsReader which is a copy of the clientoptions + // in use by the client. + OptionsReader() ClientOptionsReader +} + +// client implements the Client interface +type client struct { + lastSent atomic.Value + lastReceived atomic.Value + pingOutstanding int32 + status uint32 + sync.RWMutex + messageIds + conn net.Conn + ibound chan packets.ControlPacket + obound chan *PacketAndToken + oboundP chan *PacketAndToken + msgRouter *router + stopRouter chan bool + incomingPubChan chan *packets.PublishPacket + errors chan error + stop chan struct{} + persist Store + options ClientOptions + optionsMu sync.Mutex // Protects the options in a few limited cases where needed for testing + workers sync.WaitGroup +} + +// NewClient will create an MQTT v3.1.1 client with all of the options specified +// in the provided ClientOptions. The client must have the Connect method called +// on it before it may be used. This is to make sure resources (such as a net +// connection) are created before the application is actually ready. +func NewClient(o *ClientOptions) Client { + c := &client{} + c.options = *o + + if c.options.Store == nil { + c.options.Store = NewMemoryStore() + } + switch c.options.ProtocolVersion { + case 3, 4: + c.options.protocolVersionExplicit = true + case 0x83, 0x84: + c.options.protocolVersionExplicit = true + default: + c.options.ProtocolVersion = 4 + c.options.protocolVersionExplicit = false + } + c.persist = c.options.Store + c.status = disconnected + c.messageIds = messageIds{index: make(map[uint16]tokenCompletor)} + c.msgRouter, c.stopRouter = newRouter() + c.msgRouter.setDefaultHandler(c.options.DefaultPublishHandler) + return c +} + +// AddRoute allows you to add a handler for messages on a specific topic +// without making a subscription. For example having a different handler +// for parts of a wildcard subscription +func (c *client) AddRoute(topic string, callback MessageHandler) { + if callback != nil { + c.msgRouter.addRoute(topic, callback) + } +} + +// IsConnected returns a bool signifying whether +// the client is connected or not. 
+// connected means that the connection is up now OR it will +// be established/reestablished automatically when possible +func (c *client) IsConnected() bool { + c.RLock() + defer c.RUnlock() + status := atomic.LoadUint32(&c.status) + switch { + case status == connected: + return true + case c.options.AutoReconnect && status > connecting: + return true + case c.options.ConnectRetry && status == connecting: + return true + default: + return false + } +} + +// IsConnectionOpen return a bool signifying whether the client has an active +// connection to mqtt broker, i.e not in disconnected or reconnect mode +func (c *client) IsConnectionOpen() bool { + c.RLock() + defer c.RUnlock() + status := atomic.LoadUint32(&c.status) + switch { + case status == connected: + return true + default: + return false + } +} + +func (c *client) connectionStatus() uint32 { + c.RLock() + defer c.RUnlock() + status := atomic.LoadUint32(&c.status) + return status +} + +func (c *client) setConnected(status uint32) { + c.Lock() + defer c.Unlock() + atomic.StoreUint32(&c.status, uint32(status)) +} + +//ErrNotConnected is the error returned from function calls that are +//made when the client is not connected to a broker +var ErrNotConnected = errors.New("Not Connected") + +// Connect will create a connection to the message broker, by default +// it will attempt to connect at v3.1.1 and auto retry at v3.1 if that +// fails +func (c *client) Connect() Token { + var err error + t := newToken(packets.Connect).(*ConnectToken) + DEBUG.Println(CLI, "Connect()") + + if c.options.ConnectRetry && atomic.LoadUint32(&c.status) != disconnected { + // if in any state other than disconnected and ConnectRetry is + // enabled then the connection will come up automatically + // client can assume connection is up + WARN.Println(CLI, "Connect() called but not disconnected") + t.returnCode = packets.Accepted + t.flowComplete() + return t + } + + c.obound = make(chan *PacketAndToken) + c.oboundP = make(chan *PacketAndToken) + c.ibound = make(chan packets.ControlPacket) + + c.persist.Open() + if c.options.ConnectRetry { + c.reserveStoredPublishIDs() // Reserve IDs to allow publish before connect complete + } + c.setConnected(connecting) + + go func() { + c.errors = make(chan error, 1) + c.stop = make(chan struct{}) + + var rc byte + protocolVersion := c.options.ProtocolVersion + + if len(c.options.Servers) == 0 { + t.setError(fmt.Errorf("No servers defined to connect to")) + return + } + + RETRYCONN: + c.optionsMu.Lock() // Protect c.options.Servers so that servers can be added in test cases + brokers := c.options.Servers + c.optionsMu.Unlock() + + for _, broker := range brokers { + cm := newConnectMsgFromOptions(&c.options, broker) + c.options.ProtocolVersion = protocolVersion + CONN: + DEBUG.Println(CLI, "about to write new connect msg") + c.Lock() + c.conn, err = openConnection(broker, c.options.TLSConfig, c.options.ConnectTimeout, + c.options.HTTPHeaders) + c.Unlock() + if err == nil { + DEBUG.Println(CLI, "socket connected to broker") + switch c.options.ProtocolVersion { + case 3: + DEBUG.Println(CLI, "Using MQTT 3.1 protocol") + cm.ProtocolName = "MQIsdp" + cm.ProtocolVersion = 3 + case 0x83: + DEBUG.Println(CLI, "Using MQTT 3.1b protocol") + cm.ProtocolName = "MQIsdp" + cm.ProtocolVersion = 0x83 + case 0x84: + DEBUG.Println(CLI, "Using MQTT 3.1.1b protocol") + cm.ProtocolName = "MQTT" + cm.ProtocolVersion = 0x84 + default: + DEBUG.Println(CLI, "Using MQTT 3.1.1 protocol") + c.options.ProtocolVersion = 4 + cm.ProtocolName = "MQTT" + 
cm.ProtocolVersion = 4 + } + cm.Write(c.conn) + + rc, t.sessionPresent = c.connect() + if rc != packets.Accepted { + c.Lock() + if c.conn != nil { + c.conn.Close() + c.conn = nil + } + c.Unlock() + //if the protocol version was explicitly set don't do any fallback + if c.options.protocolVersionExplicit { + ERROR.Println(CLI, "Connecting to", broker, "CONNACK was not CONN_ACCEPTED, but rather", packets.ConnackReturnCodes[rc]) + continue + } + if c.options.ProtocolVersion == 4 { + DEBUG.Println(CLI, "Trying reconnect using MQTT 3.1 protocol") + c.options.ProtocolVersion = 3 + goto CONN + } + } + break + } else { + ERROR.Println(CLI, err.Error()) + WARN.Println(CLI, "failed to connect to broker, trying next") + rc = packets.ErrNetworkError + } + } + + if c.conn == nil { + if c.options.ConnectRetry { + DEBUG.Println(CLI, "Connect failed, sleeping for", int(c.options.ConnectRetryInterval.Seconds()), "seconds and will then retry") + time.Sleep(c.options.ConnectRetryInterval) + + if atomic.LoadUint32(&c.status) == connecting { + goto RETRYCONN + } + } + ERROR.Println(CLI, "Failed to connect to a broker") + c.setConnected(disconnected) + c.persist.Close() + t.returnCode = rc + if rc != packets.ErrNetworkError { + t.setError(packets.ConnErrors[rc]) + } else { + t.setError(fmt.Errorf("%s : %s", packets.ConnErrors[rc], err)) + } + return + } + + c.options.protocolVersionExplicit = true + + if c.options.KeepAlive != 0 { + atomic.StoreInt32(&c.pingOutstanding, 0) + c.lastReceived.Store(time.Now()) + c.lastSent.Store(time.Now()) + c.workers.Add(1) + go keepalive(c) + } + + c.incomingPubChan = make(chan *packets.PublishPacket) + c.msgRouter.matchAndDispatch(c.incomingPubChan, c.options.Order, c) + + c.setConnected(connected) + DEBUG.Println(CLI, "client is connected") + if c.options.OnConnect != nil { + go c.options.OnConnect(c) + } + + c.workers.Add(4) + go errorWatch(c) + go alllogic(c) + go outgoing(c) + go incoming(c) + + // Take care of any messages in the store + if !c.options.CleanSession { + c.workers.Add(1) // disconnect during resume can lead to reconnect being called before resume completes + c.resume(c.options.ResumeSubs) + } else { + c.persist.Reset() + } + + DEBUG.Println(CLI, "exit startClient") + t.flowComplete() + }() + return t +} + +// internal function used to reconnect the client when it loses its connection +func (c *client) reconnect() { + DEBUG.Println(CLI, "enter reconnect") + var ( + err error + + rc = byte(1) + sleep = time.Duration(1 * time.Second) + ) + + for rc != 0 && atomic.LoadUint32(&c.status) != disconnected { + if nil != c.options.OnReconnecting { + c.options.OnReconnecting(c, &c.options) + } + c.optionsMu.Lock() // Protect c.options.Servers so that servers can be added in test cases + brokers := c.options.Servers + c.optionsMu.Unlock() + for _, broker := range brokers { + cm := newConnectMsgFromOptions(&c.options, broker) + DEBUG.Println(CLI, "about to write new connect msg") + c.Lock() + c.conn, err = openConnection(broker, c.options.TLSConfig, c.options.ConnectTimeout, c.options.HTTPHeaders) + c.Unlock() + if err == nil { + DEBUG.Println(CLI, "socket connected to broker") + switch c.options.ProtocolVersion { + case 0x83: + DEBUG.Println(CLI, "Using MQTT 3.1b protocol") + cm.ProtocolName = "MQIsdp" + cm.ProtocolVersion = 0x83 + case 0x84: + DEBUG.Println(CLI, "Using MQTT 3.1.1b protocol") + cm.ProtocolName = "MQTT" + cm.ProtocolVersion = 0x84 + case 3: + DEBUG.Println(CLI, "Using MQTT 3.1 protocol") + cm.ProtocolName = "MQIsdp" + cm.ProtocolVersion = 3 + default: + 
DEBUG.Println(CLI, "Using MQTT 3.1.1 protocol") + cm.ProtocolName = "MQTT" + cm.ProtocolVersion = 4 + } + cm.Write(c.conn) + + rc, _ = c.connect() + if rc != packets.Accepted { + if c.conn != nil { + c.conn.Close() + c.conn = nil + } + //if the protocol version was explicitly set don't do any fallback + if c.options.protocolVersionExplicit { + ERROR.Println(CLI, "Connecting to", broker, "CONNACK was not Accepted, but rather", packets.ConnackReturnCodes[rc]) + continue + } + } + break + } else { + ERROR.Println(CLI, err.Error()) + WARN.Println(CLI, "failed to connect to broker, trying next") + rc = packets.ErrNetworkError + } + } + if rc != 0 { + DEBUG.Println(CLI, "Reconnect failed, sleeping for", int(sleep.Seconds()), "seconds") + time.Sleep(sleep) + if sleep < c.options.MaxReconnectInterval { + sleep *= 2 + } + + if sleep > c.options.MaxReconnectInterval { + sleep = c.options.MaxReconnectInterval + } + } + } + // Disconnect() must have been called while we were trying to reconnect. + if c.connectionStatus() == disconnected { + DEBUG.Println(CLI, "Client moved to disconnected state while reconnecting, abandoning reconnect") + return + } + + c.stop = make(chan struct{}) + + if c.options.KeepAlive != 0 { + atomic.StoreInt32(&c.pingOutstanding, 0) + c.lastReceived.Store(time.Now()) + c.lastSent.Store(time.Now()) + c.workers.Add(1) + go keepalive(c) + } + + c.setConnected(connected) + DEBUG.Println(CLI, "client is reconnected") + if c.options.OnConnect != nil { + go c.options.OnConnect(c) + } + + c.workers.Add(4) + go errorWatch(c) + go alllogic(c) + go outgoing(c) + go incoming(c) + + c.workers.Add(1) // disconnect during resume can lead to reconnect being called before resume completes + c.resume(c.options.ResumeSubs) +} + +// This function is only used for receiving a connack +// when the connection is first started. +// This prevents receiving incoming data while resume +// is in progress if clean session is false. +func (c *client) connect() (byte, bool) { + DEBUG.Println(NET, "connect started") + + ca, err := packets.ReadPacket(c.conn) + if err != nil { + ERROR.Println(NET, "connect got error", err) + return packets.ErrNetworkError, false + } + if ca == nil { + ERROR.Println(NET, "received nil packet") + return packets.ErrNetworkError, false + } + + msg, ok := ca.(*packets.ConnackPacket) + if !ok { + ERROR.Println(NET, "received msg that was not CONNACK") + return packets.ErrNetworkError, false + } + + DEBUG.Println(NET, "received connack") + return msg.ReturnCode, msg.SessionPresent +} + +// Disconnect will end the connection with the server, but not before waiting +// the specified number of milliseconds to wait for existing work to be +// completed. +func (c *client) Disconnect(quiesce uint) { + status := atomic.LoadUint32(&c.status) + if status == connected { + DEBUG.Println(CLI, "disconnecting") + c.setConnected(disconnected) + + dm := packets.NewControlPacket(packets.Disconnect).(*packets.DisconnectPacket) + dt := newToken(packets.Disconnect) + c.oboundP <- &PacketAndToken{p: dm, t: dt} + + // wait for work to finish, or quiesce time consumed + dt.WaitTimeout(time.Duration(quiesce) * time.Millisecond) + } else { + WARN.Println(CLI, "Disconnect() called but not connected (disconnected/reconnecting)") + c.setConnected(disconnected) + } + + c.disconnect() +} + +// ForceDisconnect will end the connection with the mqtt broker immediately. 
+func (c *client) forceDisconnect() { + if !c.IsConnected() { + WARN.Println(CLI, "already disconnected") + return + } + c.setConnected(disconnected) + c.conn.Close() + DEBUG.Println(CLI, "forcefully disconnecting") + c.disconnect() +} + +func (c *client) internalConnLost(err error) { + // Only do anything if this was called and we are still "connected" + // forceDisconnect can cause incoming/outgoing/alllogic to end with + // error from closing the socket but state will be "disconnected" + if c.IsConnected() { + c.closeStop() + c.conn.Close() + c.workers.Wait() + if c.options.CleanSession && !c.options.AutoReconnect { + c.messageIds.cleanUp() + } + if c.options.AutoReconnect { + c.setConnected(reconnecting) + go c.reconnect() + } else { + c.setConnected(disconnected) + } + if c.options.OnConnectionLost != nil { + go c.options.OnConnectionLost(c, err) + } + } +} + +func (c *client) closeStop() { + c.Lock() + defer c.Unlock() + select { + case <-c.stop: + DEBUG.Println("In disconnect and stop channel is already closed") + default: + if c.stop != nil { + close(c.stop) + } + } +} + +func (c *client) closeStopRouter() { + c.Lock() + defer c.Unlock() + select { + case <-c.stopRouter: + DEBUG.Println("In disconnect and stop channel is already closed") + default: + if c.stopRouter != nil { + close(c.stopRouter) + } + } +} + +func (c *client) closeConn() { + c.Lock() + defer c.Unlock() + if c.conn != nil { + c.conn.Close() + } +} + +func (c *client) disconnect() { + c.closeStop() + c.closeConn() + c.workers.Wait() + c.messageIds.cleanUp() + c.closeStopRouter() + DEBUG.Println(CLI, "disconnected") + c.persist.Close() +} + +// Publish will publish a message with the specified QoS and content +// to the specified topic. +// Returns a token to track delivery of the message to the broker +func (c *client) Publish(topic string, qos byte, retained bool, payload interface{}) Token { + token := newToken(packets.Publish).(*PublishToken) + DEBUG.Println(CLI, "enter Publish") + switch { + case !c.IsConnected(): + token.setError(ErrNotConnected) + return token + case c.connectionStatus() == reconnecting && qos == 0: + token.flowComplete() + return token + } + pub := packets.NewControlPacket(packets.Publish).(*packets.PublishPacket) + pub.Qos = qos + pub.TopicName = topic + pub.Retain = retained + switch p := payload.(type) { + case string: + pub.Payload = []byte(p) + case []byte: + pub.Payload = p + case bytes.Buffer: + pub.Payload = p.Bytes() + default: + token.setError(fmt.Errorf("Unknown payload type")) + return token + } + + if pub.Qos != 0 && pub.MessageID == 0 { + pub.MessageID = c.getID(token) + token.messageID = pub.MessageID + } + persistOutbound(c.persist, pub) + switch c.connectionStatus() { + case connecting: + DEBUG.Println(CLI, "storing publish message (connecting), topic:", topic) + case reconnecting: + DEBUG.Println(CLI, "storing publish message (reconnecting), topic:", topic) + default: + DEBUG.Println(CLI, "sending publish message, topic:", topic) + publishWaitTimeout := c.options.WriteTimeout + if publishWaitTimeout == 0 { + publishWaitTimeout = time.Second * 30 + } + select { + case c.obound <- &PacketAndToken{p: pub, t: token}: + case <-time.After(publishWaitTimeout): + token.setError(errors.New("publish was broken by timeout")) + } + } + return token +} + +// Subscribe starts a new subscription. Provide a MessageHandler to be executed when +// a message is published on the topic provided. 
+func (c *client) Subscribe(topic string, qos byte, callback MessageHandler) Token { + token := newToken(packets.Subscribe).(*SubscribeToken) + DEBUG.Println(CLI, "enter Subscribe") + if !c.IsConnected() { + token.setError(ErrNotConnected) + return token + } + if !c.IsConnectionOpen() { + switch { + case !c.options.ResumeSubs: + // if not connected and resumesubs not set this sub will be thrown away + token.setError(fmt.Errorf("not currently connected and ResumeSubs not set")) + return token + case c.options.CleanSession && c.connectionStatus() == reconnecting: + // if reconnecting and cleansession is true this sub will be thrown away + token.setError(fmt.Errorf("reconnecting state and cleansession is true")) + return token + } + } + sub := packets.NewControlPacket(packets.Subscribe).(*packets.SubscribePacket) + if err := validateTopicAndQos(topic, qos); err != nil { + token.setError(err) + return token + } + sub.Topics = append(sub.Topics, topic) + sub.Qoss = append(sub.Qoss, qos) + + if strings.HasPrefix(topic, "$share/") { + topic = strings.Join(strings.Split(topic, "/")[2:], "/") + } + + if strings.HasPrefix(topic, "$queue/") { + topic = strings.TrimPrefix(topic, "$queue/") + } + + if callback != nil { + c.msgRouter.addRoute(topic, callback) + } + + token.subs = append(token.subs, topic) + + if sub.MessageID == 0 { + sub.MessageID = c.getID(token) + token.messageID = sub.MessageID + } + DEBUG.Println(CLI, sub.String()) + + persistOutbound(c.persist, sub) + switch c.connectionStatus() { + case connecting: + DEBUG.Println(CLI, "storing subscribe message (connecting), topic:", topic) + case reconnecting: + DEBUG.Println(CLI, "storing subscribe message (reconnecting), topic:", topic) + default: + DEBUG.Println(CLI, "sending subscribe message, topic:", topic) + subscribeWaitTimeout := c.options.WriteTimeout + if subscribeWaitTimeout == 0 { + subscribeWaitTimeout = time.Second * 30 + } + select { + case c.oboundP <- &PacketAndToken{p: sub, t: token}: + case <-time.After(subscribeWaitTimeout): + token.setError(errors.New("subscribe was broken by timeout")) + } + } + DEBUG.Println(CLI, "exit Subscribe") + return token +} + +// SubscribeMultiple starts a new subscription for multiple topics. Provide a MessageHandler to +// be executed when a message is published on one of the topics provided. 
+func (c *client) SubscribeMultiple(filters map[string]byte, callback MessageHandler) Token { + var err error + token := newToken(packets.Subscribe).(*SubscribeToken) + DEBUG.Println(CLI, "enter SubscribeMultiple") + if !c.IsConnected() { + token.setError(ErrNotConnected) + return token + } + if !c.IsConnectionOpen() { + switch { + case !c.options.ResumeSubs: + // if not connected and resumesubs not set this sub will be thrown away + token.setError(fmt.Errorf("not currently connected and ResumeSubs not set")) + return token + case c.options.CleanSession && c.connectionStatus() == reconnecting: + // if reconnecting and cleansession is true this sub will be thrown away + token.setError(fmt.Errorf("reconnecting state and cleansession is true")) + return token + } + } + sub := packets.NewControlPacket(packets.Subscribe).(*packets.SubscribePacket) + if sub.Topics, sub.Qoss, err = validateSubscribeMap(filters); err != nil { + token.setError(err) + return token + } + + if callback != nil { + for topic := range filters { + c.msgRouter.addRoute(topic, callback) + } + } + token.subs = make([]string, len(sub.Topics)) + copy(token.subs, sub.Topics) + + if sub.MessageID == 0 { + sub.MessageID = c.getID(token) + token.messageID = sub.MessageID + } + persistOutbound(c.persist, sub) + switch c.connectionStatus() { + case connecting: + DEBUG.Println(CLI, "storing subscribe message (connecting), topics:", sub.Topics) + case reconnecting: + DEBUG.Println(CLI, "storing subscribe message (reconnecting), topics:", sub.Topics) + default: + DEBUG.Println(CLI, "sending subscribe message, topics:", sub.Topics) + subscribeWaitTimeout := c.options.WriteTimeout + if subscribeWaitTimeout == 0 { + subscribeWaitTimeout = time.Second * 30 + } + select { + case c.oboundP <- &PacketAndToken{p: sub, t: token}: + case <-time.After(subscribeWaitTimeout): + token.setError(errors.New("subscribe was broken by timeout")) + } + } + DEBUG.Println(CLI, "exit SubscribeMultiple") + return token +} + +// reserveStoredPublishIDs reserves the ids for publish packets in the persistent store to ensure these are not duplicated +func (c *client) reserveStoredPublishIDs() { + // The resume function sets the stored id for publish packets only (some other packets + // will get new ids in net code). 
This means that the only keys we need to ensure are + // unique are the publish ones (and these will completed/replaced in resume() ) + if !c.options.CleanSession { + storedKeys := c.persist.All() + for _, key := range storedKeys { + packet := c.persist.Get(key) + if packet == nil { + continue + } + switch packet.(type) { + case *packets.PublishPacket: + details := packet.Details() + token := &PlaceHolderToken{id: details.MessageID} + c.claimID(token, details.MessageID) + } + } + } +} + +// Load all stored messages and resend them +// Call this to ensure QOS > 1,2 even after an application crash +func (c *client) resume(subscription bool) { + defer c.workers.Done() // resume must complete before any attempt to reconnect is made + + storedKeys := c.persist.All() + for _, key := range storedKeys { + packet := c.persist.Get(key) + if packet == nil { + continue + } + details := packet.Details() + if isKeyOutbound(key) { + switch packet.(type) { + case *packets.SubscribePacket: + if subscription { + DEBUG.Println(STR, fmt.Sprintf("loaded pending subscribe (%d)", details.MessageID)) + subPacket := packet.(*packets.SubscribePacket) + token := newToken(packets.Subscribe).(*SubscribeToken) + token.messageID = details.MessageID + token.subs = append(token.subs, subPacket.Topics...) + c.claimID(token, details.MessageID) + select { + case c.oboundP <- &PacketAndToken{p: packet, t: token}: + case <-c.stop: + return + } + } + case *packets.UnsubscribePacket: + if subscription { + DEBUG.Println(STR, fmt.Sprintf("loaded pending unsubscribe (%d)", details.MessageID)) + token := newToken(packets.Unsubscribe).(*UnsubscribeToken) + select { + case c.oboundP <- &PacketAndToken{p: packet, t: token}: + case <-c.stop: + return + } + } + case *packets.PubrelPacket: + DEBUG.Println(STR, fmt.Sprintf("loaded pending pubrel (%d)", details.MessageID)) + select { + case c.oboundP <- &PacketAndToken{p: packet, t: nil}: + case <-c.stop: + return + } + case *packets.PublishPacket: + token := newToken(packets.Publish).(*PublishToken) + token.messageID = details.MessageID + c.claimID(token, details.MessageID) + DEBUG.Println(STR, fmt.Sprintf("loaded pending publish (%d)", details.MessageID)) + DEBUG.Println(STR, details) + select { + case c.obound <- &PacketAndToken{p: packet, t: token}: + case <-c.stop: + return + } + default: + ERROR.Println(STR, "invalid message type in store (discarded)") + c.persist.Del(key) + } + } else { + switch packet.(type) { + case *packets.PubrelPacket: + DEBUG.Println(STR, fmt.Sprintf("loaded pending incomming (%d)", details.MessageID)) + select { + case c.ibound <- packet: + case <-c.stop: + return + } + default: + ERROR.Println(STR, "invalid message type in store (discarded)") + c.persist.Del(key) + } + } + } +} + +// Unsubscribe will end the subscription from each of the topics provided. +// Messages published to those topics from other clients will no longer be +// received. 
+func (c *client) Unsubscribe(topics ...string) Token { + token := newToken(packets.Unsubscribe).(*UnsubscribeToken) + DEBUG.Println(CLI, "enter Unsubscribe") + if !c.IsConnected() { + token.setError(ErrNotConnected) + return token + } + if !c.IsConnectionOpen() { + switch { + case !c.options.ResumeSubs: + // if not connected and resumesubs not set this unsub will be thrown away + token.setError(fmt.Errorf("not currently connected and ResumeSubs not set")) + return token + case c.options.CleanSession && c.connectionStatus() == reconnecting: + // if reconnecting and cleansession is true this unsub will be thrown away + token.setError(fmt.Errorf("reconnecting state and cleansession is true")) + return token + } + } + unsub := packets.NewControlPacket(packets.Unsubscribe).(*packets.UnsubscribePacket) + unsub.Topics = make([]string, len(topics)) + copy(unsub.Topics, topics) + + if unsub.MessageID == 0 { + unsub.MessageID = c.getID(token) + token.messageID = unsub.MessageID + } + + persistOutbound(c.persist, unsub) + + switch c.connectionStatus() { + case connecting: + DEBUG.Println(CLI, "storing unsubscribe message (connecting), topics:", topics) + case reconnecting: + DEBUG.Println(CLI, "storing unsubscribe message (reconnecting), topics:", topics) + default: + DEBUG.Println(CLI, "sending unsubscribe message, topics:", topics) + subscribeWaitTimeout := c.options.WriteTimeout + if subscribeWaitTimeout == 0 { + subscribeWaitTimeout = time.Second * 30 + } + select { + case c.oboundP <- &PacketAndToken{p: unsub, t: token}: + for _, topic := range topics { + c.msgRouter.deleteRoute(topic) + } + case <-time.After(subscribeWaitTimeout): + token.setError(errors.New("unsubscribe was broken by timeout")) + } + } + + DEBUG.Println(CLI, "exit Unsubscribe") + return token +} + +// OptionsReader returns a ClientOptionsReader which is a copy of the clientoptions +// in use by the client. +func (c *client) OptionsReader() ClientOptionsReader { + r := ClientOptionsReader{options: &c.options} + return r +} + +//DefaultConnectionLostHandler is a definition of a function that simply +//reports to the DEBUG log the reason for the client losing a connection. +func DefaultConnectionLostHandler(client Client, reason error) { + DEBUG.Println("Connection lost:", reason.Error()) +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/components.go b/vendor/github.com/eclipse/paho.mqtt.golang/components.go new file mode 100644 index 000000000000..01f5fafdf8f6 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/components.go @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. 
This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +type component string + +// Component names for debug output +const ( + NET component = "[net] " + PNG component = "[pinger] " + CLI component = "[client] " + DEC component = "[decode] " + MES component = "[message] " + STR component = "[store] " + MID component = "[msgids] " + TST component = "[test] " + STA component = "[state] " + ERR component = "[error] " +) diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/edl-v10 b/vendor/github.com/eclipse/paho.mqtt.golang/edl-v10 new file mode 100644 index 000000000000..cf989f1456b9 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/edl-v10 @@ -0,0 +1,15 @@ + +Eclipse Distribution License - v 1.0 + +Copyright (c) 2007, Eclipse Foundation, Inc. and its licensors. + +All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. + Neither the name of the Eclipse Foundation, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/epl-v10 b/vendor/github.com/eclipse/paho.mqtt.golang/epl-v10 new file mode 100644 index 000000000000..79e486c3d2c2 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/epl-v10 @@ -0,0 +1,70 @@ +Eclipse Public License - v 1.0 + +THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. + +1. DEFINITIONS + +"Contribution" means: + +a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and +b) in the case of each subsequent Contributor: +i) changes to the Program, and +ii) additions to the Program; +where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. 
A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program. +"Contributor" means any person or entity that distributes the Program. + +"Licensed Patents" mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. + +"Program" means the Contributions distributed in accordance with this Agreement. + +"Recipient" means anyone who receives the Program under this Agreement, including all Contributors. + +2. GRANT OF RIGHTS + +a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form. +b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. +c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. +d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. +3. 
REQUIREMENTS + +A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that: + +a) it complies with the terms and conditions of this Agreement; and +b) its license agreement: +i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; +ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; +iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and +iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange. +When the Program is made available in source code form: + +a) it must be made available under this Agreement; and +b) a copy of this Agreement must be included with each copy of the Program. +Contributors may not remove or alter any copyright notices contained within the Program. + +Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. + +4. COMMERCIAL DISTRIBUTION + +Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. + +For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. 
Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. + +5. NO WARRANTY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. + +6. DISCLAIMER OF LIABILITY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +7. GENERAL + +If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. + +If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. + +All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. + +Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. 
The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. + +This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/filestore.go b/vendor/github.com/eclipse/paho.mqtt.golang/filestore.go new file mode 100644 index 000000000000..d15f6146c2e7 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/filestore.go @@ -0,0 +1,255 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "io/ioutil" + "os" + "path" + "sort" + "sync" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +const ( + msgExt = ".msg" + tmpExt = ".tmp" + corruptExt = ".CORRUPT" +) + +// FileStore implements the store interface using the filesystem to provide +// true persistence, even across client failure. This is designed to use a +// single directory per running client. If you are running multiple clients +// on the same filesystem, you will need to be careful to specify unique +// store directories for each. +type FileStore struct { + sync.RWMutex + directory string + opened bool +} + +// NewFileStore will create a new FileStore which stores its messages in the +// directory provided. +func NewFileStore(directory string) *FileStore { + store := &FileStore{ + directory: directory, + opened: false, + } + return store +} + +// Open will allow the FileStore to be used. +func (store *FileStore) Open() { + store.Lock() + defer store.Unlock() + // if no store directory was specified in ClientOpts, by default use the + // current working directory + if store.directory == "" { + store.directory, _ = os.Getwd() + } + + // if store dir exists, great, otherwise, create it + if !exists(store.directory) { + perms := os.FileMode(0770) + merr := os.MkdirAll(store.directory, perms) + chkerr(merr) + } + store.opened = true + DEBUG.Println(STR, "store is opened at", store.directory) +} + +// Close will disallow the FileStore from being used. +func (store *FileStore) Close() { + store.Lock() + defer store.Unlock() + store.opened = false + DEBUG.Println(STR, "store is closed") +} + +// Put will put a message into the store, associated with the provided +// key value. 
+func (store *FileStore) Put(key string, m packets.ControlPacket) { + store.Lock() + defer store.Unlock() + if !store.opened { + ERROR.Println(STR, "Trying to use file store, but not open") + return + } + full := fullpath(store.directory, key) + write(store.directory, key, m) + if !exists(full) { + ERROR.Println(STR, "file not created:", full) + } +} + +// Get will retrieve a message from the store, the one associated with +// the provided key value. +func (store *FileStore) Get(key string) packets.ControlPacket { + store.RLock() + defer store.RUnlock() + if !store.opened { + ERROR.Println(STR, "Trying to use file store, but not open") + return nil + } + filepath := fullpath(store.directory, key) + if !exists(filepath) { + return nil + } + mfile, oerr := os.Open(filepath) + chkerr(oerr) + msg, rerr := packets.ReadPacket(mfile) + chkerr(mfile.Close()) + + // Message was unreadable, return nil + if rerr != nil { + newpath := corruptpath(store.directory, key) + WARN.Println(STR, "corrupted file detected:", rerr.Error(), "archived at:", newpath) + os.Rename(filepath, newpath) + return nil + } + return msg +} + +// All will provide a list of all of the keys associated with messages +// currently residing in the FileStore. +func (store *FileStore) All() []string { + store.RLock() + defer store.RUnlock() + return store.all() +} + +// Del will remove the persisted message associated with the provided +// key from the FileStore. +func (store *FileStore) Del(key string) { + store.Lock() + defer store.Unlock() + store.del(key) +} + +// Reset will remove all persisted messages from the FileStore. +func (store *FileStore) Reset() { + store.Lock() + defer store.Unlock() + WARN.Println(STR, "FileStore Reset") + for _, key := range store.all() { + store.del(key) + } +} + +// lockless +func (store *FileStore) all() []string { + var err error + var keys []string + var files fileInfos + + if !store.opened { + ERROR.Println(STR, "Trying to use file store, but not open") + return nil + } + + files, err = ioutil.ReadDir(store.directory) + chkerr(err) + sort.Sort(files) + for _, f := range files { + DEBUG.Println(STR, "file in All():", f.Name()) + name := f.Name() + if name[len(name)-4:] != msgExt { + DEBUG.Println(STR, "skipping file, doesn't have right extension: ", name) + continue + } + key := name[0 : len(name)-4] // remove file extension + keys = append(keys, key) + } + return keys +} + +// lockless +func (store *FileStore) del(key string) { + if !store.opened { + ERROR.Println(STR, "Trying to use file store, but not open") + return + } + DEBUG.Println(STR, "store del filepath:", store.directory) + DEBUG.Println(STR, "store delete key:", key) + filepath := fullpath(store.directory, key) + DEBUG.Println(STR, "path of deletion:", filepath) + if !exists(filepath) { + WARN.Println(STR, "store could not delete key:", key) + return + } + rerr := os.Remove(filepath) + chkerr(rerr) + DEBUG.Println(STR, "del msg:", key) + if exists(filepath) { + ERROR.Println(STR, "file not deleted:", filepath) + } +} + +func fullpath(store string, key string) string { + p := path.Join(store, key+msgExt) + return p +} + +func tmppath(store string, key string) string { + p := path.Join(store, key+tmpExt) + return p +} + +func corruptpath(store string, key string) string { + p := path.Join(store, key+corruptExt) + return p +} + +// create file called "X.[messageid].tmp" located in the store +// the contents of the file is the bytes of the message, then +// rename it to "X.[messageid].msg", overwriting any existing +// message with 
the same id +// X will be 'i' for inbound messages, and O for outbound messages +func write(store, key string, m packets.ControlPacket) { + temppath := tmppath(store, key) + f, err := os.Create(temppath) + chkerr(err) + werr := m.Write(f) + chkerr(werr) + cerr := f.Close() + chkerr(cerr) + rerr := os.Rename(temppath, fullpath(store, key)) + chkerr(rerr) +} + +func exists(file string) bool { + if _, err := os.Stat(file); err != nil { + if os.IsNotExist(err) { + return false + } + chkerr(err) + } + return true +} + +type fileInfos []os.FileInfo + +func (f fileInfos) Len() int { + return len(f) +} + +func (f fileInfos) Swap(i, j int) { + f[i], f[j] = f[j], f[i] +} + +func (f fileInfos) Less(i, j int) bool { + return f[i].ModTime().Before(f[j].ModTime()) +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/memstore.go b/vendor/github.com/eclipse/paho.mqtt.golang/memstore.go new file mode 100644 index 000000000000..499c490bdbbd --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/memstore.go @@ -0,0 +1,138 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "sync" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +// MemoryStore implements the store interface to provide a "persistence" +// mechanism wholly stored in memory. This is only useful for +// as long as the client instance exists. +type MemoryStore struct { + sync.RWMutex + messages map[string]packets.ControlPacket + opened bool +} + +// NewMemoryStore returns a pointer to a new instance of +// MemoryStore, the instance is not initialized and ready to +// use until Open() has been called on it. +func NewMemoryStore() *MemoryStore { + store := &MemoryStore{ + messages: make(map[string]packets.ControlPacket), + opened: false, + } + return store +} + +// Open initializes a MemoryStore instance. +func (store *MemoryStore) Open() { + store.Lock() + defer store.Unlock() + store.opened = true + DEBUG.Println(STR, "memorystore initialized") +} + +// Put takes a key and a pointer to a Message and stores the +// message. +func (store *MemoryStore) Put(key string, message packets.ControlPacket) { + store.Lock() + defer store.Unlock() + if !store.opened { + ERROR.Println(STR, "Trying to use memory store, but not open") + return + } + store.messages[key] = message +} + +// Get takes a key and looks in the store for a matching Message +// returning either the Message pointer or nil. +func (store *MemoryStore) Get(key string) packets.ControlPacket { + store.RLock() + defer store.RUnlock() + if !store.opened { + ERROR.Println(STR, "Trying to use memory store, but not open") + return nil + } + mid := mIDFromKey(key) + m := store.messages[key] + if m == nil { + CRITICAL.Println(STR, "memorystore get: message", mid, "not found") + } else { + DEBUG.Println(STR, "memorystore get: message", mid, "found") + } + return m +} + +// All returns a slice of strings containing all the keys currently +// in the MemoryStore. 
+func (store *MemoryStore) All() []string { + store.RLock() + defer store.RUnlock() + if !store.opened { + ERROR.Println(STR, "Trying to use memory store, but not open") + return nil + } + keys := []string{} + for k := range store.messages { + keys = append(keys, k) + } + return keys +} + +// Del takes a key, searches the MemoryStore and if the key is found +// deletes the Message pointer associated with it. +func (store *MemoryStore) Del(key string) { + store.Lock() + defer store.Unlock() + if !store.opened { + ERROR.Println(STR, "Trying to use memory store, but not open") + return + } + mid := mIDFromKey(key) + m := store.messages[key] + if m == nil { + WARN.Println(STR, "memorystore del: message", mid, "not found") + } else { + delete(store.messages, key) + DEBUG.Println(STR, "memorystore del: message", mid, "was deleted") + } +} + +// Close will disallow modifications to the state of the store. +func (store *MemoryStore) Close() { + store.Lock() + defer store.Unlock() + if !store.opened { + ERROR.Println(STR, "Trying to close memory store, but not open") + return + } + store.opened = false + DEBUG.Println(STR, "memorystore closed") +} + +// Reset eliminates all persisted message data in the store. +func (store *MemoryStore) Reset() { + store.Lock() + defer store.Unlock() + if !store.opened { + ERROR.Println(STR, "Trying to reset memory store, but not open") + } + store.messages = make(map[string]packets.ControlPacket) + WARN.Println(STR, "memorystore wiped") +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/message.go b/vendor/github.com/eclipse/paho.mqtt.golang/message.go new file mode 100644 index 000000000000..903e5dcf5e70 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/message.go @@ -0,0 +1,127 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. 
This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "net/url" + + "github.com/eclipse/paho.mqtt.golang/packets" + "sync" +) + +// Message defines the externals that a message implementation must support +// these are received messages that are passed to the callbacks, not internal +// messages +type Message interface { + Duplicate() bool + Qos() byte + Retained() bool + Topic() string + MessageID() uint16 + Payload() []byte + Ack() +} + +type message struct { + duplicate bool + qos byte + retained bool + topic string + messageID uint16 + payload []byte + once sync.Once + ack func() +} + +func (m *message) Duplicate() bool { + return m.duplicate +} + +func (m *message) Qos() byte { + return m.qos +} + +func (m *message) Retained() bool { + return m.retained +} + +func (m *message) Topic() string { + return m.topic +} + +func (m *message) MessageID() uint16 { + return m.messageID +} + +func (m *message) Payload() []byte { + return m.payload +} + +func (m *message) Ack() { + m.once.Do(m.ack) +} + +func messageFromPublish(p *packets.PublishPacket, ack func()) Message { + return &message{ + duplicate: p.Dup, + qos: p.Qos, + retained: p.Retain, + topic: p.TopicName, + messageID: p.MessageID, + payload: p.Payload, + ack: ack, + } +} + +func newConnectMsgFromOptions(options *ClientOptions, broker *url.URL) *packets.ConnectPacket { + m := packets.NewControlPacket(packets.Connect).(*packets.ConnectPacket) + + m.CleanSession = options.CleanSession + m.WillFlag = options.WillEnabled + m.WillRetain = options.WillRetained + m.ClientIdentifier = options.ClientID + + if options.WillEnabled { + m.WillQos = options.WillQos + m.WillTopic = options.WillTopic + m.WillMessage = options.WillPayload + } + + username := options.Username + password := options.Password + if broker.User != nil { + username = broker.User.Username() + if pwd, ok := broker.User.Password(); ok { + password = pwd + } + } + if options.CredentialsProvider != nil { + username, password = options.CredentialsProvider() + } + + if username != "" { + m.UsernameFlag = true + m.Username = username + //mustn't have password without user as well + if password != "" { + m.PasswordFlag = true + m.Password = []byte(password) + } + } + + m.Keepalive = uint16(options.KeepAlive) + + return m +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/messageids.go b/vendor/github.com/eclipse/paho.mqtt.golang/messageids.go new file mode 100644 index 000000000000..e98cc24f846f --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/messageids.go @@ -0,0 +1,141 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "fmt" + "sync" + "time" +) + +// MId is 16 bit message id as specified by the MQTT spec. +// In general, these values should not be depended upon by +// the client application. 
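+// Ids are assigned internally by the client (see messageIds.getID below).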
+type MId uint16 + +type messageIds struct { + sync.RWMutex + index map[uint16]tokenCompletor +} + +const ( + midMin uint16 = 1 + midMax uint16 = 65535 +) + +func (mids *messageIds) cleanUp() { + mids.Lock() + for _, token := range mids.index { + switch token.(type) { + case *PublishToken: + token.setError(fmt.Errorf("Connection lost before Publish completed")) + case *SubscribeToken: + token.setError(fmt.Errorf("Connection lost before Subscribe completed")) + case *UnsubscribeToken: + token.setError(fmt.Errorf("Connection lost before Unsubscribe completed")) + case nil: + continue + } + token.flowComplete() + } + mids.index = make(map[uint16]tokenCompletor) + mids.Unlock() + DEBUG.Println(MID, "cleaned up") +} + +func (mids *messageIds) freeID(id uint16) { + mids.Lock() + delete(mids.index, id) + mids.Unlock() +} + +func (mids *messageIds) claimID(token tokenCompletor, id uint16) { + mids.Lock() + defer mids.Unlock() + if _, ok := mids.index[id]; !ok { + mids.index[id] = token + } else { + old := mids.index[id] + old.flowComplete() + mids.index[id] = token + } +} + +func (mids *messageIds) getID(t tokenCompletor) uint16 { + mids.Lock() + defer mids.Unlock() + for i := midMin; i < midMax; i++ { + if _, ok := mids.index[i]; !ok { + mids.index[i] = t + return i + } + } + return 0 +} + +func (mids *messageIds) getToken(id uint16) tokenCompletor { + mids.RLock() + defer mids.RUnlock() + if token, ok := mids.index[id]; ok { + return token + } + return &DummyToken{id: id} +} + +type DummyToken struct { + id uint16 +} + +func (d *DummyToken) Wait() bool { + return true +} + +func (d *DummyToken) WaitTimeout(t time.Duration) bool { + return true +} + +func (d *DummyToken) flowComplete() { + ERROR.Printf("A lookup for token %d returned nil\n", d.id) +} + +func (d *DummyToken) Error() error { + return nil +} + +func (p *DummyToken) setError(e error) {} + +// PlaceHolderToken does nothing and was implemented to allow a messageid to be reserved +// it differs from DummyToken in that calling flowComplete does not generate an error (it +// is expected that flowComplete will be called when the token is overwritten with a real token) +type PlaceHolderToken struct { + id uint16 +} + +func (p *PlaceHolderToken) Wait() bool { + return true +} + +func (p *PlaceHolderToken) WaitTimeout(t time.Duration) bool { + return true +} + +func (p *PlaceHolderToken) flowComplete() { +} + +func (p *PlaceHolderToken) Error() error { + return nil +} + +func (p *PlaceHolderToken) setError(e error) {} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/net.go b/vendor/github.com/eclipse/paho.mqtt.golang/net.go new file mode 100644 index 000000000000..804cb3fc485f --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/net.go @@ -0,0 +1,330 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. 
This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "crypto/tls" + "errors" + "net" + "net/http" + "net/url" + "os" + "reflect" + "sync/atomic" + "time" + + "github.com/eclipse/paho.mqtt.golang/packets" + "golang.org/x/net/proxy" +) + +func signalError(c chan<- error, err error) { + select { + case c <- err: + default: + } +} + +func openConnection(uri *url.URL, tlsc *tls.Config, timeout time.Duration, headers http.Header) (net.Conn, error) { + switch uri.Scheme { + case "ws": + conn, err := NewWebsocket(uri.String(), nil, timeout, headers) + return conn, err + case "wss": + conn, err := NewWebsocket(uri.String(), tlsc, timeout, headers) + return conn, err + case "tcp": + allProxy := os.Getenv("all_proxy") + if len(allProxy) == 0 { + conn, err := net.DialTimeout("tcp", uri.Host, timeout) + if err != nil { + return nil, err + } + return conn, nil + } + proxyDialer := proxy.FromEnvironment() + + conn, err := proxyDialer.Dial("tcp", uri.Host) + if err != nil { + return nil, err + } + return conn, nil + case "unix": + conn, err := net.DialTimeout("unix", uri.Host, timeout) + if err != nil { + return nil, err + } + return conn, nil + case "ssl": + fallthrough + case "tls": + fallthrough + case "tcps": + allProxy := os.Getenv("all_proxy") + if len(allProxy) == 0 { + conn, err := tls.DialWithDialer(&net.Dialer{Timeout: timeout}, "tcp", uri.Host, tlsc) + if err != nil { + return nil, err + } + return conn, nil + } + proxyDialer := proxy.FromEnvironment() + + conn, err := proxyDialer.Dial("tcp", uri.Host) + if err != nil { + return nil, err + } + + tlsConn := tls.Client(conn, tlsc) + + err = tlsConn.Handshake() + if err != nil { + conn.Close() + return nil, err + } + + return tlsConn, nil + } + return nil, errors.New("Unknown protocol") +} + +// actually read incoming messages off the wire +// send Message object into ibound channel +func incoming(c *client) { + var err error + var cp packets.ControlPacket + + defer c.workers.Done() + + DEBUG.Println(NET, "incoming started") + + for { + if cp, err = packets.ReadPacket(c.conn); err != nil { + break + } + DEBUG.Println(NET, "Received Message") + select { + case c.ibound <- cp: + // Notify keepalive logic that we recently received a packet + if c.options.KeepAlive != 0 { + c.lastReceived.Store(time.Now()) + } + case <-c.stop: + // This avoids a deadlock should a message arrive while shutting down. + // In that case the "reader" of c.ibound might already be gone + WARN.Println(NET, "incoming dropped a received message during shutdown") + break + } + } + // We received an error on read. 
+ // If disconnect is in progress, swallow error and return + select { + case <-c.stop: + DEBUG.Println(NET, "incoming stopped") + return + // Not trying to disconnect, send the error to the errors channel + default: + ERROR.Println(NET, "incoming stopped with error", err) + signalError(c.errors, err) + return + } +} + +// receive a Message object on obound, and then +// actually send outgoing message to the wire +func outgoing(c *client) { + defer c.workers.Done() + DEBUG.Println(NET, "outgoing started") + + for { + DEBUG.Println(NET, "outgoing waiting for an outbound message") + select { + case <-c.stop: + DEBUG.Println(NET, "outgoing stopped") + return + case pub := <-c.obound: + msg := pub.p.(*packets.PublishPacket) + + if c.options.WriteTimeout > 0 { + c.conn.SetWriteDeadline(time.Now().Add(c.options.WriteTimeout)) + } + + if err := msg.Write(c.conn); err != nil { + ERROR.Println(NET, "outgoing stopped with error", err) + pub.t.setError(err) + signalError(c.errors, err) + return + } + + if c.options.WriteTimeout > 0 { + // If we successfully wrote, we don't want the timeout to happen during an idle period + // so we reset it to infinite. + c.conn.SetWriteDeadline(time.Time{}) + } + + if msg.Qos == 0 { + pub.t.flowComplete() + } + DEBUG.Println(NET, "obound wrote msg, id:", msg.MessageID) + case msg := <-c.oboundP: + DEBUG.Println(NET, "obound priority msg to write, type", reflect.TypeOf(msg.p)) + if err := msg.p.Write(c.conn); err != nil { + ERROR.Println(NET, "outgoing stopped with error", err) + if msg.t != nil { + msg.t.setError(err) + } + signalError(c.errors, err) + return + } + switch msg.p.(type) { + case *packets.DisconnectPacket: + msg.t.(*DisconnectToken).flowComplete() + DEBUG.Println(NET, "outbound wrote disconnect, stopping") + return + } + } + // Reset ping timer after sending control packet. 
+ if c.options.KeepAlive != 0 { + c.lastSent.Store(time.Now()) + } + } +} + +// receive Message objects on ibound +// store messages if necessary +// send replies on obound +// delete messages from store if necessary +func alllogic(c *client) { + defer c.workers.Done() + DEBUG.Println(NET, "logic started") + + for { + DEBUG.Println(NET, "logic waiting for msg on ibound") + + select { + case msg := <-c.ibound: + DEBUG.Println(NET, "logic got msg on ibound") + persistInbound(c.persist, msg) + switch m := msg.(type) { + case *packets.PingrespPacket: + DEBUG.Println(NET, "received pingresp") + atomic.StoreInt32(&c.pingOutstanding, 0) + case *packets.SubackPacket: + DEBUG.Println(NET, "received suback, id:", m.MessageID) + token := c.getToken(m.MessageID) + switch t := token.(type) { + case *SubscribeToken: + DEBUG.Println(NET, "granted qoss", m.ReturnCodes) + for i, qos := range m.ReturnCodes { + t.subResult[t.subs[i]] = qos + } + } + token.flowComplete() + c.freeID(m.MessageID) + case *packets.UnsubackPacket: + DEBUG.Println(NET, "received unsuback, id:", m.MessageID) + c.getToken(m.MessageID).flowComplete() + c.freeID(m.MessageID) + case *packets.PublishPacket: + DEBUG.Println(NET, "received publish, msgId:", m.MessageID) + DEBUG.Println(NET, "putting msg on onPubChan") + switch m.Qos { + case 2: + c.incomingPubChan <- m + DEBUG.Println(NET, "done putting msg on incomingPubChan") + case 1: + c.incomingPubChan <- m + DEBUG.Println(NET, "done putting msg on incomingPubChan") + case 0: + select { + case c.incomingPubChan <- m: + case <-c.stop: + } + DEBUG.Println(NET, "done putting msg on incomingPubChan") + } + case *packets.PubackPacket: + DEBUG.Println(NET, "received puback, id:", m.MessageID) + // c.receipts.get(msg.MsgId()) <- Receipt{} + // c.receipts.end(msg.MsgId()) + c.getToken(m.MessageID).flowComplete() + c.freeID(m.MessageID) + case *packets.PubrecPacket: + DEBUG.Println(NET, "received pubrec, id:", m.MessageID) + prel := packets.NewControlPacket(packets.Pubrel).(*packets.PubrelPacket) + prel.MessageID = m.MessageID + select { + case c.oboundP <- &PacketAndToken{p: prel, t: nil}: + case <-c.stop: + } + case *packets.PubrelPacket: + DEBUG.Println(NET, "received pubrel, id:", m.MessageID) + pc := packets.NewControlPacket(packets.Pubcomp).(*packets.PubcompPacket) + pc.MessageID = m.MessageID + persistOutbound(c.persist, pc) + select { + case c.oboundP <- &PacketAndToken{p: pc, t: nil}: + case <-c.stop: + } + case *packets.PubcompPacket: + DEBUG.Println(NET, "received pubcomp, id:", m.MessageID) + c.getToken(m.MessageID).flowComplete() + c.freeID(m.MessageID) + } + case <-c.stop: + WARN.Println(NET, "logic stopped") + return + } + } +} + +func (c *client) ackFunc(packet *packets.PublishPacket) func() { + return func() { + switch packet.Qos { + case 2: + pr := packets.NewControlPacket(packets.Pubrec).(*packets.PubrecPacket) + pr.MessageID = packet.MessageID + DEBUG.Println(NET, "putting pubrec msg on obound") + select { + case c.oboundP <- &PacketAndToken{p: pr, t: nil}: + case <-c.stop: + } + DEBUG.Println(NET, "done putting pubrec msg on obound") + case 1: + pa := packets.NewControlPacket(packets.Puback).(*packets.PubackPacket) + pa.MessageID = packet.MessageID + DEBUG.Println(NET, "putting puback msg on obound") + persistOutbound(c.persist, pa) + select { + case c.oboundP <- &PacketAndToken{p: pa, t: nil}: + case <-c.stop: + } + DEBUG.Println(NET, "done putting puback msg on obound") + case 0: + // do nothing, since there is no need to send an ack packet back + } + } +} + +func 
errorWatch(c *client) { + defer c.workers.Done() + select { + case <-c.stop: + WARN.Println(NET, "errorWatch stopped") + return + case err := <-c.errors: + ERROR.Println(NET, "error triggered, stopping") + go c.internalConnLost(err) + return + } +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/notice.html b/vendor/github.com/eclipse/paho.mqtt.golang/notice.html new file mode 100644 index 000000000000..f19c483b9c83 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/notice.html @@ -0,0 +1,108 @@ + + + + + +Eclipse Foundation Software User Agreement + + + +

Eclipse Foundation Software User Agreement

February 1, 2011

Usage Of Content

THE ECLIPSE FOUNDATION MAKES AVAILABLE SOFTWARE, DOCUMENTATION, INFORMATION AND/OR OTHER MATERIALS FOR OPEN SOURCE PROJECTS (COLLECTIVELY "CONTENT"). USE OF THE CONTENT IS GOVERNED BY THE TERMS AND CONDITIONS OF THIS AGREEMENT AND/OR THE TERMS AND CONDITIONS OF LICENSE AGREEMENTS OR NOTICES INDICATED OR REFERENCED BELOW. BY USING THE CONTENT, YOU AGREE THAT YOUR USE OF THE CONTENT IS GOVERNED BY THIS AGREEMENT AND/OR THE TERMS AND CONDITIONS OF ANY APPLICABLE LICENSE AGREEMENTS OR NOTICES INDICATED OR REFERENCED BELOW. IF YOU DO NOT AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT AND THE TERMS AND CONDITIONS OF ANY APPLICABLE LICENSE AGREEMENTS OR NOTICES INDICATED OR REFERENCED BELOW, THEN YOU MAY NOT USE THE CONTENT.

Applicable Licenses

Unless otherwise indicated, all Content made available by the Eclipse Foundation is provided to you under the terms and conditions of the Eclipse Public License Version 1.0 ("EPL"). A copy of the EPL is provided with this Content and is also available at http://www.eclipse.org/legal/epl-v10.html. For purposes of the EPL, "Program" will mean the Content.

Content includes, but is not limited to, source code, object code, documentation and other files maintained in the Eclipse Foundation source code repository ("Repository") in software modules ("Modules") and made available as downloadable archives ("Downloads").

• Content may be structured and packaged into modules to facilitate delivering, extending, and upgrading the Content. Typical modules may include plug-ins ("Plug-ins"), plug-in fragments ("Fragments"), and features ("Features").
• Each Plug-in or Fragment may be packaged as a sub-directory or JAR (Java™ ARchive) in a directory named "plugins".
• A Feature is a bundle of one or more Plug-ins and/or Fragments and associated material. Each Feature may be packaged as a sub-directory in a directory named "features". Within a Feature, files named "feature.xml" may contain a list of the names and version numbers of the Plug-ins and/or Fragments associated with that Feature.
• Features may also include other Features ("Included Features"). Within a Feature, files named "feature.xml" may contain a list of the names and version numbers of Included Features.

The terms and conditions governing Plug-ins and Fragments should be contained in files named "about.html" ("Abouts"). The terms and conditions governing Features and Included Features should be contained in files named "license.html" ("Feature Licenses"). Abouts and Feature Licenses may be located in any directory of a Download or Module including, but not limited to the following locations:

• The top-level (root) directory
• Plug-in and Fragment directories
• Inside Plug-ins and Fragments packaged as JARs
• Sub-directories of the directory named "src" of certain Plug-ins
• Feature directories

Note: if a Feature made available by the Eclipse Foundation is installed using the Provisioning Technology (as defined below), you must agree to a license ("Feature Update License") during the installation process. If the Feature contains Included Features, the Feature Update License should either provide you with the terms and conditions governing the Included Features or inform you where you can locate them. Feature Update Licenses may be found in the "license" property of files named "feature.properties" found within a Feature. Such Abouts, Feature Licenses, and Feature Update Licenses contain the terms and conditions (or references to such terms and conditions) that govern your use of the associated Content in that directory.

THE ABOUTS, FEATURE LICENSES, AND FEATURE UPDATE LICENSES MAY REFER TO THE EPL OR OTHER LICENSE AGREEMENTS, NOTICES OR TERMS AND CONDITIONS. SOME OF THESE OTHER LICENSE AGREEMENTS MAY INCLUDE (BUT ARE NOT LIMITED TO):

IT IS YOUR OBLIGATION TO READ AND ACCEPT ALL SUCH TERMS AND CONDITIONS PRIOR TO USE OF THE CONTENT. If no About, Feature License, or Feature Update License is provided, please contact the Eclipse Foundation to determine what terms and conditions govern that particular Content.

Use of Provisioning Technology

The Eclipse Foundation makes available provisioning software, examples of which include, but are not limited to, p2 and the Eclipse Update Manager ("Provisioning Technology") for the purpose of allowing users to install software, documentation, information and/or other materials (collectively "Installable Software"). This capability is provided with the intent of allowing such users to install, extend and update Eclipse-based products. Information about packaging Installable Software is available at http://eclipse.org/equinox/p2/repository_packaging.html ("Specification").

You may use Provisioning Technology to allow other parties to install Installable Software. You shall be responsible for enabling the applicable license agreements relating to the Installable Software to be presented to, and accepted by, the users of the Provisioning Technology in accordance with the Specification. By using Provisioning Technology in such a manner and making it available in accordance with the Specification, you further acknowledge your agreement to, and the acquisition of all necessary rights to permit the following:

1. A series of actions may occur ("Provisioning Process") in which a user may execute the Provisioning Technology on a machine ("Target Machine") with the intent of installing, extending or updating the functionality of an Eclipse-based product.
2. During the Provisioning Process, the Provisioning Technology may cause third party Installable Software or a portion thereof to be accessed and copied to the Target Machine.
3. Pursuant to the Specification, you will provide to the user the terms and conditions that govern the use of the Installable Software ("Installable Software Agreement") and such Installable Software Agreement shall be accessed from the Target Machine in accordance with the Specification. Such Installable Software Agreement must inform the user of the terms and conditions that govern the Installable Software and must solicit acceptance by the end user in the manner prescribed in such Installable Software Agreement. Upon such indication of agreement by the user, the provisioning Technology will complete installation of the Installable Software.

Cryptography

Content may contain encryption software. The country in which you are currently may have restrictions on the import, possession, and use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check the country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted.

Java and all Java-based trademarks are trademarks of Oracle Corporation in the United States, other countries, or both.

+ + diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/oops.go b/vendor/github.com/eclipse/paho.mqtt.golang/oops.go new file mode 100644 index 000000000000..39630d7f28a5 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/oops.go @@ -0,0 +1,21 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +func chkerr(e error) { + if e != nil { + panic(e) + } +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/options.go b/vendor/github.com/eclipse/paho.mqtt.golang/options.go new file mode 100644 index 000000000000..4a391192d43b --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/options.go @@ -0,0 +1,374 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + * MÃ¥ns Ansgariusson + */ + +// Portions copyright © 2018 TIBCO Software Inc. + +package mqtt + +import ( + "crypto/tls" + "net/http" + "net/url" + "regexp" + "strings" + "time" +) + +// CredentialsProvider allows the username and password to be updated +// before reconnecting. It should return the current username and password. +type CredentialsProvider func() (username string, password string) + +// MessageHandler is a callback type which can be set to be +// executed upon the arrival of messages published to topics +// to which the client is subscribed. +type MessageHandler func(Client, Message) + +// ConnectionLostHandler is a callback type which can be set to be +// executed upon an unintended disconnection from the MQTT broker. +// Disconnects caused by calling Disconnect or ForceDisconnect will +// not cause an OnConnectionLost callback to execute. +type ConnectionLostHandler func(Client, error) + +// OnConnectHandler is a callback that is called when the client +// state changes from unconnected/disconnected to connected. Both +// at initial connection and on reconnection +type OnConnectHandler func(Client) + +// ReconnectHandler is invoked prior to reconnecting after +// the initial connection is lost +type ReconnectHandler func(Client, *ClientOptions) + +// ClientOptions contains configurable options for an Client. 
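+// A typical configuration is built by chaining the setter methods defined
+// below, for example (illustrative only):
+//
+//   opts := mqtt.NewClientOptions().AddBroker("tcp://localhost:1883").SetClientID("sample-client")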
+type ClientOptions struct { + Servers []*url.URL + ClientID string + Username string + Password string + CredentialsProvider CredentialsProvider + CleanSession bool + Order bool + WillEnabled bool + WillTopic string + WillPayload []byte + WillQos byte + WillRetained bool + ProtocolVersion uint + protocolVersionExplicit bool + TLSConfig *tls.Config + KeepAlive int64 + PingTimeout time.Duration + ConnectTimeout time.Duration + MaxReconnectInterval time.Duration + AutoReconnect bool + ConnectRetryInterval time.Duration + ConnectRetry bool + Store Store + DefaultPublishHandler MessageHandler + OnConnect OnConnectHandler + OnConnectionLost ConnectionLostHandler + OnReconnecting ReconnectHandler + WriteTimeout time.Duration + MessageChannelDepth uint + ResumeSubs bool + HTTPHeaders http.Header +} + +// NewClientOptions will create a new ClientClientOptions type with some +// default values. +// Port: 1883 +// CleanSession: True +// Order: True +// KeepAlive: 30 (seconds) +// ConnectTimeout: 30 (seconds) +// MaxReconnectInterval 10 (minutes) +// AutoReconnect: True +func NewClientOptions() *ClientOptions { + o := &ClientOptions{ + Servers: nil, + ClientID: "", + Username: "", + Password: "", + CleanSession: true, + Order: true, + WillEnabled: false, + WillTopic: "", + WillPayload: nil, + WillQos: 0, + WillRetained: false, + ProtocolVersion: 0, + protocolVersionExplicit: false, + KeepAlive: 30, + PingTimeout: 10 * time.Second, + ConnectTimeout: 30 * time.Second, + MaxReconnectInterval: 10 * time.Minute, + AutoReconnect: true, + ConnectRetryInterval: 30 * time.Second, + ConnectRetry: false, + Store: nil, + OnConnect: nil, + OnConnectionLost: DefaultConnectionLostHandler, + WriteTimeout: 0, // 0 represents timeout disabled + ResumeSubs: false, + HTTPHeaders: make(map[string][]string), + } + return o +} + +// AddBroker adds a broker URI to the list of brokers to be used. The format should be +// scheme://host:port +// Where "scheme" is one of "tcp", "ssl", or "ws", "host" is the ip-address (or hostname) +// and "port" is the port on which the broker is accepting connections. +// +// Default values for hostname is "127.0.0.1", for schema is "tcp://". +// +// An example broker URI would look like: tcp://foobar.com:1883 +func (o *ClientOptions) AddBroker(server string) *ClientOptions { + re := regexp.MustCompile(`%(25)?`) + if len(server) > 0 && server[0] == ':' { + server = "127.0.0.1" + server + } + if !strings.Contains(server, "://") { + server = "tcp://" + server + } + server = re.ReplaceAllLiteralString(server, "%25") + brokerURI, err := url.Parse(server) + if err != nil { + ERROR.Println(CLI, "Failed to parse %q broker address: %s", server, err) + return o + } + o.Servers = append(o.Servers, brokerURI) + return o +} + +// SetResumeSubs will enable resuming of stored (un)subscribe messages when connecting +// but not reconnecting if CleanSession is false. Otherwise these messages are discarded. +func (o *ClientOptions) SetResumeSubs(resume bool) *ClientOptions { + o.ResumeSubs = resume + return o +} + +// SetClientID will set the client id to be used by this client when +// connecting to the MQTT broker. According to the MQTT v3.1 specification, +// a client id must be no longer than 23 characters. +func (o *ClientOptions) SetClientID(id string) *ClientOptions { + o.ClientID = id + return o +} + +// SetUsername will set the username to be used by this client when connecting +// to the MQTT broker. Note: without the use of SSL/TLS, this information will +// be sent in plaintext across the wire. 
+func (o *ClientOptions) SetUsername(u string) *ClientOptions { + o.Username = u + return o +} + +// SetPassword will set the password to be used by this client when connecting +// to the MQTT broker. Note: without the use of SSL/TLS, this information will +// be sent in plaintext across the wire. +func (o *ClientOptions) SetPassword(p string) *ClientOptions { + o.Password = p + return o +} + +// SetCredentialsProvider will set a method to be called by this client when +// connecting to the MQTT broker that provide the current username and password. +// Note: without the use of SSL/TLS, this information will be sent +// in plaintext across the wire. +func (o *ClientOptions) SetCredentialsProvider(p CredentialsProvider) *ClientOptions { + o.CredentialsProvider = p + return o +} + +// SetCleanSession will set the "clean session" flag in the connect message +// when this client connects to an MQTT broker. By setting this flag, you are +// indicating that no messages saved by the broker for this client should be +// delivered. Any messages that were going to be sent by this client before +// diconnecting previously but didn't will not be sent upon connecting to the +// broker. +func (o *ClientOptions) SetCleanSession(clean bool) *ClientOptions { + o.CleanSession = clean + return o +} + +// SetOrderMatters will set the message routing to guarantee order within +// each QoS level. By default, this value is true. If set to false, +// this flag indicates that messages can be delivered asynchronously +// from the client to the application and possibly arrive out of order. +func (o *ClientOptions) SetOrderMatters(order bool) *ClientOptions { + o.Order = order + return o +} + +// SetTLSConfig will set an SSL/TLS configuration to be used when connecting +// to an MQTT broker. Please read the official Go documentation for more +// information. +func (o *ClientOptions) SetTLSConfig(t *tls.Config) *ClientOptions { + o.TLSConfig = t + return o +} + +// SetStore will set the implementation of the Store interface +// used to provide message persistence in cases where QoS levels +// QoS_ONE or QoS_TWO are used. If no store is provided, then the +// client will use MemoryStore by default. +func (o *ClientOptions) SetStore(s Store) *ClientOptions { + o.Store = s + return o +} + +// SetKeepAlive will set the amount of time (in seconds) that the client +// should wait before sending a PING request to the broker. This will +// allow the client to know that a connection has not been lost with the +// server. +func (o *ClientOptions) SetKeepAlive(k time.Duration) *ClientOptions { + o.KeepAlive = int64(k / time.Second) + return o +} + +// SetPingTimeout will set the amount of time (in seconds) that the client +// will wait after sending a PING request to the broker, before deciding +// that the connection has been lost. Default is 10 seconds. +func (o *ClientOptions) SetPingTimeout(k time.Duration) *ClientOptions { + o.PingTimeout = k + return o +} + +// SetProtocolVersion sets the MQTT version to be used to connect to the +// broker. Legitimate values are currently 3 - MQTT 3.1 or 4 - MQTT 3.1.1 +func (o *ClientOptions) SetProtocolVersion(pv uint) *ClientOptions { + if (pv >= 3 && pv <= 4) || (pv > 0x80) { + o.ProtocolVersion = pv + o.protocolVersionExplicit = true + } + return o +} + +// UnsetWill will cause any set will message to be disregarded. +func (o *ClientOptions) UnsetWill() *ClientOptions { + o.WillEnabled = false + return o +} + +// SetWill accepts a string will message to be set. 
When the client connects, +// it will give this will message to the broker, which will then publish the +// provided payload (the will) to any clients that are subscribed to the provided +// topic. +func (o *ClientOptions) SetWill(topic string, payload string, qos byte, retained bool) *ClientOptions { + o.SetBinaryWill(topic, []byte(payload), qos, retained) + return o +} + +// SetBinaryWill accepts a []byte will message to be set. When the client connects, +// it will give this will message to the broker, which will then publish the +// provided payload (the will) to any clients that are subscribed to the provided +// topic. +func (o *ClientOptions) SetBinaryWill(topic string, payload []byte, qos byte, retained bool) *ClientOptions { + o.WillEnabled = true + o.WillTopic = topic + o.WillPayload = payload + o.WillQos = qos + o.WillRetained = retained + return o +} + +// SetDefaultPublishHandler sets the MessageHandler that will be called when a message +// is received that does not match any known subscriptions. +func (o *ClientOptions) SetDefaultPublishHandler(defaultHandler MessageHandler) *ClientOptions { + o.DefaultPublishHandler = defaultHandler + return o +} + +// SetOnConnectHandler sets the function to be called when the client is connected. Both +// at initial connection time and upon automatic reconnect. +func (o *ClientOptions) SetOnConnectHandler(onConn OnConnectHandler) *ClientOptions { + o.OnConnect = onConn + return o +} + +// SetConnectionLostHandler will set the OnConnectionLost callback to be executed +// in the case where the client unexpectedly loses connection with the MQTT broker. +func (o *ClientOptions) SetConnectionLostHandler(onLost ConnectionLostHandler) *ClientOptions { + o.OnConnectionLost = onLost + return o +} + +// SetReconnectingHandler sets the OnReconnecting callback to be executed prior +// to the client attempting a reconnect to the MQTT broker. +func (o *ClientOptions) SetReconnectingHandler(cb ReconnectHandler) *ClientOptions { + o.OnReconnecting = cb + return o +} + +// SetWriteTimeout puts a limit on how long a mqtt publish should block until it unblocks with a +// timeout error. A duration of 0 never times out. Default 30 seconds +func (o *ClientOptions) SetWriteTimeout(t time.Duration) *ClientOptions { + o.WriteTimeout = t + return o +} + +// SetConnectTimeout limits how long the client will wait when trying to open a connection +// to an MQTT server before timing out and erroring the attempt. A duration of 0 never times out. +// Default 30 seconds. Currently only operational on TCP/TLS connections. 
+func (o *ClientOptions) SetConnectTimeout(t time.Duration) *ClientOptions { + o.ConnectTimeout = t + return o +} + +// SetMaxReconnectInterval sets the maximum time that will be waited between reconnection attempts +// when connection is lost +func (o *ClientOptions) SetMaxReconnectInterval(t time.Duration) *ClientOptions { + o.MaxReconnectInterval = t + return o +} + +// SetAutoReconnect sets whether the automatic reconnection logic should be used +// when the connection is lost, even if disabled the ConnectionLostHandler is still +// called +func (o *ClientOptions) SetAutoReconnect(a bool) *ClientOptions { + o.AutoReconnect = a + return o +} + +// SetConnectRetryInterval sets the time that will be waited between connection attempts +// when initially connecting if ConnectRetry is TRUE +func (o *ClientOptions) SetConnectRetryInterval(t time.Duration) *ClientOptions { + o.ConnectRetryInterval = t + return o +} + +// SetConnectRetry sets whether the connect function will automatically retry the connection +// in the event of a failure (when true the token returned by the Connect function will +// not complete until the connection is up or it is cancelled) +// If ConnectRetry is true then subscriptions should be requested in OnConnect handler +// Setting this to TRUE permits mesages to be published before the connection is established +func (o *ClientOptions) SetConnectRetry(a bool) *ClientOptions { + o.ConnectRetry = a + return o +} + +// SetMessageChannelDepth DEPRECATED The value set here no longer has any effect, this function +// remains so the API is not altered. +func (o *ClientOptions) SetMessageChannelDepth(s uint) *ClientOptions { + o.MessageChannelDepth = s + return o +} + +// SetHTTPHeaders sets the additional HTTP headers that will be sent in the WebSocket +// opening handshake. +func (o *ClientOptions) SetHTTPHeaders(h http.Header) *ClientOptions { + o.HTTPHeaders = h + return o +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/options_reader.go b/vendor/github.com/eclipse/paho.mqtt.golang/options_reader.go new file mode 100644 index 000000000000..0f252e883956 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/options_reader.go @@ -0,0 +1,161 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "crypto/tls" + "net/http" + "net/url" + "time" +) + +// ClientOptionsReader provides an interface for reading ClientOptions after the client has been initialized. 
+type ClientOptionsReader struct { + options *ClientOptions +} + +//Servers returns a slice of the servers defined in the clientoptions +func (r *ClientOptionsReader) Servers() []*url.URL { + s := make([]*url.URL, len(r.options.Servers)) + + for i, u := range r.options.Servers { + nu := *u + s[i] = &nu + } + + return s +} + +//ResumeSubs returns true if resuming stored (un)sub is enabled +func (r *ClientOptionsReader) ResumeSubs() bool { + s := r.options.ResumeSubs + return s +} + +//ClientID returns the set client id +func (r *ClientOptionsReader) ClientID() string { + s := r.options.ClientID + return s +} + +//Username returns the set username +func (r *ClientOptionsReader) Username() string { + s := r.options.Username + return s +} + +//Password returns the set password +func (r *ClientOptionsReader) Password() string { + s := r.options.Password + return s +} + +//CleanSession returns whether Cleansession is set +func (r *ClientOptionsReader) CleanSession() bool { + s := r.options.CleanSession + return s +} + +func (r *ClientOptionsReader) Order() bool { + s := r.options.Order + return s +} + +func (r *ClientOptionsReader) WillEnabled() bool { + s := r.options.WillEnabled + return s +} + +func (r *ClientOptionsReader) WillTopic() string { + s := r.options.WillTopic + return s +} + +func (r *ClientOptionsReader) WillPayload() []byte { + s := r.options.WillPayload + return s +} + +func (r *ClientOptionsReader) WillQos() byte { + s := r.options.WillQos + return s +} + +func (r *ClientOptionsReader) WillRetained() bool { + s := r.options.WillRetained + return s +} + +func (r *ClientOptionsReader) ProtocolVersion() uint { + s := r.options.ProtocolVersion + return s +} + +func (r *ClientOptionsReader) TLSConfig() *tls.Config { + s := r.options.TLSConfig + return s +} + +func (r *ClientOptionsReader) KeepAlive() time.Duration { + s := time.Duration(r.options.KeepAlive * int64(time.Second)) + return s +} + +func (r *ClientOptionsReader) PingTimeout() time.Duration { + s := r.options.PingTimeout + return s +} + +func (r *ClientOptionsReader) ConnectTimeout() time.Duration { + s := r.options.ConnectTimeout + return s +} + +func (r *ClientOptionsReader) MaxReconnectInterval() time.Duration { + s := r.options.MaxReconnectInterval + return s +} + +func (r *ClientOptionsReader) AutoReconnect() bool { + s := r.options.AutoReconnect + return s +} + +//ConnectRetryInterval returns the delay between retries on the initial connection (if ConnectRetry true) +func (r *ClientOptionsReader) ConnectRetryInterval() time.Duration { + s := r.options.ConnectRetryInterval + return s +} + +//ConnectRetry returns whether the initial connection request will be retried until connection established +func (r *ClientOptionsReader) ConnectRetry() bool { + s := r.options.ConnectRetry + return s +} + +func (r *ClientOptionsReader) WriteTimeout() time.Duration { + s := r.options.WriteTimeout + return s +} + +func (r *ClientOptionsReader) MessageChannelDepth() uint { + s := r.options.MessageChannelDepth + return s +} + +func (r *ClientOptionsReader) HTTPHeaders() http.Header { + h := r.options.HTTPHeaders + return h +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/connack.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/connack.go new file mode 100644 index 000000000000..de58c81e9ee1 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/connack.go @@ -0,0 +1,52 @@ +package packets + +import ( + "bytes" + "fmt" + "io" +) + +//ConnackPacket is an internal representation of the fields of 
the +//Connack MQTT packet +type ConnackPacket struct { + FixedHeader + SessionPresent bool + ReturnCode byte +} + +func (ca *ConnackPacket) String() string { + return fmt.Sprintf("%s sessionpresent: %t returncode: %d", ca.FixedHeader, ca.SessionPresent, ca.ReturnCode) +} + +func (ca *ConnackPacket) Write(w io.Writer) error { + var body bytes.Buffer + var err error + + body.WriteByte(boolToByte(ca.SessionPresent)) + body.WriteByte(ca.ReturnCode) + ca.FixedHeader.RemainingLength = 2 + packet := ca.FixedHeader.pack() + packet.Write(body.Bytes()) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (ca *ConnackPacket) Unpack(b io.Reader) error { + flags, err := decodeByte(b) + if err != nil { + return err + } + ca.SessionPresent = 1&flags > 0 + ca.ReturnCode, err = decodeByte(b) + + return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (ca *ConnackPacket) Details() Details { + return Details{Qos: 0, MessageID: 0} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/connect.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/connect.go new file mode 100644 index 000000000000..da0fc05226e0 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/connect.go @@ -0,0 +1,151 @@ +package packets + +import ( + "bytes" + "fmt" + "io" +) + +//ConnectPacket is an internal representation of the fields of the +//Connect MQTT packet +type ConnectPacket struct { + FixedHeader + ProtocolName string + ProtocolVersion byte + CleanSession bool + WillFlag bool + WillQos byte + WillRetain bool + UsernameFlag bool + PasswordFlag bool + ReservedBit byte + Keepalive uint16 + + ClientIdentifier string + WillTopic string + WillMessage []byte + Username string + Password []byte +} + +func (c *ConnectPacket) String() string { + return fmt.Sprintf("%s protocolversion: %d protocolname: %s cleansession: %t willflag: %t WillQos: %d WillRetain: %t Usernameflag: %t Passwordflag: %t keepalive: %d clientId: %s willtopic: %s willmessage: %s Username: %s Password: %s", c.FixedHeader, c.ProtocolVersion, c.ProtocolName, c.CleanSession, c.WillFlag, c.WillQos, c.WillRetain, c.UsernameFlag, c.PasswordFlag, c.Keepalive, c.ClientIdentifier, c.WillTopic, c.WillMessage, c.Username, c.Password) +} + +func (c *ConnectPacket) Write(w io.Writer) error { + var body bytes.Buffer + var err error + + body.Write(encodeString(c.ProtocolName)) + body.WriteByte(c.ProtocolVersion) + body.WriteByte(boolToByte(c.CleanSession)<<1 | boolToByte(c.WillFlag)<<2 | c.WillQos<<3 | boolToByte(c.WillRetain)<<5 | boolToByte(c.PasswordFlag)<<6 | boolToByte(c.UsernameFlag)<<7) + body.Write(encodeUint16(c.Keepalive)) + body.Write(encodeString(c.ClientIdentifier)) + if c.WillFlag { + body.Write(encodeString(c.WillTopic)) + body.Write(encodeBytes(c.WillMessage)) + } + if c.UsernameFlag { + body.Write(encodeString(c.Username)) + } + if c.PasswordFlag { + body.Write(encodeBytes(c.Password)) + } + c.FixedHeader.RemainingLength = body.Len() + packet := c.FixedHeader.pack() + packet.Write(body.Bytes()) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (c *ConnectPacket) Unpack(b io.Reader) error { + var err error + c.ProtocolName, err = decodeString(b) + if err != nil { + return err + } + c.ProtocolVersion, err = decodeByte(b) + if err != nil { + return err + } + options, err := decodeByte(b) + if 
err != nil { + return err + } + c.ReservedBit = 1 & options + c.CleanSession = 1&(options>>1) > 0 + c.WillFlag = 1&(options>>2) > 0 + c.WillQos = 3 & (options >> 3) + c.WillRetain = 1&(options>>5) > 0 + c.PasswordFlag = 1&(options>>6) > 0 + c.UsernameFlag = 1&(options>>7) > 0 + c.Keepalive, err = decodeUint16(b) + if err != nil { + return err + } + c.ClientIdentifier, err = decodeString(b) + if err != nil { + return err + } + if c.WillFlag { + c.WillTopic, err = decodeString(b) + if err != nil { + return err + } + c.WillMessage, err = decodeBytes(b) + if err != nil { + return err + } + } + if c.UsernameFlag { + c.Username, err = decodeString(b) + if err != nil { + return err + } + } + if c.PasswordFlag { + c.Password, err = decodeBytes(b) + if err != nil { + return err + } + } + + return nil +} + +//Validate performs validation of the fields of a Connect packet +func (c *ConnectPacket) Validate() byte { + if c.PasswordFlag && !c.UsernameFlag { + return ErrRefusedBadUsernameOrPassword + } + if c.ReservedBit != 0 { + //Bad reserved bit + return ErrProtocolViolation + } + if (c.ProtocolName == "MQIsdp" && c.ProtocolVersion != 3) || (c.ProtocolName == "MQTT" && c.ProtocolVersion != 4) { + //Mismatched or unsupported protocol version + return ErrRefusedBadProtocolVersion + } + if c.ProtocolName != "MQIsdp" && c.ProtocolName != "MQTT" { + //Bad protocol name + return ErrProtocolViolation + } + if len(c.ClientIdentifier) > 65535 || len(c.Username) > 65535 || len(c.Password) > 65535 { + //Bad size field + return ErrProtocolViolation + } + if len(c.ClientIdentifier) == 0 && !c.CleanSession { + //Bad client identifier + return ErrRefusedIDRejected + } + return Accepted +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (c *ConnectPacket) Details() Details { + return Details{Qos: 0, MessageID: 0} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/disconnect.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/disconnect.go new file mode 100644 index 000000000000..c8d374508cf2 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/disconnect.go @@ -0,0 +1,34 @@ +package packets + +import ( + "io" +) + +//DisconnectPacket is an internal representation of the fields of the +//Disconnect MQTT packet +type DisconnectPacket struct { + FixedHeader +} + +func (d *DisconnectPacket) String() string { + return d.FixedHeader.String() +} + +func (d *DisconnectPacket) Write(w io.Writer) error { + packet := d.FixedHeader.pack() + _, err := packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (d *DisconnectPacket) Unpack(b io.Reader) error { + return nil +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (d *DisconnectPacket) Details() Details { + return Details{Qos: 0, MessageID: 0} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/packets.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/packets.go new file mode 100644 index 000000000000..42eeb46d39c9 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/packets.go @@ -0,0 +1,346 @@ +package packets + +import ( + "bytes" + "encoding/binary" + "errors" + "fmt" + "io" +) + +//ControlPacket defines the interface for structs intended to hold +//decoded MQTT packets, either from being read or before being +//written +type ControlPacket interface { + Write(io.Writer) error + Unpack(io.Reader) error + String() 
string + Details() Details +} + +//PacketNames maps the constants for each of the MQTT packet types +//to a string representation of their name. +var PacketNames = map[uint8]string{ + 1: "CONNECT", + 2: "CONNACK", + 3: "PUBLISH", + 4: "PUBACK", + 5: "PUBREC", + 6: "PUBREL", + 7: "PUBCOMP", + 8: "SUBSCRIBE", + 9: "SUBACK", + 10: "UNSUBSCRIBE", + 11: "UNSUBACK", + 12: "PINGREQ", + 13: "PINGRESP", + 14: "DISCONNECT", +} + +//Below are the constants assigned to each of the MQTT packet types +const ( + Connect = 1 + Connack = 2 + Publish = 3 + Puback = 4 + Pubrec = 5 + Pubrel = 6 + Pubcomp = 7 + Subscribe = 8 + Suback = 9 + Unsubscribe = 10 + Unsuback = 11 + Pingreq = 12 + Pingresp = 13 + Disconnect = 14 +) + +//Below are the const definitions for error codes returned by +//Connect() +const ( + Accepted = 0x00 + ErrRefusedBadProtocolVersion = 0x01 + ErrRefusedIDRejected = 0x02 + ErrRefusedServerUnavailable = 0x03 + ErrRefusedBadUsernameOrPassword = 0x04 + ErrRefusedNotAuthorised = 0x05 + ErrNetworkError = 0xFE + ErrProtocolViolation = 0xFF +) + +//ConnackReturnCodes is a map of the error codes constants for Connect() +//to a string representation of the error +var ConnackReturnCodes = map[uint8]string{ + 0: "Connection Accepted", + 1: "Connection Refused: Bad Protocol Version", + 2: "Connection Refused: Client Identifier Rejected", + 3: "Connection Refused: Server Unavailable", + 4: "Connection Refused: Username or Password in unknown format", + 5: "Connection Refused: Not Authorised", + 254: "Connection Error", + 255: "Connection Refused: Protocol Violation", +} + +//ConnErrors is a map of the errors codes constants for Connect() +//to a Go error +var ConnErrors = map[byte]error{ + Accepted: nil, + ErrRefusedBadProtocolVersion: errors.New("Unnacceptable protocol version"), + ErrRefusedIDRejected: errors.New("Identifier rejected"), + ErrRefusedServerUnavailable: errors.New("Server Unavailable"), + ErrRefusedBadUsernameOrPassword: errors.New("Bad user name or password"), + ErrRefusedNotAuthorised: errors.New("Not Authorized"), + ErrNetworkError: errors.New("Network Error"), + ErrProtocolViolation: errors.New("Protocol Violation"), +} + +//ReadPacket takes an instance of an io.Reader (such as net.Conn) and attempts +//to read an MQTT packet from the stream. It returns a ControlPacket +//representing the decoded MQTT packet and an error. One of these returns will +//always be nil, a nil ControlPacket indicating an error occurred. +func ReadPacket(r io.Reader) (ControlPacket, error) { + var fh FixedHeader + b := make([]byte, 1) + + _, err := io.ReadFull(r, b) + if err != nil { + return nil, err + } + + err = fh.unpack(b[0], r) + if err != nil { + return nil, err + } + + cp, err := NewControlPacketWithHeader(fh) + if err != nil { + return nil, err + } + + packetBytes := make([]byte, fh.RemainingLength) + n, err := io.ReadFull(r, packetBytes) + if err != nil { + return nil, err + } + if n != fh.RemainingLength { + return nil, errors.New("Failed to read expected data") + } + + err = cp.Unpack(bytes.NewBuffer(packetBytes)) + return cp, err +} + +//NewControlPacket is used to create a new ControlPacket of the type specified +//by packetType, this is usually done by reference to the packet type constants +//defined in packets.go. The newly created ControlPacket is empty and a pointer +//is returned. 
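+//For example (illustrative only):
+//  cp := NewControlPacket(Publish).(*PublishPacket)
+//  cp.TopicName = "a/b"
+//  cp.Payload = []byte("payload")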
+func NewControlPacket(packetType byte) ControlPacket { + switch packetType { + case Connect: + return &ConnectPacket{FixedHeader: FixedHeader{MessageType: Connect}} + case Connack: + return &ConnackPacket{FixedHeader: FixedHeader{MessageType: Connack}} + case Disconnect: + return &DisconnectPacket{FixedHeader: FixedHeader{MessageType: Disconnect}} + case Publish: + return &PublishPacket{FixedHeader: FixedHeader{MessageType: Publish}} + case Puback: + return &PubackPacket{FixedHeader: FixedHeader{MessageType: Puback}} + case Pubrec: + return &PubrecPacket{FixedHeader: FixedHeader{MessageType: Pubrec}} + case Pubrel: + return &PubrelPacket{FixedHeader: FixedHeader{MessageType: Pubrel, Qos: 1}} + case Pubcomp: + return &PubcompPacket{FixedHeader: FixedHeader{MessageType: Pubcomp}} + case Subscribe: + return &SubscribePacket{FixedHeader: FixedHeader{MessageType: Subscribe, Qos: 1}} + case Suback: + return &SubackPacket{FixedHeader: FixedHeader{MessageType: Suback}} + case Unsubscribe: + return &UnsubscribePacket{FixedHeader: FixedHeader{MessageType: Unsubscribe, Qos: 1}} + case Unsuback: + return &UnsubackPacket{FixedHeader: FixedHeader{MessageType: Unsuback}} + case Pingreq: + return &PingreqPacket{FixedHeader: FixedHeader{MessageType: Pingreq}} + case Pingresp: + return &PingrespPacket{FixedHeader: FixedHeader{MessageType: Pingresp}} + } + return nil +} + +//NewControlPacketWithHeader is used to create a new ControlPacket of the type +//specified within the FixedHeader that is passed to the function. +//The newly created ControlPacket is empty and a pointer is returned. +func NewControlPacketWithHeader(fh FixedHeader) (ControlPacket, error) { + switch fh.MessageType { + case Connect: + return &ConnectPacket{FixedHeader: fh}, nil + case Connack: + return &ConnackPacket{FixedHeader: fh}, nil + case Disconnect: + return &DisconnectPacket{FixedHeader: fh}, nil + case Publish: + return &PublishPacket{FixedHeader: fh}, nil + case Puback: + return &PubackPacket{FixedHeader: fh}, nil + case Pubrec: + return &PubrecPacket{FixedHeader: fh}, nil + case Pubrel: + return &PubrelPacket{FixedHeader: fh}, nil + case Pubcomp: + return &PubcompPacket{FixedHeader: fh}, nil + case Subscribe: + return &SubscribePacket{FixedHeader: fh}, nil + case Suback: + return &SubackPacket{FixedHeader: fh}, nil + case Unsubscribe: + return &UnsubscribePacket{FixedHeader: fh}, nil + case Unsuback: + return &UnsubackPacket{FixedHeader: fh}, nil + case Pingreq: + return &PingreqPacket{FixedHeader: fh}, nil + case Pingresp: + return &PingrespPacket{FixedHeader: fh}, nil + } + return nil, fmt.Errorf("unsupported packet type 0x%x", fh.MessageType) +} + +//Details struct returned by the Details() function called on +//ControlPackets to present details of the Qos and MessageID +//of the ControlPacket +type Details struct { + Qos byte + MessageID uint16 +} + +//FixedHeader is a struct to hold the decoded information from +//the fixed header of an MQTT ControlPacket +type FixedHeader struct { + MessageType byte + Dup bool + Qos byte + Retain bool + RemainingLength int +} + +func (fh FixedHeader) String() string { + return fmt.Sprintf("%s: dup: %t qos: %d retain: %t rLength: %d", PacketNames[fh.MessageType], fh.Dup, fh.Qos, fh.Retain, fh.RemainingLength) +} + +func boolToByte(b bool) byte { + switch b { + case true: + return 1 + default: + return 0 + } +} + +func (fh *FixedHeader) pack() bytes.Buffer { + var header bytes.Buffer + header.WriteByte(fh.MessageType<<4 | boolToByte(fh.Dup)<<3 | fh.Qos<<1 | boolToByte(fh.Retain)) + 
header.Write(encodeLength(fh.RemainingLength)) + return header +} + +func (fh *FixedHeader) unpack(typeAndFlags byte, r io.Reader) error { + fh.MessageType = typeAndFlags >> 4 + fh.Dup = (typeAndFlags>>3)&0x01 > 0 + fh.Qos = (typeAndFlags >> 1) & 0x03 + fh.Retain = typeAndFlags&0x01 > 0 + + var err error + fh.RemainingLength, err = decodeLength(r) + return err +} + +func decodeByte(b io.Reader) (byte, error) { + num := make([]byte, 1) + _, err := b.Read(num) + if err != nil { + return 0, err + } + + return num[0], nil +} + +func decodeUint16(b io.Reader) (uint16, error) { + num := make([]byte, 2) + _, err := b.Read(num) + if err != nil { + return 0, err + } + return binary.BigEndian.Uint16(num), nil +} + +func encodeUint16(num uint16) []byte { + bytes := make([]byte, 2) + binary.BigEndian.PutUint16(bytes, num) + return bytes +} + +func encodeString(field string) []byte { + return encodeBytes([]byte(field)) +} + +func decodeString(b io.Reader) (string, error) { + buf, err := decodeBytes(b) + return string(buf), err +} + +func decodeBytes(b io.Reader) ([]byte, error) { + fieldLength, err := decodeUint16(b) + if err != nil { + return nil, err + } + + field := make([]byte, fieldLength) + _, err = b.Read(field) + if err != nil { + return nil, err + } + + return field, nil +} + +func encodeBytes(field []byte) []byte { + fieldLength := make([]byte, 2) + binary.BigEndian.PutUint16(fieldLength, uint16(len(field))) + return append(fieldLength, field...) +} + +func encodeLength(length int) []byte { + var encLength []byte + for { + digit := byte(length % 128) + length /= 128 + if length > 0 { + digit |= 0x80 + } + encLength = append(encLength, digit) + if length == 0 { + break + } + } + return encLength +} + +func decodeLength(r io.Reader) (int, error) { + var rLength uint32 + var multiplier uint32 + b := make([]byte, 1) + for multiplier < 27 { //fix: Infinite '(digit & 128) == 1' will cause the dead loop + _, err := io.ReadFull(r, b) + if err != nil { + return 0, err + } + + digit := b[0] + rLength |= uint32(digit&127) << multiplier + if (digit & 128) == 0 { + break + } + multiplier += 7 + } + return int(rLength), nil +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/pingreq.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pingreq.go new file mode 100644 index 000000000000..27e49fedaddc --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pingreq.go @@ -0,0 +1,34 @@ +package packets + +import ( + "io" +) + +//PingreqPacket is an internal representation of the fields of the +//Pingreq MQTT packet +type PingreqPacket struct { + FixedHeader +} + +func (pr *PingreqPacket) String() string { + return pr.FixedHeader.String() +} + +func (pr *PingreqPacket) Write(w io.Writer) error { + packet := pr.FixedHeader.pack() + _, err := packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (pr *PingreqPacket) Unpack(b io.Reader) error { + return nil +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (pr *PingreqPacket) Details() Details { + return Details{Qos: 0, MessageID: 0} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/pingresp.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pingresp.go new file mode 100644 index 000000000000..86f6d926123f --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pingresp.go @@ -0,0 +1,34 @@ +package packets + +import ( + "io" +) + +//PingrespPacket is an internal 
representation of the fields of the +//Pingresp MQTT packet +type PingrespPacket struct { + FixedHeader +} + +func (pr *PingrespPacket) String() string { + return pr.FixedHeader.String() +} + +func (pr *PingrespPacket) Write(w io.Writer) error { + packet := pr.FixedHeader.pack() + _, err := packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (pr *PingrespPacket) Unpack(b io.Reader) error { + return nil +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (pr *PingrespPacket) Details() Details { + return Details{Qos: 0, MessageID: 0} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/puback.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/puback.go new file mode 100644 index 000000000000..4f2ee0bbd1c2 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/puback.go @@ -0,0 +1,42 @@ +package packets + +import ( + "fmt" + "io" +) + +//PubackPacket is an internal representation of the fields of the +//Puback MQTT packet +type PubackPacket struct { + FixedHeader + MessageID uint16 +} + +func (pa *PubackPacket) String() string { + return fmt.Sprintf("%s MessageID: %d", pa.FixedHeader, pa.MessageID) +} + +func (pa *PubackPacket) Write(w io.Writer) error { + var err error + pa.FixedHeader.RemainingLength = 2 + packet := pa.FixedHeader.pack() + packet.Write(encodeUint16(pa.MessageID)) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (pa *PubackPacket) Unpack(b io.Reader) error { + var err error + pa.MessageID, err = decodeUint16(b) + + return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (pa *PubackPacket) Details() Details { + return Details{Qos: pa.Qos, MessageID: pa.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubcomp.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubcomp.go new file mode 100644 index 000000000000..494e2d52192e --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubcomp.go @@ -0,0 +1,42 @@ +package packets + +import ( + "fmt" + "io" +) + +//PubcompPacket is an internal representation of the fields of the +//Pubcomp MQTT packet +type PubcompPacket struct { + FixedHeader + MessageID uint16 +} + +func (pc *PubcompPacket) String() string { + return fmt.Sprintf("%s MessageID: %d", pc.FixedHeader, pc.MessageID) +} + +func (pc *PubcompPacket) Write(w io.Writer) error { + var err error + pc.FixedHeader.RemainingLength = 2 + packet := pc.FixedHeader.pack() + packet.Write(encodeUint16(pc.MessageID)) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (pc *PubcompPacket) Unpack(b io.Reader) error { + var err error + pc.MessageID, err = decodeUint16(b) + + return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (pc *PubcompPacket) Details() Details { + return Details{Qos: pc.Qos, MessageID: pc.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/publish.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/publish.go new file mode 100644 index 000000000000..7d425780c0ae --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/publish.go @@ -0,0 +1,83 @@ +package packets + +import ( + "bytes" + "fmt" + "io" +) + 
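// Note, for illustration only: the Remaining Length field of the fixed header
// (written and read by encodeLength/decodeLength in packets.go above) uses
// MQTT's variable-length integer encoding, 7 bits of value per byte with the
// top bit set when another byte follows. Two worked examples:
//
//	encodeLength(64)  // -> []byte{0x40}        (fits in one byte)
//	encodeLength(321) // -> []byte{0xc1, 0x02}  (321 = 65 + 2*128, so 0x41|0x80 then 0x02)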
+//PublishPacket is an internal representation of the fields of the +//Publish MQTT packet +type PublishPacket struct { + FixedHeader + TopicName string + MessageID uint16 + Payload []byte +} + +func (p *PublishPacket) String() string { + return fmt.Sprintf("%s topicName: %s MessageID: %d payload: %s", p.FixedHeader, p.TopicName, p.MessageID, string(p.Payload)) +} + +func (p *PublishPacket) Write(w io.Writer) error { + var body bytes.Buffer + var err error + + body.Write(encodeString(p.TopicName)) + if p.Qos > 0 { + body.Write(encodeUint16(p.MessageID)) + } + p.FixedHeader.RemainingLength = body.Len() + len(p.Payload) + packet := p.FixedHeader.pack() + packet.Write(body.Bytes()) + packet.Write(p.Payload) + _, err = w.Write(packet.Bytes()) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (p *PublishPacket) Unpack(b io.Reader) error { + var payloadLength = p.FixedHeader.RemainingLength + var err error + p.TopicName, err = decodeString(b) + if err != nil { + return err + } + + if p.Qos > 0 { + p.MessageID, err = decodeUint16(b) + if err != nil { + return err + } + payloadLength -= len(p.TopicName) + 4 + } else { + payloadLength -= len(p.TopicName) + 2 + } + if payloadLength < 0 { + return fmt.Errorf("Error unpacking publish, payload length < 0") + } + p.Payload = make([]byte, payloadLength) + _, err = b.Read(p.Payload) + + return err +} + +//Copy creates a new PublishPacket with the same topic and payload +//but an empty fixed header, useful for when you want to deliver +//a message with different properties such as Qos but the same +//content +func (p *PublishPacket) Copy() *PublishPacket { + newP := NewControlPacket(Publish).(*PublishPacket) + newP.TopicName = p.TopicName + newP.Payload = p.Payload + + return newP +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (p *PublishPacket) Details() Details { + return Details{Qos: p.Qos, MessageID: p.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubrec.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubrec.go new file mode 100644 index 000000000000..7056089f9d6a --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubrec.go @@ -0,0 +1,42 @@ +package packets + +import ( + "fmt" + "io" +) + +//PubrecPacket is an internal representation of the fields of the +//Pubrec MQTT packet +type PubrecPacket struct { + FixedHeader + MessageID uint16 +} + +func (pr *PubrecPacket) String() string { + return fmt.Sprintf("%s MessageID: %d", pr.FixedHeader, pr.MessageID) +} + +func (pr *PubrecPacket) Write(w io.Writer) error { + var err error + pr.FixedHeader.RemainingLength = 2 + packet := pr.FixedHeader.pack() + packet.Write(encodeUint16(pr.MessageID)) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (pr *PubrecPacket) Unpack(b io.Reader) error { + var err error + pr.MessageID, err = decodeUint16(b) + + return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (pr *PubrecPacket) Details() Details { + return Details{Qos: pr.Qos, MessageID: pr.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubrel.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubrel.go new file mode 100644 index 000000000000..27d7d32d81c7 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/pubrel.go @@ -0,0 +1,42 @@ 
+package packets + +import ( + "fmt" + "io" +) + +//PubrelPacket is an internal representation of the fields of the +//Pubrel MQTT packet +type PubrelPacket struct { + FixedHeader + MessageID uint16 +} + +func (pr *PubrelPacket) String() string { + return fmt.Sprintf("%s MessageID: %d", pr.FixedHeader, pr.MessageID) +} + +func (pr *PubrelPacket) Write(w io.Writer) error { + var err error + pr.FixedHeader.RemainingLength = 2 + packet := pr.FixedHeader.pack() + packet.Write(encodeUint16(pr.MessageID)) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (pr *PubrelPacket) Unpack(b io.Reader) error { + var err error + pr.MessageID, err = decodeUint16(b) + + return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (pr *PubrelPacket) Details() Details { + return Details{Qos: pr.Qos, MessageID: pr.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/suback.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/suback.go new file mode 100644 index 000000000000..263e19c65805 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/suback.go @@ -0,0 +1,57 @@ +package packets + +import ( + "bytes" + "fmt" + "io" +) + +//SubackPacket is an internal representation of the fields of the +//Suback MQTT packet +type SubackPacket struct { + FixedHeader + MessageID uint16 + ReturnCodes []byte +} + +func (sa *SubackPacket) String() string { + return fmt.Sprintf("%s MessageID: %d", sa.FixedHeader, sa.MessageID) +} + +func (sa *SubackPacket) Write(w io.Writer) error { + var body bytes.Buffer + var err error + body.Write(encodeUint16(sa.MessageID)) + body.Write(sa.ReturnCodes) + sa.FixedHeader.RemainingLength = body.Len() + packet := sa.FixedHeader.pack() + packet.Write(body.Bytes()) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (sa *SubackPacket) Unpack(b io.Reader) error { + var qosBuffer bytes.Buffer + var err error + sa.MessageID, err = decodeUint16(b) + if err != nil { + return err + } + + _, err = qosBuffer.ReadFrom(b) + if err != nil { + return err + } + sa.ReturnCodes = qosBuffer.Bytes() + + return nil +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (sa *SubackPacket) Details() Details { + return Details{Qos: 0, MessageID: sa.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/subscribe.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/subscribe.go new file mode 100644 index 000000000000..ab0e9734e422 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/subscribe.go @@ -0,0 +1,69 @@ +package packets + +import ( + "bytes" + "fmt" + "io" +) + +//SubscribePacket is an internal representation of the fields of the +//Subscribe MQTT packet +type SubscribePacket struct { + FixedHeader + MessageID uint16 + Topics []string + Qoss []byte +} + +func (s *SubscribePacket) String() string { + return fmt.Sprintf("%s MessageID: %d topics: %s", s.FixedHeader, s.MessageID, s.Topics) +} + +func (s *SubscribePacket) Write(w io.Writer) error { + var body bytes.Buffer + var err error + + body.Write(encodeUint16(s.MessageID)) + for i, topic := range s.Topics { + body.Write(encodeString(topic)) + body.WriteByte(s.Qoss[i]) + } + s.FixedHeader.RemainingLength = body.Len() + packet := s.FixedHeader.pack() + packet.Write(body.Bytes()) + _, err 
= packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (s *SubscribePacket) Unpack(b io.Reader) error { + var err error + s.MessageID, err = decodeUint16(b) + if err != nil { + return err + } + payloadLength := s.FixedHeader.RemainingLength - 2 + for payloadLength > 0 { + topic, err := decodeString(b) + if err != nil { + return err + } + s.Topics = append(s.Topics, topic) + qos, err := decodeByte(b) + if err != nil { + return err + } + s.Qoss = append(s.Qoss, qos) + payloadLength -= 2 + len(topic) + 1 //2 bytes of string length, plus string, plus 1 byte for Qos + } + + return nil +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (s *SubscribePacket) Details() Details { + return Details{Qos: 1, MessageID: s.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/unsuback.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/unsuback.go new file mode 100644 index 000000000000..c6b2591d1e5d --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/unsuback.go @@ -0,0 +1,42 @@ +package packets + +import ( + "fmt" + "io" +) + +//UnsubackPacket is an internal representation of the fields of the +//Unsuback MQTT packet +type UnsubackPacket struct { + FixedHeader + MessageID uint16 +} + +func (ua *UnsubackPacket) String() string { + return fmt.Sprintf("%s MessageID: %d", ua.FixedHeader, ua.MessageID) +} + +func (ua *UnsubackPacket) Write(w io.Writer) error { + var err error + ua.FixedHeader.RemainingLength = 2 + packet := ua.FixedHeader.pack() + packet.Write(encodeUint16(ua.MessageID)) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (ua *UnsubackPacket) Unpack(b io.Reader) error { + var err error + ua.MessageID, err = decodeUint16(b) + + return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (ua *UnsubackPacket) Details() Details { + return Details{Qos: 0, MessageID: ua.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/packets/unsubscribe.go b/vendor/github.com/eclipse/paho.mqtt.golang/packets/unsubscribe.go new file mode 100644 index 000000000000..e7a53bdecaf3 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/packets/unsubscribe.go @@ -0,0 +1,56 @@ +package packets + +import ( + "bytes" + "fmt" + "io" +) + +//UnsubscribePacket is an internal representation of the fields of the +//Unsubscribe MQTT packet +type UnsubscribePacket struct { + FixedHeader + MessageID uint16 + Topics []string +} + +func (u *UnsubscribePacket) String() string { + return fmt.Sprintf("%s MessageID: %d", u.FixedHeader, u.MessageID) +} + +func (u *UnsubscribePacket) Write(w io.Writer) error { + var body bytes.Buffer + var err error + body.Write(encodeUint16(u.MessageID)) + for _, topic := range u.Topics { + body.Write(encodeString(topic)) + } + u.FixedHeader.RemainingLength = body.Len() + packet := u.FixedHeader.pack() + packet.Write(body.Bytes()) + _, err = packet.WriteTo(w) + + return err +} + +//Unpack decodes the details of a ControlPacket after the fixed +//header has been read +func (u *UnsubscribePacket) Unpack(b io.Reader) error { + var err error + u.MessageID, err = decodeUint16(b) + if err != nil { + return err + } + + for topic, err := decodeString(b); err == nil && topic != ""; topic, err = decodeString(b) { + u.Topics = append(u.Topics, topic) + } + + 
return err +} + +//Details returns a Details struct containing the Qos and +//MessageID of this ControlPacket +func (u *UnsubscribePacket) Details() Details { + return Details{Qos: 1, MessageID: u.MessageID} +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/ping.go b/vendor/github.com/eclipse/paho.mqtt.golang/ping.go new file mode 100644 index 000000000000..dbc1ff454b36 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/ping.go @@ -0,0 +1,69 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "errors" + "sync/atomic" + "time" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +func keepalive(c *client) { + defer c.workers.Done() + DEBUG.Println(PNG, "keepalive starting") + var checkInterval int64 + var pingSent time.Time + + if c.options.KeepAlive > 10 { + checkInterval = 5 + } else { + checkInterval = c.options.KeepAlive / 2 + } + + intervalTicker := time.NewTicker(time.Duration(checkInterval * int64(time.Second))) + defer intervalTicker.Stop() + + for { + select { + case <-c.stop: + DEBUG.Println(PNG, "keepalive stopped") + return + case <-intervalTicker.C: + lastSent := c.lastSent.Load().(time.Time) + lastReceived := c.lastReceived.Load().(time.Time) + + DEBUG.Println(PNG, "ping check", time.Since(lastSent).Seconds()) + if time.Since(lastSent) >= time.Duration(c.options.KeepAlive*int64(time.Second)) || time.Since(lastReceived) >= time.Duration(c.options.KeepAlive*int64(time.Second)) { + if atomic.LoadInt32(&c.pingOutstanding) == 0 { + DEBUG.Println(PNG, "keepalive sending ping") + ping := packets.NewControlPacket(packets.Pingreq).(*packets.PingreqPacket) + //We don't want to wait behind large messages being sent, the Write call + //will block until it it able to send the packet. + atomic.StoreInt32(&c.pingOutstanding, 1) + ping.Write(c.conn) + c.lastSent.Store(time.Now()) + pingSent = time.Now() + } + } + if atomic.LoadInt32(&c.pingOutstanding) > 0 && time.Since(pingSent) >= c.options.PingTimeout { + CRITICAL.Println(PNG, "pingresp not received, disconnecting") + c.errors <- errors.New("pingresp not received, disconnecting") + return + } + } + } +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/router.go b/vendor/github.com/eclipse/paho.mqtt.golang/router.go new file mode 100644 index 000000000000..dd55e0ddc3da --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/router.go @@ -0,0 +1,181 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "container/list" + "strings" + "sync" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +// route is a type which associates MQTT Topic strings with a +// callback to be executed upon the arrival of a message associated +// with a subscription to that topic. 
+type route struct { + topic string + callback MessageHandler +} + +// match takes a slice of strings which represent the route being tested having been split on '/' +// separators, and a slice of strings representing the topic string in the published message, similarly +// split. +// The function determines if the topic string matches the route according to the MQTT topic rules +// and returns a boolean of the outcome +func match(route []string, topic []string) bool { + if len(route) == 0 { + return len(topic) == 0 + } + + if len(topic) == 0 { + return route[0] == "#" + } + + if route[0] == "#" { + return true + } + + if (route[0] == "+") || (route[0] == topic[0]) { + return match(route[1:], topic[1:]) + } + return false +} + +func routeIncludesTopic(route, topic string) bool { + return match(routeSplit(route), strings.Split(topic, "/")) +} + +// removes $share and sharename when splitting the route to allow +// shared subscription routes to correctly match the topic +func routeSplit(route string) []string { + var result []string + if strings.HasPrefix(route, "$share") { + result = strings.Split(route, "/")[2:] + } else { + result = strings.Split(route, "/") + } + return result +} + +// match takes the topic string of the published message and does a basic compare to the +// string of the current Route, if they match it returns true +func (r *route) match(topic string) bool { + return r.topic == topic || routeIncludesTopic(r.topic, topic) +} + +type router struct { + sync.RWMutex + routes *list.List + defaultHandler MessageHandler + messages chan *packets.PublishPacket + stop chan bool +} + +// newRouter returns a new instance of a Router and channel which can be used to tell the Router +// to stop +func newRouter() (*router, chan bool) { + router := &router{routes: list.New(), messages: make(chan *packets.PublishPacket), stop: make(chan bool)} + stop := router.stop + return router, stop +} + +// addRoute takes a topic string and MessageHandler callback. It looks in the current list of +// routes to see if there is already a matching Route. If there is it replaces the current +// callback with the new one. If not it add a new entry to the list of Routes. +func (r *router) addRoute(topic string, callback MessageHandler) { + r.Lock() + defer r.Unlock() + for e := r.routes.Front(); e != nil; e = e.Next() { + if e.Value.(*route).topic == topic { + r := e.Value.(*route) + r.callback = callback + return + } + } + r.routes.PushBack(&route{topic: topic, callback: callback}) +} + +// deleteRoute takes a route string, looks for a matching Route in the list of Routes. If +// found it removes the Route from the list. +func (r *router) deleteRoute(topic string) { + r.Lock() + defer r.Unlock() + for e := r.routes.Front(); e != nil; e = e.Next() { + if e.Value.(*route).topic == topic { + r.routes.Remove(e) + return + } + } +} + +// setDefaultHandler assigns a default callback that will be called if no matching Route +// is found for an incoming Publish. +func (r *router) setDefaultHandler(handler MessageHandler) { + r.Lock() + defer r.Unlock() + r.defaultHandler = handler +} + +// matchAndDispatch takes a channel of Message pointers as input and starts a go routine that +// takes messages off the channel, matches them against the internal route list and calls the +// associated callback (or the defaultHandler, if one exists and no other route matched). If +// anything is sent down the stop channel the function will end. 
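// For illustration, the wildcard semantics applied by the match and
// routeIncludesTopic helpers above (and relied on by matchAndDispatch below)
// work out as follows: "+" matches exactly one topic level, "#" matches all
// remaining levels, including none.
//
//	routeIncludesTopic("sensors/+/temp", "sensors/room1/temp") // true
//	routeIncludesTopic("sensors/#", "sensors/room1/temp")      // true
//	routeIncludesTopic("sensors/#", "sensors")                 // true ("#" also matches the parent level)
//	routeIncludesTopic("sensors/room1", "sensors/room2")       // false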
+func (r *router) matchAndDispatch(messages <-chan *packets.PublishPacket, order bool, client *client) { + go func() { + for { + select { + case message := <-messages: + sent := false + r.RLock() + m := messageFromPublish(message, client.ackFunc(message)) + handlers := []MessageHandler{} + for e := r.routes.Front(); e != nil; e = e.Next() { + if e.Value.(*route).match(message.TopicName) { + if order { + handlers = append(handlers, e.Value.(*route).callback) + } else { + hd := e.Value.(*route).callback + go func() { + hd(client, m) + m.Ack() + }() + } + sent = true + } + } + if !sent && r.defaultHandler != nil { + if order { + handlers = append(handlers, r.defaultHandler) + } else { + go func() { + r.defaultHandler(client, m) + m.Ack() + }() + } + } + r.RUnlock() + for _, handler := range handlers { + func() { + handler(client, m) + m.Ack() + }() + } + case <-r.stop: + return + } + } + }() +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/store.go b/vendor/github.com/eclipse/paho.mqtt.golang/store.go new file mode 100644 index 000000000000..24a76b7df3ca --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/store.go @@ -0,0 +1,136 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "fmt" + "strconv" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +const ( + inboundPrefix = "i." + outboundPrefix = "o." +) + +// Store is an interface which can be used to provide implementations +// for message persistence. +// Because we may have to store distinct messages with the same +// message ID, we need a unique key for each message. This is +// possible by prepending "i." or "o." to each message id +type Store interface { + Open() + Put(key string, message packets.ControlPacket) + Get(key string) packets.ControlPacket + All() []string + Del(key string) + Close() + Reset() +} + +// A key MUST have the form "X.[messageid]" +// where X is 'i' or 'o' +func mIDFromKey(key string) uint16 { + s := key[2:] + i, err := strconv.Atoi(s) + chkerr(err) + return uint16(i) +} + +// Return true if key prefix is outbound +func isKeyOutbound(key string) bool { + return key[:2] == outboundPrefix +} + +// Return true if key prefix is inbound +func isKeyInbound(key string) bool { + return key[:2] == inboundPrefix +} + +// Return a string of the form "i.[id]" +func inboundKeyFromMID(id uint16) string { + return fmt.Sprintf("%s%d", inboundPrefix, id) +} + +// Return a string of the form "o.[id]" +func outboundKeyFromMID(id uint16) string { + return fmt.Sprintf("%s%d", outboundPrefix, id) +} + +// govern which outgoing messages are persisted +func persistOutbound(s Store, m packets.ControlPacket) { + switch m.Details().Qos { + case 0: + switch m.(type) { + case *packets.PubackPacket, *packets.PubcompPacket: + // Sending puback. delete matching publish + // from ibound + s.Del(inboundKeyFromMID(m.Details().MessageID)) + } + case 1: + switch m.(type) { + case *packets.PublishPacket, *packets.PubrelPacket, *packets.SubscribePacket, *packets.UnsubscribePacket: + // Sending publish. 
store in obound + // until puback received + s.Put(outboundKeyFromMID(m.Details().MessageID), m) + default: + ERROR.Println(STR, "Asked to persist an invalid message type") + } + case 2: + switch m.(type) { + case *packets.PublishPacket: + // Sending publish. store in obound + // until pubrel received + s.Put(outboundKeyFromMID(m.Details().MessageID), m) + default: + ERROR.Println(STR, "Asked to persist an invalid message type") + } + } +} + +// govern which incoming messages are persisted +func persistInbound(s Store, m packets.ControlPacket) { + switch m.Details().Qos { + case 0: + switch m.(type) { + case *packets.PubackPacket, *packets.SubackPacket, *packets.UnsubackPacket, *packets.PubcompPacket: + // Received a puback. delete matching publish + // from obound + s.Del(outboundKeyFromMID(m.Details().MessageID)) + case *packets.PublishPacket, *packets.PubrecPacket, *packets.PingrespPacket, *packets.ConnackPacket: + default: + ERROR.Println(STR, "Asked to persist an invalid messages type") + } + case 1: + switch m.(type) { + case *packets.PublishPacket, *packets.PubrelPacket: + // Received a publish. store it in ibound + // until puback sent + s.Put(inboundKeyFromMID(m.Details().MessageID), m) + default: + ERROR.Println(STR, "Asked to persist an invalid messages type") + } + case 2: + switch m.(type) { + case *packets.PublishPacket: + // Received a publish. store it in ibound + // until pubrel received + s.Put(inboundKeyFromMID(m.Details().MessageID), m) + default: + ERROR.Println(STR, "Asked to persist an invalid messages type") + } + } +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/token.go b/vendor/github.com/eclipse/paho.mqtt.golang/token.go new file mode 100644 index 000000000000..6085546ff6c6 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/token.go @@ -0,0 +1,183 @@ +/* + * Copyright (c) 2014 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Allan Stockdill-Mander + */ + +package mqtt + +import ( + "sync" + "time" + + "github.com/eclipse/paho.mqtt.golang/packets" +) + +// PacketAndToken is a struct that contains both a ControlPacket and a +// Token. This struct is passed via channels between the client interface +// code and the underlying code responsible for sending and receiving +// MQTT messages. +type PacketAndToken struct { + p packets.ControlPacket + t tokenCompletor +} + +// Token defines the interface for the tokens used to indicate when +// actions have completed. +type Token interface { + Wait() bool + WaitTimeout(time.Duration) bool + Error() error +} + +type TokenErrorSetter interface { + setError(error) +} + +type tokenCompletor interface { + Token + TokenErrorSetter + flowComplete() +} + +type baseToken struct { + m sync.RWMutex + complete chan struct{} + err error +} + +// Wait will wait indefinitely for the Token to complete, ie the Publish +// to be sent and confirmed receipt from the broker +func (b *baseToken) Wait() bool { + <-b.complete + return true +} + +// WaitTimeout takes a time.Duration to wait for the flow associated with the +// Token to complete, returns true if it returned before the timeout or +// returns false if the timeout occurred. 
In the case of a timeout the Token +// does not have an error set in case the caller wishes to wait again +func (b *baseToken) WaitTimeout(d time.Duration) bool { + timer := time.NewTimer(d) + select { + case <-b.complete: + if !timer.Stop() { + <-timer.C + } + return true + case <-timer.C: + } + + return false +} + +func (b *baseToken) flowComplete() { + select { + case <-b.complete: + default: + close(b.complete) + } +} + +func (b *baseToken) Error() error { + b.m.RLock() + defer b.m.RUnlock() + return b.err +} + +func (b *baseToken) setError(e error) { + b.m.Lock() + b.err = e + b.flowComplete() + b.m.Unlock() +} + +func newToken(tType byte) tokenCompletor { + switch tType { + case packets.Connect: + return &ConnectToken{baseToken: baseToken{complete: make(chan struct{})}} + case packets.Subscribe: + return &SubscribeToken{baseToken: baseToken{complete: make(chan struct{})}, subResult: make(map[string]byte)} + case packets.Publish: + return &PublishToken{baseToken: baseToken{complete: make(chan struct{})}} + case packets.Unsubscribe: + return &UnsubscribeToken{baseToken: baseToken{complete: make(chan struct{})}} + case packets.Disconnect: + return &DisconnectToken{baseToken: baseToken{complete: make(chan struct{})}} + } + return nil +} + +// ConnectToken is an extension of Token containing the extra fields +// required to provide information about calls to Connect() +type ConnectToken struct { + baseToken + returnCode byte + sessionPresent bool +} + +// ReturnCode returns the acknowledgement code in the connack sent +// in response to a Connect() +func (c *ConnectToken) ReturnCode() byte { + c.m.RLock() + defer c.m.RUnlock() + return c.returnCode +} + +// SessionPresent returns a bool representing the value of the +// session present field in the connack sent in response to a Connect() +func (c *ConnectToken) SessionPresent() bool { + c.m.RLock() + defer c.m.RUnlock() + return c.sessionPresent +} + +// PublishToken is an extension of Token containing the extra fields +// required to provide information about calls to Publish() +type PublishToken struct { + baseToken + messageID uint16 +} + +// MessageID returns the MQTT message ID that was assigned to the +// Publish packet when it was sent to the broker +func (p *PublishToken) MessageID() uint16 { + return p.messageID +} + +// SubscribeToken is an extension of Token containing the extra fields +// required to provide information about calls to Subscribe() +type SubscribeToken struct { + baseToken + subs []string + subResult map[string]byte + messageID uint16 +} + +// Result returns a map of topics that were subscribed to along with +// the matching return code from the broker. This is either the Qos +// value of the subscription or an error code. +func (s *SubscribeToken) Result() map[string]byte { + s.m.RLock() + defer s.m.RUnlock() + return s.subResult +} + +// UnsubscribeToken is an extension of Token containing the extra fields +// required to provide information about calls to Unsubscribe() +type UnsubscribeToken struct { + baseToken + messageID uint16 +} + +// DisconnectToken is an extension of Token containing the extra fields +// required to provide information about calls to Disconnect() +type DisconnectToken struct { + baseToken +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/topic.go b/vendor/github.com/eclipse/paho.mqtt.golang/topic.go new file mode 100644 index 000000000000..01b536d73c5f --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/topic.go @@ -0,0 +1,86 @@ +/* + * Copyright (c) 2014 IBM Corp. 
+ * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +import ( + "errors" + "strings" +) + +//ErrInvalidQos is the error returned when an packet is to be sent +//with an invalid Qos value +var ErrInvalidQos = errors.New("Invalid QoS") + +//ErrInvalidTopicEmptyString is the error returned when a topic string +//is passed in that is 0 length +var ErrInvalidTopicEmptyString = errors.New("Invalid Topic; empty string") + +//ErrInvalidTopicMultilevel is the error returned when a topic string +//is passed in that has the multi level wildcard in any position but +//the last +var ErrInvalidTopicMultilevel = errors.New("Invalid Topic; multi-level wildcard must be last level") + +// Topic Names and Topic Filters +// The MQTT v3.1.1 spec clarifies a number of ambiguities with regard +// to the validity of Topic strings. +// - A Topic must be between 1 and 65535 bytes. +// - A Topic is case sensitive. +// - A Topic may contain whitespace. +// - A Topic containing a leading forward slash is different than a Topic without. +// - A Topic may be "/" (two levels, both empty string). +// - A Topic must be UTF-8 encoded. +// - A Topic may contain any number of levels. +// - A Topic may contain an empty level (two forward slashes in a row). +// - A TopicName may not contain a wildcard. +// - A TopicFilter may only have a # (multi-level) wildcard as the last level. +// - A TopicFilter may contain any number of + (single-level) wildcards. +// - A TopicFilter with a # will match the absence of a level +// Example: a subscription to "foo/#" will match messages published to "foo". + +func validateSubscribeMap(subs map[string]byte) ([]string, []byte, error) { + if len(subs) == 0 { + return nil, nil, errors.New("Invalid subscription; subscribe map must not be empty") + } + + var topics []string + var qoss []byte + for topic, qos := range subs { + if err := validateTopicAndQos(topic, qos); err != nil { + return nil, nil, err + } + topics = append(topics, topic) + qoss = append(qoss, qos) + } + + return topics, qoss, nil +} + +func validateTopicAndQos(topic string, qos byte) error { + if len(topic) == 0 { + return ErrInvalidTopicEmptyString + } + + levels := strings.Split(topic, "/") + for i, level := range levels { + if level == "#" && i != len(levels)-1 { + return ErrInvalidTopicMultilevel + } + } + + if qos > 2 { + return ErrInvalidQos + } + return nil +} diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/trace.go b/vendor/github.com/eclipse/paho.mqtt.golang/trace.go new file mode 100644 index 000000000000..195c8173dcf0 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/trace.go @@ -0,0 +1,40 @@ +/* + * Copyright (c) 2013 IBM Corp. + * + * All rights reserved. This program and the accompanying materials + * are made available under the terms of the Eclipse Public License v1.0 + * which accompanies this distribution, and is available at + * http://www.eclipse.org/legal/epl-v10.html + * + * Contributors: + * Seth Hoenig + * Allan Stockdill-Mander + * Mike Robertson + */ + +package mqtt + +type ( + // Logger interface allows implementations to provide to this package any + // object that implements the methods defined in it. 
+ Logger interface { + Println(v ...interface{}) + Printf(format string, v ...interface{}) + } + + // NOOPLogger implements the logger that does not perform any operation + // by default. This allows us to efficiently discard the unwanted messages. + NOOPLogger struct{} +) + +func (NOOPLogger) Println(v ...interface{}) {} +func (NOOPLogger) Printf(format string, v ...interface{}) {} + +// Internal levels of library output that are initialised to not print +// anything but can be overridden by programmer +var ( + ERROR Logger = NOOPLogger{} + CRITICAL Logger = NOOPLogger{} + WARN Logger = NOOPLogger{} + DEBUG Logger = NOOPLogger{} +) diff --git a/vendor/github.com/eclipse/paho.mqtt.golang/websocket.go b/vendor/github.com/eclipse/paho.mqtt.golang/websocket.go new file mode 100644 index 000000000000..5e8511e22417 --- /dev/null +++ b/vendor/github.com/eclipse/paho.mqtt.golang/websocket.go @@ -0,0 +1,95 @@ +package mqtt + +import ( + "crypto/tls" + "io" + "net" + "net/http" + "sync" + "time" + + "github.com/gorilla/websocket" +) + +// NewWebsocket returns a new websocket and returns a net.Conn compatible interface using the gorilla/websocket package +func NewWebsocket(host string, tlsc *tls.Config, timeout time.Duration, requestHeader http.Header) (net.Conn, error) { + if timeout == 0 { + timeout = 10 * time.Second + } + + dialer := &websocket.Dialer{ + Proxy: http.ProxyFromEnvironment, + HandshakeTimeout: timeout, + EnableCompression: false, + TLSClientConfig: tlsc, + Subprotocols: []string{"mqtt"}, + } + ws, _, err := dialer.Dial(host, requestHeader) + + if err != nil { + return nil, err + } + + wrapper := &websocketConnector{ + Conn: ws, + } + return wrapper, err +} + +// websocketConnector is a websocket wrapper so it satisfies the net.Conn interface so it is a +// drop in replacement of the golang.org/x/net/websocket package. +// Implementation guide taken from https://github.com/gorilla/websocket/issues/282 +type websocketConnector struct { + *websocket.Conn + r io.Reader + rio sync.Mutex + wio sync.Mutex +} + +// SetDeadline sets both the read and write deadlines +func (c *websocketConnector) SetDeadline(t time.Time) error { + if err := c.SetReadDeadline(t); err != nil { + return err + } + err := c.SetWriteDeadline(t) + return err +} + +// Write writes data to the websocket +func (c *websocketConnector) Write(p []byte) (int, error) { + c.wio.Lock() + defer c.wio.Unlock() + + err := c.WriteMessage(websocket.BinaryMessage, p) + if err != nil { + return 0, err + } + return len(p), nil +} + +// Read reads the current websocket frame +func (c *websocketConnector) Read(p []byte) (int, error) { + c.rio.Lock() + defer c.rio.Unlock() + for { + if c.r == nil { + // Advance to next message. + var err error + _, c.r, err = c.NextReader() + if err != nil { + return 0, err + } + } + n, err := c.r.Read(p) + if err == io.EOF { + // At end of message. + c.r = nil + if n > 0 { + return n, nil + } + // No data read, continue to next message. + continue + } + return n, err + } +} diff --git a/vendor/github.com/godror/godror/CHANGELOG.md b/vendor/github.com/godror/godror/CHANGELOG.md new file mode 100644 index 000000000000..72e335fffd17 --- /dev/null +++ b/vendor/github.com/godror/godror/CHANGELOG.md @@ -0,0 +1,18 @@ +# Changelog +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) +and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). 
+ +## [Unreleased] + +## [0.10.0] +### Added +- onInit parameter in the connection url (and OnInit in ConnectionParams) +- export Drv to be able to register new driver wrapping *godror.Drv. +- ContextWithUserPassw requires a connClass argument, too. + +## [0.9.2] +### Changed +- Make Data embed dpiData, not *dpiData + diff --git a/vendor/gopkg.in/goracle.v2/LICENSE.md b/vendor/github.com/godror/godror/LICENSE.md similarity index 95% rename from vendor/gopkg.in/goracle.v2/LICENSE.md rename to vendor/github.com/godror/godror/LICENSE.md index f634025b67ca..7ca5c6c439fe 100644 --- a/vendor/gopkg.in/goracle.v2/LICENSE.md +++ b/vendor/github.com/godror/godror/LICENSE.md @@ -1,19 +1,12 @@ -goracle +godror ======= -Copyright 2017 Tamás Gulácsi +Copyright 2017, 2018, 2019 Tamás Gulácsi -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. +You can use either the + Apache License, Version 2.0 (APL-2.0), +or the + Universal Permissive License, Version 1.0 (UPL-1.0). ODPI-C ====== diff --git a/vendor/gopkg.in/goracle.v2/NOTES.md b/vendor/github.com/godror/godror/NOTES.md similarity index 100% rename from vendor/gopkg.in/goracle.v2/NOTES.md rename to vendor/github.com/godror/godror/NOTES.md diff --git a/vendor/gopkg.in/goracle.v2/README.md b/vendor/github.com/godror/godror/README.md similarity index 72% rename from vendor/gopkg.in/goracle.v2/README.md rename to vendor/github.com/godror/godror/README.md index 8673ff1a7897..976970869198 100644 --- a/vendor/gopkg.in/goracle.v2/README.md +++ b/vendor/github.com/godror/godror/README.md @@ -1,25 +1,29 @@ -[![Build Status](https://travis-ci.org/go-goracle/goracle.svg?branch=v2)](https://travis-ci.org/go-goracle/goracle) -[![GoDoc](https://godoc.org/gopkg.in/goracle.v2?status.svg)](http://godoc.org/gopkg.in/goracle.v2) -[![Go Report Card](https://goreportcard.com/badge/github.com/go-goracle/goracle)](https://goreportcard.com/report/github.com/go-goracle/goracle) -[![codecov](https://codecov.io/gh/go-goracle/goracle/branch/master/graph/badge.svg)](https://codecov.io/gh/go-goracle/goracle) +[![Travis](https://travis-ci.org/godror/godror.svg?branch=v2)](https://travis-ci.org/godror/godror) +[![CircleCI](https://circleci.com/gh/godror/godror.svg?style=svg)](https://circleci.com/gh/godror/godror) +[![GoDoc](https://godoc.org/github.com/godror/godror?status.svg)](http://godoc.org/github.com/godror/godror) +[![Go Report Card](https://goreportcard.com/badge/github.com/godror/godror)](https://goreportcard.com/report/github.com/godror/godror) +[![codecov](https://codecov.io/gh/godror/godror/branch/master/graph/badge.svg)](https://codecov.io/gh/godror/godror) -# goracle +# Go DRiver for ORacle -[goracle](driver.go) is a package which is a +[godror](https://godoc.org/pkg/github.com/godror/godror) is a package which is a [database/sql/driver.Driver](http://golang.org/pkg/database/sql/driver/#Driver) for connecting to Oracle DB, using Anthony Tuininga's excellent OCI wrapper, [ODPI-C](https://www.github.com/oracle/odpi). At least Go 1.11 is required! 
+Although an Oracle client is NOT required for compiling, it is at run time. +One can download it from https://www.oracle.com/database/technologies/instant-client/downloads.html + ## Connect -In `sql.Open("goracle", connString)`, you can provide the classic "user/passw@service_name" +In `sql.Open("godror", connString)`, you can provide the classic "user/passw@service_name" as connString, or an URL like "oracle://user:passw@service_name". You can provide all possible options with `ConnectionParams`. Watch out the `ConnectionParams.String()` does redact the password -(for security, to avoid logging it - see https://github.com/go-goracle/goracle/issues/79). +(for security, to avoid logging it - see https://github.com/godror/godror/issues/79). So use `ConnectionParams.StringWithPassword()`. More advanced configurations can be set with a connection string such as: @@ -29,22 +33,22 @@ A configuration like this is how you would add functionality such as load balanc described in parenthesis above can also be set in the `SID` field of `ConnectionParams`. For other possible connection strings, see https://oracle.github.io/node-oracledb/doc/api.html#connectionstrings -and https://docs.oracle.com/en/database/oracle/oracle-database/12.2/netag/configuring-naming-methods.html#GUID-B0437826-43C1-49EC-A94D-B650B6A4A6EE . +and https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=GUID-B0437826-43C1-49EC-A94D-B650B6A4A6EE . TL;DR; the short form is `username@[//]host[:port][/service_name][:server][/instance_name]`, the long form is `(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=host)(PORT=port)) (CONNECT_DATA= (SERVICE_NAME=service_name) (SERVER=server) (INSTANCE_NAME=instance_name)))`. To use heterogeneous pools, set `heterogeneousPool=1` and provide the username/password through -`goracle.ContextWithUserPassw`. +`godror.ContextWithUserPassw`. ## Rationale With Go 1.9, driver-specific things are not needed, everything (I need) can be achieved with the standard _database/sql_ library. Even calling stored procedures with OUT parameters, or sending/retrieving PL/SQL array types - just give a -`goracle.PlSQLArrays` Option within the parameters of `Exec`! +`godror.PlSQLArrays` Option within the parameters of `Exec`! -The array size of the returned PL/SQL arrays can be set with `goracle.ArraySize(2000)` +The array size of the returned PL/SQL arrays can be set with `godror.ArraySize(2000)` * the default is 1024. @@ -56,7 +60,7 @@ Correctness and simplicity is more important than speed, but the underlying ODPI helps a lot with the lower levels, so the performance is not bad. Queries are prefetched (256 rows by default, can be changed by adding a -`goracle.FetchRowCount(1000)` argument to the call of Query), +`godror.FetchRowCount(1000)` argument to the call of Query), but you can speed up INSERT/UPDATE/DELETE statements by providing all the subsequent parameters at once, by putting each param's subsequent elements in a separate slice: @@ -72,28 +76,28 @@ do ## Logging -Goracle uses `github.com/go-kit/kit/log`'s concept of a `Log` function. -Either set `goracle.Log` to a logging function globally, +godror uses `github.com/go-kit/kit/log`'s concept of a `Log` function. 
+Either set `godror.Log` to a logging function globally, or (better) set the logger in the Context of ExecContext or QueryContext: - db.QueryContext(goracle.ContextWithLog(ctx, logger.Log), qry) + db.QueryContext(godror.ContextWithLog(ctx, logger.Log), qry) ## Tracing To set ClientIdentifier, ClientInfo, Module, Action and DbOp on the session, -to be seen in the Database by the Admin, set goracle.TraceTag on the Context: +to be seen in the Database by the Admin, set godror.TraceTag on the Context: - db.QueryContext(goracle.ContextWithTraceTag(goracle.TraceTag{ + db.QueryContext(godror.ContextWithTraceTag(godror.TraceTag{ Module: "processing", Action: "first", }), qry) ## Extras -To use the goracle-specific functions, you'll need a `*goracle.conn`. -That's what `goracle.DriverConn` is for! +To use the godror-specific functions, you'll need a `*godror.conn`. +That's what `godror.DriverConn` is for! See [z_qrcn_test.go](./z_qrcn_test.go) for using that to reach -[NewSubscription](https://godoc.org/gopkg.in/goracle.v2#Subscription). +[NewSubscription](https://godoc.org/github.com/godror/godror#Subscription). ### Calling stored procedures Use `ExecContext` and mark each OUT parameter with `sql.Out`. @@ -101,7 +105,7 @@ Use `ExecContext` and mark each OUT parameter with `sql.Out`. ### Using cursors returned by stored procedures Use `ExecContext` and an `interface{}` or a `database/sql/driver.Rows` as the `sql.Out` destination, then either use the `driver.Rows` interface, -or transform it into a regular `*sql.Rows` with `goracle.WrapRows`, +or transform it into a regular `*sql.Rows` with `godror.WrapRows`, or (since Go 1.12) just Scan into `*sql.Rows`. For examples, see Anthony Tuininga's @@ -124,7 +128,7 @@ Just use plain old `string` ! ### NUMBER -`NUMBER`s are transferred as `goracle.Number` (which is a `string`) to Go under the hood. +`NUMBER`s are transferred as `godror.Number` (which is a `string`) to Go under the hood. This ensures that we don't lose any precision (Oracle's NUMBER has 38 decimal digits), and `sql.Scan` will hide this and `Scan` into your `int64`, `float64` or `string`, as you wish. @@ -158,11 +162,11 @@ See #121. Just - go get gopkg.in/goracle.v2 + go get github.com/godror/godror Or if you prefer `dep` - dep ensure -add gopkg.in/goracle.v2 + dep ensure -add github.com/godror/godror and you're ready to go! @@ -173,14 +177,14 @@ Note that Windows may need some newer gcc (mingw-w64 with gcc 7.2.0). Just as with other Go projects, you don't want to change the import paths, but you can hack on the library in place, just set up different remotes: - cd $GOPATH.src/gopkg.in/goracle.v2 - git remote add upstream https://github.com/go-goracle/goracle.git + cd $GOPATH.src/github.com/godror/godror + git remote add upstream https://github.com/godror/godror.git git fetch upstream git checkout -b master upstream/master git checkout -f master git pull upstream master - git remote add fork git@github.com:mygithubacc/goracle + git remote add fork git@github.com:mygithubacc/godror git checkout -b newfeature upstream/master Change, experiment as you wish, then @@ -188,7 +192,7 @@ Change, experiment as you wish, then git commit -m 'my great changes' *.go git push fork newfeature -and you're ready to send a GitHub Pull Request from `github.com/mygithubacc/goracle`, `newfeature` branch. +and you're ready to send a GitHub Pull Request from `github.com/mygithubacc/godror`, `newfeature` branch. 
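Putting the Connect, Tracing and NUMBER notes above together, here is a minimal end-to-end sketch. The connection string and the Module/Action values are placeholders, and (as with `ContextWithLog`) the parent context is assumed to be passed as the first argument to `ContextWithTraceTag`:

	package main

	import (
		"context"
		"database/sql"
		"log"

		"github.com/godror/godror"
	)

	func main() {
		// The classic "user/passw@service_name" form or an
		// "oracle://user:passw@service_name" URL both work here.
		db, err := sql.Open("godror", "user/passw@service_name")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Tag the session so the work can be attributed in the database.
		ctx := godror.ContextWithTraceTag(context.Background(),
			godror.TraceTag{Module: "processing", Action: "first"})

		// NUMBER travels as godror.Number under the hood; Scan converts it.
		var one int64
		if err := db.QueryRowContext(ctx, "SELECT 1 FROM DUAL").Scan(&one); err != nil {
			log.Fatal(err)
		}
		log.Println(one)
	}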
### pre-commit diff --git a/vendor/gopkg.in/goracle.v2/conn.go b/vendor/github.com/godror/godror/conn.go similarity index 68% rename from vendor/gopkg.in/goracle.v2/conn.go rename to vendor/github.com/godror/godror/conn.go index e5a9bb6c3a2d..2f5db308c2dc 100644 --- a/vendor/gopkg.in/goracle.v2/conn.go +++ b/vendor/github.com/godror/godror/conn.go @@ -1,19 +1,9 @@ // Copyright 2019 Tamás Gulácsi // // -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror /* #include @@ -22,16 +12,18 @@ package goracle import "C" import ( + "bytes" "context" "database/sql" "database/sql/driver" "io" + "strconv" "strings" "sync" "time" "unsafe" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" ) const getConnection = "--GET_CONNECTION--" @@ -40,7 +32,7 @@ const wrapResultset = "--WRAP_RESULTSET--" // The maximum capacity is limited to (2^32 / sizeof(dpiData))-1 to remain compatible // with 32-bit platforms. The size of a `C.dpiData` is 32 Byte on a 64-bit system, `C.dpiSubscrMessageTable` is 40 bytes. // So this is 2^25. -// See https://github.com/go-goracle/goracle/issues/73#issuecomment-401281714 +// See https://github.com/go-godror/godror/issues/73#issuecomment-401281714 const maxArraySize = (1<<30)/C.sizeof_dpiSubscrMessageTable - 1 var _ = driver.Conn((*conn)(nil)) @@ -48,19 +40,22 @@ var _ = driver.ConnBeginTx((*conn)(nil)) var _ = driver.ConnPrepareContext((*conn)(nil)) var _ = driver.Pinger((*conn)(nil)) +//var _ = driver.ExecerContext((*conn)(nil)) + type conn struct { - connParams ConnectionParams currentTT TraceTag + connParams ConnectionParams Client, Server VersionInfo tranParams tranParams - sync.RWMutex - currentUser string - *drv - dpiConn *C.dpiConn - inTransaction bool - newSession bool - timeZone *time.Location - tzOffSecs int + mu sync.RWMutex + currentUser string + drv *drv + dpiConn *C.dpiConn + timeZone *time.Location + objTypes map[string]ObjectType + tzOffSecs int + inTransaction bool + newSession bool } func (c *conn) getError() error { @@ -71,17 +66,19 @@ func (c *conn) getError() error { } func (c *conn) Break() error { - c.RLock() - defer c.RUnlock() + c.mu.RLock() + defer c.mu.RUnlock() if Log != nil { Log("msg", "Break", "dpiConn", c.dpiConn) } if C.dpiConn_breakExecution(c.dpiConn) == C.DPI_FAILURE { - return maybeBadConn(errors.Wrap(c.getError(), "Break")) + return maybeBadConn(errors.Errorf("Break: %w", c.getError()), c) } return nil } +func (c *conn) ClientVersion() (VersionInfo, error) { return c.drv.ClientVersion() } + // Ping checks the connection's state. 
// // WARNING: as database/sql calls database/sql/driver.Open when it needs @@ -95,14 +92,14 @@ func (c *conn) Ping(ctx context.Context) error { if err := c.ensureContextUser(ctx); err != nil { return err } - c.RLock() - defer c.RUnlock() + c.mu.RLock() + defer c.mu.RUnlock() done := make(chan error, 1) go func() { defer close(done) failure := C.dpiConn_ping(c.dpiConn) == C.DPI_FAILURE if failure { - done <- maybeBadConn(errors.Wrap(c.getError(), "Ping")) + done <- maybeBadConn(errors.Errorf("Ping: %w", c.getError()), c) return } done <- nil @@ -118,6 +115,7 @@ func (c *conn) Ping(ctx context.Context) error { return err default: _ = c.Break() + c.close(true) return driver.ErrBadConn } } @@ -140,30 +138,51 @@ func (c *conn) Close() error { if c == nil { return nil } - c.Lock() - defer c.Unlock() + c.mu.Lock() + defer c.mu.Unlock() + return c.close(true) +} + +func (c *conn) close(doNotReuse bool) error { + if c == nil { + return nil + } c.setTraceTag(TraceTag{}) - dpiConn := c.dpiConn - c.dpiConn = nil + dpiConn, objTypes := c.dpiConn, c.objTypes + c.dpiConn, c.objTypes = nil, nil if dpiConn == nil { return nil } + defer C.dpiConn_release(dpiConn) + + seen := make(map[string]struct{}, len(objTypes)) + for _, o := range objTypes { + nm := o.FullName() + if _, seen := seen[nm]; seen { + continue + } + seen[nm] = struct{}{} + o.close(doNotReuse) + } + if !doNotReuse { + return nil + } + // Just to be sure, break anything in progress. done := make(chan struct{}) go func() { select { case <-done: case <-time.After(10 * time.Second): + if Log != nil { + Log("msg", "TIMEOUT releasing connection") + } C.dpiConn_breakExecution(dpiConn) } }() - rc := C.dpiConn_release(dpiConn) + C.dpiConn_close(dpiConn, C.DPI_MODE_CONN_CLOSE_DROP, nil, 0) close(done) - var err error - if rc == C.DPI_FAILURE { - err = maybeBadConn(errors.Wrap(c.getError(), "Close")) - } - return err + return nil } // Begin starts and returns a new transaction. 
@@ -210,7 +229,7 @@ func (c *conn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, e case sql.LevelSerializable: todo.Level = trLS default: - return nil, errors.Errorf("%v isolation level is not supported", sql.IsolationLevel(opts.Isolation)) + return nil, errors.Errorf("isolation level is not supported: %s", sql.IsolationLevel(opts.Isolation)) } if todo != c.tranParams { @@ -229,25 +248,25 @@ func (c *conn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, e stmt.Close() } if err != nil { - return nil, maybeBadConn(errors.Wrap(err, qry)) + return nil, maybeBadConn(errors.Errorf("%s: %w", qry, err), c) } } c.tranParams = todo } - c.RLock() + c.mu.RLock() inTran := c.inTransaction - c.RUnlock() + c.mu.RUnlock() if inTran { return nil, errors.New("already in transaction") } - c.Lock() + c.mu.Lock() c.inTransaction = true - c.Unlock() + c.mu.Unlock() if tt, ok := ctx.Value(traceTagCtxKey).(TraceTag); ok { - c.Lock() + c.mu.Lock() c.setTraceTag(tt) - c.Unlock() + c.mu.Unlock() } return c, nil } @@ -267,9 +286,9 @@ func (c *conn) PrepareContext(ctx context.Context, query string) (driver.Stmt, e return nil, err } if tt, ok := ctx.Value(traceTagCtxKey).(TraceTag); ok { - c.Lock() + c.mu.Lock() c.setTraceTag(tt) - c.Unlock() + c.mu.Unlock() } if query == getConnection { if Log != nil { @@ -282,13 +301,13 @@ func (c *conn) PrepareContext(ctx context.Context, query string) (driver.Stmt, e defer func() { C.free(unsafe.Pointer(cSQL)) }() - c.RLock() - defer c.RUnlock() + c.mu.RLock() + defer c.mu.RUnlock() var dpiStmt *C.dpiStmt if C.dpiConn_prepareStmt(c.dpiConn, 0, cSQL, C.uint32_t(len(query)), nil, 0, (**C.dpiStmt)(unsafe.Pointer(&dpiStmt)), ) == C.DPI_FAILURE { - return nil, maybeBadConn(errors.Wrap(c.getError(), "Prepare: "+query)) + return nil, maybeBadConn(errors.Errorf("Prepare: %s: %w", query, c.getError()), c) } return &statement{conn: c, dpiStmt: dpiStmt, query: query}, nil } @@ -299,7 +318,7 @@ func (c *conn) Rollback() error { return c.endTran(false) } func (c *conn) endTran(isCommit bool) error { - c.Lock() + c.mu.Lock() c.inTransaction = false c.tranParams = tranParams{} @@ -307,15 +326,15 @@ func (c *conn) endTran(isCommit bool) error { //msg := "Commit" if isCommit { if C.dpiConn_commit(c.dpiConn) == C.DPI_FAILURE { - err = maybeBadConn(errors.Wrap(c.getError(), "Commit")) + err = maybeBadConn(errors.Errorf("Commit: %w", c.getError()), c) } } else { //msg = "Rollback" if C.dpiConn_rollback(c.dpiConn) == C.DPI_FAILURE { - err = maybeBadConn(errors.Wrap(c.getError(), "Rollback")) + err = maybeBadConn(errors.Errorf("Rollback: %w", c.getError()), c) } } - c.Unlock() + c.mu.Unlock() //fmt.Printf("%p.%s\n", c, msg) return err } @@ -350,7 +369,7 @@ func (c *conn) newVar(vi varInfo) (*C.dpiVar, []C.dpiData, error) { isArray, vi.ObjectType, &v, &dataArr, ) == C.DPI_FAILURE { - return nil, nil, errors.Wrapf(c.getError(), "newVar(typ=%d, natTyp=%d, sliceLen=%d, bufSize=%d)", vi.Typ, vi.NatTyp, vi.SliceLen, vi.BufSize) + return nil, nil, errors.Errorf("newVar(typ=%d, natTyp=%d, sliceLen=%d, bufSize=%d): %w", vi.Typ, vi.NatTyp, vi.SliceLen, vi.BufSize, c.getError()) } // https://github.com/golang/go/wiki/cgo#Turning_C_arrays_into_Go_slices /* @@ -368,71 +387,184 @@ func (c *conn) ServerVersion() (VersionInfo, error) { return c.Server, nil } -func (c *conn) init() error { +func (c *conn) init(onInit []string) error { if c.Client.Version == 0 { var err error if c.Client, err = c.drv.ClientVersion(); err != nil { return err } } + + if err := c.initVersionTZ(); err != nil 
|| len(onInit) == 0 || !c.newSession { + return err + } + if Log != nil { + Log("newSession", c.newSession, "onInit", onInit) + } + ctx, cancel := context.WithTimeout(context.Background(), 3*time.Duration(len(onInit))*time.Second) + defer cancel() + if Log != nil { + Log("doOnInit", len(onInit)) + } + for _, qry := range onInit { + if Log != nil { + Log("onInit", qry) + } + st, err := c.PrepareContext(ctx, qry) + if err != nil { + return errors.Errorf("%s: %w", qry, err) + } + _, err = st.Exec(nil) //lint:ignore SA1019 - it's hard to use ExecContext here + st.Close() + if err != nil { + return errors.Errorf("%s: %w", qry, err) + } + } + return nil + } + +func (c *conn) initVersionTZ() error { if c.Server.Version == 0 { var v C.dpiVersionInfo var release *C.char var releaseLen C.uint32_t if C.dpiConn_getServerVersion(c.dpiConn, &release, &releaseLen, &v) == C.DPI_FAILURE { - return errors.Wrap(c.getError(), "getServerVersion") + if c.connParams.IsPrelim { + return nil + } + return errors.Errorf("getServerVersion: %w", c.getError()) } c.Server.set(&v) - c.Server.ServerRelease = C.GoStringN(release, C.int(releaseLen)) + c.Server.ServerRelease = string(bytes.Replace( + ((*[maxArraySize]byte)(unsafe.Pointer(release)))[:releaseLen:releaseLen], + []byte{'\n'}, []byte{';', ' '}, -1)) } - if c.timeZone != nil { + if c.timeZone != nil && (c.timeZone != time.Local || c.tzOffSecs != 0) { return nil } c.timeZone = time.Local - _, c.tzOffSecs = (time.Time{}).In(c.timeZone).Zone() + _, c.tzOffSecs = time.Now().In(c.timeZone).Zone() + if Log != nil { + Log("tz", c.timeZone, "offSecs", c.tzOffSecs) + } - const qry = "SELECT DBTIMEZONE FROM DUAL" + // DBTIMEZONE is useless, false, and misdirecting! + // https://stackoverflow.com/questions/52531137/sysdate-and-dbtimezone-different-in-oracle-database + const qry = "SELECT DBTIMEZONE, LTRIM(REGEXP_SUBSTR(TO_CHAR(SYSTIMESTAMP), ' [^ ]+$')) FROM DUAL" ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second) defer cancel() st, err := c.PrepareContext(ctx, qry) if err != nil { - return errors.Wrap(err, qry) + return errors.Errorf("%s: %w", qry, err) } defer st.Close() - rows, err := st.Query([]driver.Value{}) + rows, err := st.Query(nil) //lint:ignore SA1019 - it's hard to use QueryContext here if err != nil { - return errors.Wrap(err, qry) + if Log != nil { + Log("qry", qry, "error", err) + } + return nil } defer rows.Close() - var timezone string - vals := []driver.Value{timezone} - for { - if err = rows.Next(vals); err != nil { - if err == io.EOF { - break + var dbTZ, timezone string + vals := []driver.Value{dbTZ, timezone} + if err = rows.Next(vals); err != nil && err != io.EOF { + return errors.Errorf("%s: %w", qry, err) + } + dbTZ = vals[0].(string) + timezone = vals[1].(string) + + tz, off, err := calculateTZ(dbTZ, timezone) + if Log != nil { + Log("timezone", timezone, "tz", tz, "offSecs", off) + } + if err != nil || tz == nil { + return err + } + c.timeZone, c.tzOffSecs = tz, off + + return nil +} + +func calculateTZ(dbTZ, timezone string) (*time.Location, int, error) { + if Log != nil { + Log("dbTZ", dbTZ, "timezone", timezone) + } + var tz *time.Location + now := time.Now() + _, localOff := time.Now().Local().Zone() + off := localOff + var ok bool + var err error + if dbTZ != "" && strings.Contains(dbTZ, "/") { + tz, err = time.LoadLocation(dbTZ) + if ok = err == nil; ok { + if tz == time.Local { + return tz, off, nil } - return errors.Wrap(err, qry) + _, off = now.In(tz).Zone() + } else if Log != nil { + Log("LoadLocation", dbTZ, 
"error", err) } - timezone = strings.TrimSpace(vals[0].(string)) + } + if !ok { if timezone != "" { - break + if off, err = parseTZ(timezone); err != nil { + return tz, off, errors.Errorf("%s: %w", timezone, err) + } + } else if off, err = parseTZ(dbTZ); err != nil { + return tz, off, errors.Errorf("%s: %w", dbTZ, err) } } - if timezone == "" { - return errors.New("empty DBTIMEZONE") + // This is dangerous, but I just cannot get whether the DB time zone + // setting has DST or not - DBTIMEZONE returns just a fixed offset. + if off != localOff && tz == nil { + tz = time.FixedZone(timezone, off) + } + return tz, off, nil +} +func parseTZ(s string) (int, error) { + s = strings.TrimSpace(s) + if s == "" { + return 0, io.EOF + } + if s == "Z" || s == "UTC" { + return 0, nil + } + var tz int + var ok bool + if i := strings.IndexByte(s, ':'); i >= 0 { + if i64, err := strconv.ParseInt(s[i+1:], 10, 6); err != nil { + return tz, errors.Errorf("%s: %w", s, err) + } else { + tz = int(i64 * 60) + } + s = s[:i] + ok = true } - if off, err := parseTZ(timezone); err != nil { - return errors.Wrap(err, timezone) + if !ok { + if i := strings.IndexByte(s, '/'); i >= 0 { + targetLoc, err := time.LoadLocation(s) + if err != nil { + return tz, errors.Errorf("%s: %w", s, err) + } + + _, localOffset := time.Now().In(targetLoc).Zone() + + tz = localOffset + return tz, nil + } + } + if i64, err := strconv.ParseInt(s, 10, 5); err != nil { + return tz, errors.Errorf("%s: %w", s, err) } else { - // This is dangerous, but I just cannot get whether the DB time zone - // setting has DST or not - DBTIMEZONE returns just a fixed offset. - if _, localOff := time.Now().Local().Zone(); localOff != off { - c.tzOffSecs = off - c.timeZone = time.FixedZone(timezone, c.tzOffSecs) + if i64 < 0 { + tz = -tz } + tz += int(i64 * 3600) } - return nil + return tz, nil } func (c *conn) setCallTimeout(ctx context.Context) { @@ -447,21 +579,34 @@ func (c *conn) setCallTimeout(ctx context.Context) { C.dpiConn_setCallTimeout(c.dpiConn, ms) } -func maybeBadConn(err error) error { +// maybeBadConn checks whether the error is because of a bad connection, and returns driver.ErrBadConn, +// as database/sql requires. +// +// Also in this case, iff c != nil, closes it. +func maybeBadConn(err error, c *conn) error { if err == nil { return nil } - root := errors.Cause(err) - if root == driver.ErrBadConn { - return root + cl := func() {} + if c != nil { + cl = func() { + if Log != nil { + Log("msg", "maybeBadConn close", "conn", c) + } + c.close(true) + } + } + if errors.Is(err, driver.ErrBadConn) { + cl() + return driver.ErrBadConn } - if cd, ok := root.(interface { - Code() int - }); ok { + var cd interface{ Code() int } + if errors.As(err, &cd) { // Yes, this is copied from rana/ora, but I've put it there, so it's mine. 
@tgulacsi switch cd.Code() { case 0: if strings.Contains(err.Error(), " DPI-1002: ") { + cl() return driver.ErrBadConn } // cases by experience: @@ -500,6 +645,7 @@ func maybeBadConn(err error) error { 27146, // post/wait initialization failed 28511, // lost RPC connection 56600: // an illegal OCI function call was issued + cl() return driver.ErrBadConn } } @@ -542,7 +688,7 @@ func (c *conn) setTraceTag(tt TraceTag) error { C.free(unsafe.Pointer(s)) } if rc == C.DPI_FAILURE { - return errors.Wrap(c.getError(), nm) + return errors.Errorf("%s: %w", nm, c.getError()) } } c.currentTT = tt @@ -576,35 +722,26 @@ const userpwCtxKey = ctxKey("userPw") // ContextWithUserPassw returns a context with the specified user and password, // to be used with heterogeneous pools. -func ContextWithUserPassw(ctx context.Context, user, password string) context.Context { - return context.WithValue(ctx, userpwCtxKey, [2]string{user, password}) +func ContextWithUserPassw(ctx context.Context, user, password, connClass string) context.Context { + return context.WithValue(ctx, userpwCtxKey, [3]string{user, password, connClass}) } func (c *conn) ensureContextUser(ctx context.Context) error { - if !c.connParams.HeterogeneousPool { + if !(c.connParams.HeterogeneousPool || c.connParams.StandaloneConnection) { return nil } - - var up [2]string - var ok bool - if up, ok = ctx.Value(userpwCtxKey).([2]string); !ok || up[0] == c.currentUser { + up, ok := ctx.Value(userpwCtxKey).([3]string) + if !ok || up[0] == c.currentUser { return nil } if c.dpiConn != nil { - if err := c.Close(); err != nil { + if err := c.close(false); err != nil { return driver.ErrBadConn } } - c.Lock() - defer c.Unlock() - - if err := c.acquireConn(up[0], up[1]); err != nil { - return err - } - - return c.init() + return c.acquireConn(up[0], up[1], up[2]) } // StartupMode for the database. @@ -625,7 +762,7 @@ const ( // See https://docs.oracle.com/en/database/oracle/oracle-database/18/lnoci/database-startup-and-shutdown.html#GUID-44B24F65-8C24-4DF3-8FBF-B896A4D6F3F3 func (c *conn) Startup(mode StartupMode) error { if C.dpiConn_startupDatabase(c.dpiConn, C.dpiStartupMode(mode)) == C.DPI_FAILURE { - return errors.Wrapf(c.getError(), "startup(%v)", mode) + return errors.Errorf("startup(%v): %w", mode, c.getError()) } return nil } @@ -654,7 +791,12 @@ const ( // See https://docs.oracle.com/en/database/oracle/oracle-database/18/lnoci/database-startup-and-shutdown.html#GUID-44B24F65-8C24-4DF3-8FBF-B896A4D6F3F3 func (c *conn) Shutdown(mode ShutdownMode) error { if C.dpiConn_shutdownDatabase(c.dpiConn, C.dpiShutdownMode(mode)) == C.DPI_FAILURE { - return errors.Wrapf(c.getError(), "shutdown(%v)", mode) + return errors.Errorf("shutdown(%v): %w", mode, c.getError()) } return nil } + +// Timezone returns the connection's timezone. 
+func (c *conn) Timezone() *time.Location { + return c.timeZone +} diff --git a/vendor/github.com/godror/godror/contrib/free.db/cwallet.sso b/vendor/github.com/godror/godror/contrib/free.db/cwallet.sso new file mode 100644 index 000000000000..c9eeffc2ffed Binary files /dev/null and b/vendor/github.com/godror/godror/contrib/free.db/cwallet.sso differ diff --git a/vendor/github.com/godror/godror/contrib/free.db/env.sh b/vendor/github.com/godror/godror/contrib/free.db/env.sh new file mode 100644 index 000000000000..02c93a50d9a2 --- /dev/null +++ b/vendor/github.com/godror/godror/contrib/free.db/env.sh @@ -0,0 +1,5 @@ +export TNS_ADMIN="$(dirname "$(find "$PWD" -type f -name tnsnames.ora | sort -r | head -n1)")" +export GODROR_TEST_USERNAME=test +export GODROR_TEST_PASSWORD=r97oUPimsmTOIcBaeeDF +export GODROR_TEST_DB=free_high +export GODROR_TEST_STANDALONE=1 diff --git a/vendor/github.com/godror/godror/contrib/free.db/ewallet.p12 b/vendor/github.com/godror/godror/contrib/free.db/ewallet.p12 new file mode 100644 index 000000000000..b7a8ff0450d1 Binary files /dev/null and b/vendor/github.com/godror/godror/contrib/free.db/ewallet.p12 differ diff --git a/vendor/github.com/godror/godror/contrib/free.db/keystore.jks b/vendor/github.com/godror/godror/contrib/free.db/keystore.jks new file mode 100644 index 000000000000..1923759914e9 Binary files /dev/null and b/vendor/github.com/godror/godror/contrib/free.db/keystore.jks differ diff --git a/vendor/github.com/godror/godror/contrib/free.db/ojdbc.properties b/vendor/github.com/godror/godror/contrib/free.db/ojdbc.properties new file mode 100644 index 000000000000..9fc350ca4256 --- /dev/null +++ b/vendor/github.com/godror/godror/contrib/free.db/ojdbc.properties @@ -0,0 +1 @@ +oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN}))) \ No newline at end of file diff --git a/vendor/github.com/godror/godror/contrib/free.db/reset.sql b/vendor/github.com/godror/godror/contrib/free.db/reset.sql new file mode 100644 index 000000000000..c86661434558 --- /dev/null +++ b/vendor/github.com/godror/godror/contrib/free.db/reset.sql @@ -0,0 +1,15 @@ +WHENEVER SQLERROR CONTINUE + +DROP USER test CASCADE; + +WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK + +CREATE USER test IDENTIFIED BY r97oUPimsmTOIcBaeeDF; +ALTER USER test QUOTA 100m ON data; +GRANT create session, create table, create type, create sequence, create synonym, create procedure, change notification TO test; +GRANT EXECUTE ON SYS.DBMS_AQ TO test; +GRANT EXECUTE ON SYS.DBMS_AQADM TO test; + +GRANT create user, drop user, alter user TO test; +GRANT connect TO test WITH admin option; +GRANT create session TO test WITH admin option; diff --git a/vendor/github.com/godror/godror/contrib/free.db/sqlnet.ora b/vendor/github.com/godror/godror/contrib/free.db/sqlnet.ora new file mode 100644 index 000000000000..260f677fcde1 --- /dev/null +++ b/vendor/github.com/godror/godror/contrib/free.db/sqlnet.ora @@ -0,0 +1,2 @@ +WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin"))) +SSL_SERVER_DN_MATCH=yes \ No newline at end of file diff --git a/vendor/github.com/godror/godror/contrib/free.db/tnsnames.ora b/vendor/github.com/godror/godror/contrib/free.db/tnsnames.ora new file mode 100644 index 000000000000..1b10c58ee0a3 --- /dev/null +++ b/vendor/github.com/godror/godror/contrib/free.db/tnsnames.ora @@ -0,0 +1,6 @@ +free_high = (description= 
(retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=zo0svnycldsrgbw_db201911301540_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US"))) + +free_low = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=zo0svnycldsrgbw_db201911301540_low.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US"))) + +free_medium = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=zo0svnycldsrgbw_db201911301540_medium.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US"))) + diff --git a/vendor/github.com/godror/godror/contrib/free.db/truststore.jks b/vendor/github.com/godror/godror/contrib/free.db/truststore.jks new file mode 100644 index 000000000000..55faa40478b5 Binary files /dev/null and b/vendor/github.com/godror/godror/contrib/free.db/truststore.jks differ diff --git a/vendor/github.com/godror/godror/contrib/oracle-instant-client/Dockerfile b/vendor/github.com/godror/godror/contrib/oracle-instant-client/Dockerfile new file mode 100644 index 000000000000..45af9abff4b1 --- /dev/null +++ b/vendor/github.com/godror/godror/contrib/oracle-instant-client/Dockerfile @@ -0,0 +1,14 @@ +FROM debian:testing + +LABEL maintainer="t.gulacsi@unosoft.hu" + +ENV DEBIAN_FRONTEND noninteractive + +RUN apt-get update && apt-get install -y libaio1 wget unzip + +RUN wget -O /tmp/instantclient-basic-linux-x64.zip https://download.oracle.com/otn_software/linux/instantclient/193000/instantclient-basic-linux.x64-19.3.0.0.0dbru.zip + +RUN mkdir -p /usr/lib/oracle && unzip /tmp/instantclient-basic-linux-x64.zip -d /usr/lib/oracle + +RUN ldconfig -v /usr/lib/oracle/instantclient_19_3 +RUN ldd /usr/lib/oracle/instantclient_19_3/libclntsh.so diff --git a/vendor/gopkg.in/goracle.v2/contrib/oracle-instant-client/README.md b/vendor/github.com/godror/godror/contrib/oracle-instant-client/README.md similarity index 100% rename from vendor/gopkg.in/goracle.v2/contrib/oracle-instant-client/README.md rename to vendor/github.com/godror/godror/contrib/oracle-instant-client/README.md diff --git a/vendor/gopkg.in/goracle.v2/contrib/oracle-xe-18c/Dockerfile b/vendor/github.com/godror/godror/contrib/oracle-xe-18c/Dockerfile similarity index 100% rename from vendor/gopkg.in/goracle.v2/contrib/oracle-xe-18c/Dockerfile rename to vendor/github.com/godror/godror/contrib/oracle-xe-18c/Dockerfile diff --git a/vendor/gopkg.in/goracle.v2/contrib/oracle-xe-18c/README.md b/vendor/github.com/godror/godror/contrib/oracle-xe-18c/README.md similarity index 100% rename from vendor/gopkg.in/goracle.v2/contrib/oracle-xe-18c/README.md rename to vendor/github.com/godror/godror/contrib/oracle-xe-18c/README.md diff --git a/vendor/github.com/godror/godror/data.go b/vendor/github.com/godror/godror/data.go new file mode 100644 index 000000000000..c42f3ec189a8 --- /dev/null +++ b/vendor/github.com/godror/godror/data.go @@ -0,0 +1,468 @@ +// Copyright 2017 Tamás Gulácsi +// +// +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 + +package 
godror + +/* +#include +#include "dpiImpl.h" +*/ +import "C" +import ( + "database/sql" + "database/sql/driver" + "fmt" + "reflect" + "time" + "unsafe" + + errors "golang.org/x/xerrors" +) + +// Data holds the data to/from Oracle. +type Data struct { + ObjectType ObjectType + dpiData C.dpiData + implicitObj bool + NativeTypeNum C.dpiNativeTypeNum +} + +var ErrNotSupported = errors.New("not supported") + +// NewData creates a new Data structure for the given type, populated with the given type. +func NewData(v interface{}) (*Data, error) { + if v == nil { + return nil, errors.Errorf("%s: %w", "nil type", ErrNotSupported) + } + data := Data{dpiData: C.dpiData{isNull: 1}} + return &data, data.Set(v) +} + +// IsNull returns whether the data is null. +func (d *Data) IsNull() bool { + // Use of C.dpiData_getIsNull(&d.dpiData) would be safer, + // but ODPI-C 3.1.4 just returns dpiData->isNull, so do the same + // without calling CGO. + return d == nil || d.dpiData.isNull == 1 +} + +// SetNull sets the value of the data to be the null value. +func (d *Data) SetNull() { + if !d.IsNull() { + // Maybe C.dpiData_setNull(&d.dpiData) would be safer, but as we don't use C.dpiData_getIsNull, + // and those functions (at least in ODPI-C 3.1.4) just operate on data->isNull directly, + // don't use CGO if possible. + d.dpiData.isNull = 1 + } +} + +// GetBool returns the bool data. +func (d *Data) GetBool() bool { + return !d.IsNull() && C.dpiData_getBool(&d.dpiData) == 1 +} + +// SetBool sets the data as bool. +func (d *Data) SetBool(b bool) { + var i C.int + if b { + i = 1 + } + C.dpiData_setBool(&d.dpiData, i) +} + +// GetBytes returns the []byte from the data. +func (d *Data) GetBytes() []byte { + if d.IsNull() { + return nil + } + b := C.dpiData_getBytes(&d.dpiData) + if b.ptr == nil || b.length == 0 { + return nil + } + return ((*[32767]byte)(unsafe.Pointer(b.ptr)))[:b.length:b.length] +} + +// SetBytes set the data as []byte. +func (d *Data) SetBytes(b []byte) { + if b == nil { + d.dpiData.isNull = 1 + return + } + C.dpiData_setBytes(&d.dpiData, (*C.char)(unsafe.Pointer(&b[0])), C.uint32_t(len(b))) +} + +// GetFloat32 gets float32 from the data. +func (d *Data) GetFloat32() float32 { + if d.IsNull() { + return 0 + } + return float32(C.dpiData_getFloat(&d.dpiData)) +} + +// SetFloat32 sets the data as float32. +func (d *Data) SetFloat32(f float32) { + C.dpiData_setFloat(&d.dpiData, C.float(f)) +} + +// GetFloat64 gets float64 from the data. +func (d *Data) GetFloat64() float64 { + //fmt.Println("GetFloat64", d.IsNull(), d) + if d.IsNull() { + return 0 + } + return float64(C.dpiData_getDouble(&d.dpiData)) +} + +// SetFloat64 sets the data as float64. +func (d *Data) SetFloat64(f float64) { + C.dpiData_setDouble(&d.dpiData, C.double(f)) +} + +// GetInt64 gets int64 from the data. +func (d *Data) GetInt64() int64 { + if d.IsNull() { + return 0 + } + return int64(C.dpiData_getInt64(&d.dpiData)) +} + +// SetInt64 sets the data as int64. +func (d *Data) SetInt64(i int64) { + C.dpiData_setInt64(&d.dpiData, C.int64_t(i)) +} + +// GetIntervalDS gets duration as interval date-seconds from data. +func (d *Data) GetIntervalDS() time.Duration { + if d.IsNull() { + return 0 + } + ds := C.dpiData_getIntervalDS(&d.dpiData) + return time.Duration(ds.days)*24*time.Hour + + time.Duration(ds.hours)*time.Hour + + time.Duration(ds.minutes)*time.Minute + + time.Duration(ds.seconds)*time.Second + + time.Duration(ds.fseconds) +} + +// SetIntervalDS sets the duration as interval date-seconds to data. 
+func (d *Data) SetIntervalDS(dur time.Duration) { + C.dpiData_setIntervalDS(&d.dpiData, + C.int32_t(int64(dur.Hours())/24), + C.int32_t(int64(dur.Hours())%24), C.int32_t(dur.Minutes()), C.int32_t(dur.Seconds()), + C.int32_t(dur.Nanoseconds()), + ) +} + +// GetIntervalYM gets IntervalYM from the data. +func (d *Data) GetIntervalYM() IntervalYM { + if d.IsNull() { + return IntervalYM{} + } + ym := C.dpiData_getIntervalYM(&d.dpiData) + return IntervalYM{Years: int(ym.years), Months: int(ym.months)} +} + +// SetIntervalYM sets IntervalYM to the data. +func (d *Data) SetIntervalYM(ym IntervalYM) { + C.dpiData_setIntervalYM(&d.dpiData, C.int32_t(ym.Years), C.int32_t(ym.Months)) +} + +// GetLob gets data as Lob. +func (d *Data) GetLob() *Lob { + if d.IsNull() { + return nil + } + return &Lob{Reader: &dpiLobReader{dpiLob: C.dpiData_getLOB(&d.dpiData)}} +} + +// SetLob sets Lob to the data. +func (d *Data) SetLob(lob *DirectLob) { + C.dpiData_setLOB(&d.dpiData, lob.dpiLob) +} + +// GetObject gets Object from data. +// +// As with all Objects, you MUST call Close on it when not needed anymore! +func (d *Data) GetObject() *Object { + if d == nil { + panic("null") + } + if d.IsNull() { + return nil + } + + o := C.dpiData_getObject(&d.dpiData) + if o == nil { + return nil + } + if !d.implicitObj { + if C.dpiObject_addRef(o) == C.DPI_FAILURE { + panic(d.ObjectType.getError()) + } + } + obj := &Object{dpiObject: o, ObjectType: d.ObjectType} + obj.init() + return obj +} + +// SetObject sets Object to data. +func (d *Data) SetObject(o *Object) { + C.dpiData_setObject(&d.dpiData, o.dpiObject) +} + +// GetStmt gets Stmt from data. +func (d *Data) GetStmt() driver.Stmt { + if d.IsNull() { + return nil + } + return &statement{dpiStmt: C.dpiData_getStmt(&d.dpiData)} +} + +// SetStmt sets Stmt to data. +func (d *Data) SetStmt(s *statement) { + C.dpiData_setStmt(&d.dpiData, s.dpiStmt) +} + +// GetTime gets Time from data. +func (d *Data) GetTime() time.Time { + if d.IsNull() { + return time.Time{} + } + ts := C.dpiData_getTimestamp(&d.dpiData) + return time.Date( + int(ts.year), time.Month(ts.month), int(ts.day), + int(ts.hour), int(ts.minute), int(ts.second), int(ts.fsecond), + timeZoneFor(ts.tzHourOffset, ts.tzMinuteOffset), + ) + +} + +// SetTime sets Time to data. +func (d *Data) SetTime(t time.Time) { + _, z := t.Zone() + C.dpiData_setTimestamp(&d.dpiData, + C.int16_t(t.Year()), C.uint8_t(t.Month()), C.uint8_t(t.Day()), + C.uint8_t(t.Hour()), C.uint8_t(t.Minute()), C.uint8_t(t.Second()), C.uint32_t(t.Nanosecond()), + C.int8_t(z/3600), C.int8_t((z%3600)/60), + ) +} + +// GetUint64 gets data as uint64. +func (d *Data) GetUint64() uint64 { + if d.IsNull() { + return 0 + } + return uint64(C.dpiData_getUint64(&d.dpiData)) +} + +// SetUint64 sets data to uint64. +func (d *Data) SetUint64(u uint64) { + C.dpiData_setUint64(&d.dpiData, C.uint64_t(u)) +} + +// IntervalYM holds Years and Months as interval. +type IntervalYM struct { + Years, Months int +} + +// Get returns the contents of Data. 
+func (d *Data) Get() interface{} { + switch d.NativeTypeNum { + case C.DPI_NATIVE_TYPE_BOOLEAN: + return d.GetBool() + case C.DPI_NATIVE_TYPE_BYTES: + return d.GetBytes() + case C.DPI_NATIVE_TYPE_DOUBLE: + return d.GetFloat64() + case C.DPI_NATIVE_TYPE_FLOAT: + return d.GetFloat32() + case C.DPI_NATIVE_TYPE_INT64: + return d.GetInt64() + case C.DPI_NATIVE_TYPE_INTERVAL_DS: + return d.GetIntervalDS() + case C.DPI_NATIVE_TYPE_INTERVAL_YM: + return d.GetIntervalYM() + case C.DPI_NATIVE_TYPE_LOB: + return d.GetLob() + case C.DPI_NATIVE_TYPE_OBJECT: + return d.GetObject() + case C.DPI_NATIVE_TYPE_STMT: + return d.GetStmt() + case C.DPI_NATIVE_TYPE_TIMESTAMP: + return d.GetTime() + case C.DPI_NATIVE_TYPE_UINT64: + return d.GetUint64() + default: + panic(fmt.Sprintf("unknown NativeTypeNum=%d", d.NativeTypeNum)) + } +} + +// Set the data. +func (d *Data) Set(v interface{}) error { + if v == nil { + return errors.Errorf("%s: %w", "nil type", ErrNotSupported) + } + switch x := v.(type) { + case int32: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_INT64 + d.SetInt64(int64(x)) + case int64: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_INT64 + d.SetInt64(x) + case uint64: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_UINT64 + d.SetUint64(x) + case float32: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_FLOAT + d.SetFloat32(x) + case float64: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_DOUBLE + d.SetFloat64(x) + case string: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_BYTES + d.SetBytes([]byte(x)) + case []byte: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_BYTES + d.SetBytes(x) + case time.Time: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_TIMESTAMP + d.SetTime(x) + case time.Duration: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_INTERVAL_DS + d.SetIntervalDS(x) + case IntervalYM: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_INTERVAL_YM + d.SetIntervalYM(x) + case *DirectLob: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_LOB + d.SetLob(x) + case *Object: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_OBJECT + d.ObjectType = x.ObjectType + d.SetObject(x) + //case *stmt: + //d.NativeTypeNum = C.DPI_NATIVE_TYPE_STMT + //d.SetStmt(x) + case bool: + d.NativeTypeNum = C.DPI_NATIVE_TYPE_BOOLEAN + d.SetBool(x) + //case rowid: + //d.NativeTypeNum = C.DPI_NATIVE_TYPE_ROWID + //d.SetRowid(x) + default: + return errors.Errorf("%T: %w", ErrNotSupported, v) + } + return nil +} + +// IsObject returns whether the data contains an Object or not. +func (d *Data) IsObject() bool { + return d.NativeTypeNum == C.DPI_NATIVE_TYPE_OBJECT +} + +// NewData returns Data for input parameters on Object/ObjectCollection. 
+func (c *conn) NewData(baseType interface{}, sliceLen, bufSize int) ([]*Data, error) { + if c == nil || c.dpiConn == nil { + return nil, errors.New("connection is nil") + } + + vi, err := newVarInfo(baseType, sliceLen, bufSize) + if err != nil { + return nil, err + } + + v, dpiData, err := c.newVar(vi) + if err != nil { + return nil, err + } + defer C.dpiVar_release(v) + + data := make([]*Data, sliceLen) + for i := 0; i < sliceLen; i++ { + data[i] = &Data{dpiData: dpiData[i], NativeTypeNum: vi.NatTyp} + } + + return data, nil +} + +func newVarInfo(baseType interface{}, sliceLen, bufSize int) (varInfo, error) { + var vi varInfo + + switch v := baseType.(type) { + case Lob, []Lob: + vi.NatTyp = C.DPI_NATIVE_TYPE_LOB + var isClob bool + switch v := v.(type) { + case Lob: + isClob = v.IsClob + case []Lob: + isClob = len(v) > 0 && v[0].IsClob + } + if isClob { + vi.Typ = C.DPI_ORACLE_TYPE_CLOB + } else { + vi.Typ = C.DPI_ORACLE_TYPE_BLOB + } + case Number, []Number: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NUMBER, C.DPI_NATIVE_TYPE_BYTES + case int, []int, int64, []int64, sql.NullInt64, []sql.NullInt64: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NUMBER, C.DPI_NATIVE_TYPE_INT64 + case int32, []int32: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NATIVE_INT, C.DPI_NATIVE_TYPE_INT64 + case uint, []uint, uint64, []uint64: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NUMBER, C.DPI_NATIVE_TYPE_UINT64 + case uint32, []uint32: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NATIVE_UINT, C.DPI_NATIVE_TYPE_UINT64 + case float32, []float32: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NATIVE_FLOAT, C.DPI_NATIVE_TYPE_FLOAT + case float64, []float64, sql.NullFloat64, []sql.NullFloat64: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_NATIVE_DOUBLE, C.DPI_NATIVE_TYPE_DOUBLE + case bool, []bool: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_BOOLEAN, C.DPI_NATIVE_TYPE_BOOLEAN + case []byte, [][]byte: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_RAW, C.DPI_NATIVE_TYPE_BYTES + switch v := v.(type) { + case []byte: + bufSize = len(v) + case [][]byte: + for _, b := range v { + if n := len(b); n > bufSize { + bufSize = n + } + } + } + case string, []string, nil: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_VARCHAR, C.DPI_NATIVE_TYPE_BYTES + bufSize = 32767 + case time.Time, []time.Time: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_DATE, C.DPI_NATIVE_TYPE_TIMESTAMP + case userType, []userType: + vi.Typ, vi.NatTyp = C.DPI_ORACLE_TYPE_OBJECT, C.DPI_NATIVE_TYPE_OBJECT + switch v := v.(type) { + case userType: + vi.ObjectType = v.ObjectRef().ObjectType.dpiObjectType + case []userType: + if len(v) > 0 { + vi.ObjectType = v[0].ObjectRef().ObjectType.dpiObjectType + } + } + default: + return vi, errors.Errorf("unknown type %T", v) + } + + vi.IsPLSArray = reflect.TypeOf(baseType).Kind() == reflect.Slice + vi.SliceLen = sliceLen + vi.BufSize = bufSize + + return vi, nil +} + +func (d *Data) reset() { + d.NativeTypeNum = 0 + d.ObjectType = ObjectType{} + d.implicitObj = false + d.SetBytes(nil) + d.dpiData.isNull = 1 +} diff --git a/vendor/github.com/godror/godror/drv.go b/vendor/github.com/godror/godror/drv.go new file mode 100644 index 000000000000..5699088871e2 --- /dev/null +++ b/vendor/github.com/godror/godror/drv.go @@ -0,0 +1,963 @@ +// Copyright 2019 Tamás Gulácsi +// +// +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 + +// Package godror is a database/sql/driver for Oracle DB. 
+// +// The connection string for the sql.Open("godror", connString) call can be +// the simple +// login/password@sid [AS SYSDBA|AS SYSOPER] +// +// type (with sid being the sexp returned by tnsping), +// or in the form of +// ora://login:password@sid/? \ +// sysdba=0& \ +// sysoper=0& \ +// poolMinSessions=1& \ +// poolMaxSessions=1000& \ +// poolIncrement=1& \ +// connectionClass=POOLED& \ +// standaloneConnection=0& \ +// enableEvents=0& \ +// heterogeneousPool=0& \ +// prelim=0& \ +// poolWaitTimeout=5m& \ +// poolSessionMaxLifetime=1h& \ +// poolSessionTimeout=30s& \ +// timezone=Local& \ +// newPassword= \ +// onInit=ALTER+SESSION+SET+current_schema%3Dmy_schema +// +// These are the defaults. Many advocate that a static session pool (min=max, incr=0) +// is better, with 1-10 sessions per CPU thread. +// See http://docs.oracle.com/cd/E82638_01/JJUCP/optimizing-real-world-performance.htm#JJUCP-GUID-BC09F045-5D80-4AF5-93F5-FEF0531E0E1D +// You may also use ConnectionParams to configure a connection. +// +// If you specify connectionClass, that'll reuse the same session pool +// without the connectionClass, but will specify it on each session acquire. +// Thus you can cluster the session pool with classes, or use POOLED for DRCP. +// +// For what can be used as "sid", see https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/configuring-naming-methods.html#GUID-E5358DEA-D619-4B7B-A799-3D2F802500F1 +package godror + +/* +#cgo CFLAGS: -I./odpi/include -I./odpi/src -I./odpi/embed + +#include + +#include "dpi.c" +*/ +import "C" + +import ( + "context" + "database/sql" + "database/sql/driver" + "encoding/base64" + "fmt" + "hash/fnv" + "io" + "net/url" + "strconv" + "strings" + "sync" + "time" + "unsafe" + + errors "golang.org/x/xerrors" +) + +const ( + // DefaultFetchRowCount is the number of prefetched rows by default (if not changed through FetchRowCount statement option). + DefaultFetchRowCount = 1 << 8 + + // DefaultArraySize is the length of the maximum PL/SQL array by default (if not changed through ArraySize statement option). + DefaultArraySize = 1 << 10 +) + +const ( + // DpiMajorVersion is the wanted major version of the underlying ODPI-C library. + DpiMajorVersion = C.DPI_MAJOR_VERSION + // DpiMinorVersion is the wanted minor version of the underlying ODPI-C library. + DpiMinorVersion = C.DPI_MINOR_VERSION + // DpiPatchLevel is the patch level version of the underlying ODPI-C library + DpiPatchLevel = C.DPI_PATCH_LEVEL + // DpiVersionNumber is the underlying ODPI-C version as one number (Major * 10000 + Minor * 100 + Patch) + DpiVersionNumber = C.DPI_VERSION_NUMBER + + // DriverName is set on the connection to be seen in the DB + // + // It cannot be longer than 30 bytes ! + DriverName = "godror : " + Version + + // DefaultPoolMinSessions specifies the default value for minSessions for pool creation. + DefaultPoolMinSessions = 1 + // DefaultPoolMaxSessions specifies the default value for maxSessions for pool creation. + DefaultPoolMaxSessions = 1000 + // DefaultPoolIncrement specifies the default value for increment for pool creation. + DefaultPoolIncrement = 1 + // DefaultConnectionClass is the default connectionClass + DefaultConnectionClass = "GODROR" + // NoConnectionPoolingConnectionClass is a special connection class name to indicate no connection pooling. 
+ // It is the same as setting standaloneConnection=1 + NoConnectionPoolingConnectionClass = "NO-CONNECTION-POOLING" + // DefaultSessionTimeout is the seconds before idle pool sessions get evicted + DefaultSessionTimeout = 5 * time.Minute + // DefaultWaitTimeout is the milliseconds to wait for a session to become available + DefaultWaitTimeout = 30 * time.Second + // DefaultMaxLifeTime is the maximum time in seconds till a pooled session may exist + DefaultMaxLifeTime = 1 * time.Hour +) + +// Log function. By default, it's nil, and thus logs nothing. +// If you want to change this, change it to a github.com/go-kit/kit/log.Swapper.Log +// or analog to be race-free. +var Log func(...interface{}) error + +var defaultDrv = &drv{} + +func init() { + sql.Register("godror", defaultDrv) +} + +var _ = driver.Driver((*drv)(nil)) + +type drv struct { + mu sync.Mutex + dpiContext *C.dpiContext + pools map[string]*connPool + clientVersion VersionInfo +} + +type connPool struct { + dpiPool *C.dpiPool + timeZone *time.Location + tzOffSecs int + serverVersion VersionInfo +} + +func (d *drv) init() error { + d.mu.Lock() + defer d.mu.Unlock() + if d.pools == nil { + d.pools = make(map[string]*connPool) + } + if d.dpiContext != nil { + return nil + } + var errInfo C.dpiErrorInfo + var dpiCtx *C.dpiContext + if C.dpiContext_create(C.uint(DpiMajorVersion), C.uint(DpiMinorVersion), + (**C.dpiContext)(unsafe.Pointer(&dpiCtx)), &errInfo, + ) == C.DPI_FAILURE { + return fromErrorInfo(errInfo) + } + d.dpiContext = dpiCtx + + var v C.dpiVersionInfo + if C.dpiContext_getClientVersion(d.dpiContext, &v) == C.DPI_FAILURE { + return errors.Errorf("%s: %w", "getClientVersion", d.getError()) + } + d.clientVersion.set(&v) + return nil +} + +// Open returns a new connection to the database. +// The name is a string in a driver-specific format. 
+func (d *drv) Open(connString string) (driver.Conn, error) { + P, err := ParseConnString(connString) + if err != nil { + return nil, err + } + + conn, err := d.openConn(P) + return conn, maybeBadConn(err, conn) +} + +func (d *drv) ClientVersion() (VersionInfo, error) { + return d.clientVersion, nil +} + +var cUTF8, cDriverName = C.CString("AL32UTF8"), C.CString(DriverName) + +func (d *drv) openConn(P ConnectionParams) (*conn, error) { + if err := d.init(); err != nil { + return nil, err + } + + P.Comb() + c := &conn{drv: d, connParams: P, timeZone: time.Local, Client: d.clientVersion} + connString := P.String() + + if Log != nil { + defer func() { + d.mu.Lock() + Log("pools", d.pools, "conn", P.String(), "drv", fmt.Sprintf("%p", d)) + d.mu.Unlock() + }() + } + + if !(P.IsSysDBA || P.IsSysOper || P.IsSysASM || P.IsPrelim || P.StandaloneConnection) { + d.mu.Lock() + dp := d.pools[connString] + d.mu.Unlock() + if dp != nil { + //Proxy authenticated connections to database will be provided by methods with context + err := dp.acquireConn(c, P) + return c, err + } + } + + extAuth := C.int(b2i(P.Username == "" && P.Password == "")) + var cUserName, cPassword, cNewPassword, cConnClass *C.char + if !(P.Username == "" && P.Password == "") { + cUserName, cPassword = C.CString(P.Username), C.CString(P.Password) + } + var cSid *C.char + if P.SID != "" { + cSid = C.CString(P.SID) + } + defer func() { + if cUserName != nil { + C.free(unsafe.Pointer(cUserName)) + C.free(unsafe.Pointer(cPassword)) + } + if cNewPassword != nil { + C.free(unsafe.Pointer(cNewPassword)) + } + if cSid != nil { + C.free(unsafe.Pointer(cSid)) + } + if cConnClass != nil { + C.free(unsafe.Pointer(cConnClass)) + } + }() + var commonCreateParams C.dpiCommonCreateParams + if C.dpiContext_initCommonCreateParams(d.dpiContext, &commonCreateParams) == C.DPI_FAILURE { + return nil, errors.Errorf("initCommonCreateParams: %w", d.getError()) + } + commonCreateParams.createMode = C.DPI_MODE_CREATE_DEFAULT | C.DPI_MODE_CREATE_THREADED + if P.EnableEvents { + commonCreateParams.createMode |= C.DPI_MODE_CREATE_EVENTS + } + commonCreateParams.encoding = cUTF8 + commonCreateParams.nencoding = cUTF8 + commonCreateParams.driverName = cDriverName + commonCreateParams.driverNameLength = C.uint32_t(len(DriverName)) + + if P.IsSysDBA || P.IsSysOper || P.IsSysASM || P.IsPrelim || P.StandaloneConnection { + // no pool + c.connParams = P + return c, c.acquireConn(P.Username, P.Password, P.ConnClass) + } + var poolCreateParams C.dpiPoolCreateParams + if C.dpiContext_initPoolCreateParams(d.dpiContext, &poolCreateParams) == C.DPI_FAILURE { + return nil, errors.Errorf("initPoolCreateParams: %w", d.getError()) + } + poolCreateParams.minSessions = DefaultPoolMinSessions + if P.MinSessions >= 0 { + poolCreateParams.minSessions = C.uint32_t(P.MinSessions) + } + poolCreateParams.maxSessions = DefaultPoolMaxSessions + if P.MaxSessions > 0 { + poolCreateParams.maxSessions = C.uint32_t(P.MaxSessions) + } + poolCreateParams.sessionIncrement = DefaultPoolIncrement + if P.PoolIncrement > 0 { + poolCreateParams.sessionIncrement = C.uint32_t(P.PoolIncrement) + } + if extAuth == 1 || P.HeterogeneousPool { + poolCreateParams.homogeneous = 0 + } + poolCreateParams.externalAuth = extAuth + poolCreateParams.getMode = C.DPI_MODE_POOL_GET_TIMEDWAIT + poolCreateParams.timeout = C.uint32_t(DefaultSessionTimeout / time.Second) + if P.SessionTimeout > time.Second { + poolCreateParams.timeout = C.uint32_t(P.SessionTimeout / time.Second) // seconds before idle pool sessions get evicted 
+ } + poolCreateParams.waitTimeout = C.uint32_t(DefaultWaitTimeout / time.Millisecond) + if P.WaitTimeout > time.Millisecond { + poolCreateParams.waitTimeout = C.uint32_t(P.WaitTimeout / time.Millisecond) // milliseconds to wait for a session to become available + } + poolCreateParams.maxLifetimeSession = C.uint32_t(DefaultMaxLifeTime / time.Second) + if P.MaxLifeTime > 0 { + poolCreateParams.maxLifetimeSession = C.uint32_t(P.MaxLifeTime / time.Second) // maximum time in seconds till a pooled session may exist + } + + var dp *C.dpiPool + if Log != nil { + Log("C", "dpiPool_create", "username", P.Username, "conn", connString, "sid", P.SID, "common", commonCreateParams, "pool", fmt.Sprintf("%#v", poolCreateParams)) + } + if C.dpiPool_create( + d.dpiContext, + cUserName, C.uint32_t(len(P.Username)), + cPassword, C.uint32_t(len(P.Password)), + cSid, C.uint32_t(len(P.SID)), + &commonCreateParams, + &poolCreateParams, + (**C.dpiPool)(unsafe.Pointer(&dp)), + ) == C.DPI_FAILURE { + return nil, errors.Errorf("params=%s extAuth=%v: %w", P.String(), extAuth, d.getError()) + } + C.dpiPool_setStmtCacheSize(dp, 40) + pool := &connPool{dpiPool: dp} + d.mu.Lock() + d.pools[connString] = pool + d.mu.Unlock() + + return c, pool.acquireConn(c, P) +} + +func (dp *connPool) acquireConn(c *conn, P ConnectionParams) error { + P.Comb() + c.mu.Lock() + c.connParams = P + c.Client, c.Server = c.drv.clientVersion, dp.serverVersion + c.timeZone, c.tzOffSecs = dp.timeZone, dp.tzOffSecs + c.mu.Unlock() + + var connCreateParams C.dpiConnCreateParams + if C.dpiContext_initConnCreateParams(c.drv.dpiContext, &connCreateParams) == C.DPI_FAILURE { + return errors.Errorf("initConnCreateParams: %w", c.drv.getError()) + } + if P.ConnClass != "" { + cConnClass := C.CString(P.ConnClass) + defer C.free(unsafe.Pointer(cConnClass)) + connCreateParams.connectionClass = cConnClass + connCreateParams.connectionClassLength = C.uint32_t(len(P.ConnClass)) + } + dc := C.malloc(C.sizeof_void) + if C.dpiPool_acquireConnection( + dp.dpiPool, + nil, 0, nil, 0, + &connCreateParams, + (**C.dpiConn)(unsafe.Pointer(&dc)), + ) == C.DPI_FAILURE { + C.free(unsafe.Pointer(dc)) + return errors.Errorf("acquirePoolConnection(user=%q, params=%#v): %w", P.Username, connCreateParams, c.getError()) + } + + c.mu.Lock() + c.dpiConn = (*C.dpiConn)(dc) + c.currentUser = P.Username + c.newSession = connCreateParams.outNewSession == 1 + c.mu.Unlock() + err := c.init(P.OnInit) + if err == nil { + c.mu.Lock() + dp.serverVersion = c.Server + dp.timeZone, dp.tzOffSecs = c.timeZone, c.tzOffSecs + c.mu.Unlock() + } + + return err +} + +func (c *conn) acquireConn(user, pass, connClass string) error { + P := c.connParams + if !(P.IsSysDBA || P.IsSysOper || P.IsSysASM || P.IsPrelim || P.StandaloneConnection) { + c.drv.mu.Lock() + pool := c.drv.pools[P.String()] + if Log != nil { + Log("pools", c.drv.pools, "drv", fmt.Sprintf("%p", c.drv)) + } + c.drv.mu.Unlock() + if pool != nil { + P.Username, P.Password, P.ConnClass = user, pass, connClass + return pool.acquireConn(c, P) + } + } + + var connCreateParams C.dpiConnCreateParams + if C.dpiContext_initConnCreateParams(c.drv.dpiContext, &connCreateParams) == C.DPI_FAILURE { + return errors.Errorf("initConnCreateParams: %w", c.drv.getError()) + } + var cUserName, cPassword, cNewPassword, cConnClass, cSid *C.char + defer func() { + if cUserName != nil { + C.free(unsafe.Pointer(cUserName)) + } + if cPassword != nil { + C.free(unsafe.Pointer(cPassword)) + } + if cNewPassword != nil { + C.free(unsafe.Pointer(cNewPassword)) + } + if 
cConnClass != nil { + C.free(unsafe.Pointer(cConnClass)) + } + if cSid != nil { + C.free(unsafe.Pointer(cSid)) + } + }() + if user != "" { + cUserName = C.CString(user) + } + if pass != "" { + cPassword = C.CString(pass) + } + if connClass != "" { + cConnClass = C.CString(connClass) + connCreateParams.connectionClass = cConnClass + connCreateParams.connectionClassLength = C.uint32_t(len(connClass)) + } + var commonCreateParams C.dpiCommonCreateParams + if C.dpiContext_initCommonCreateParams(c.drv.dpiContext, &commonCreateParams) == C.DPI_FAILURE { + return errors.Errorf("initCommonCreateParams: %w", c.drv.getError()) + } + commonCreateParams.createMode = C.DPI_MODE_CREATE_DEFAULT | C.DPI_MODE_CREATE_THREADED + if P.EnableEvents { + commonCreateParams.createMode |= C.DPI_MODE_CREATE_EVENTS + } + commonCreateParams.encoding = cUTF8 + commonCreateParams.nencoding = cUTF8 + commonCreateParams.driverName = cDriverName + commonCreateParams.driverNameLength = C.uint32_t(len(DriverName)) + + if P.SID != "" { + cSid = C.CString(P.SID) + } + connCreateParams.authMode = P.authMode() + extAuth := C.int(b2i(user == "" && pass == "")) + connCreateParams.externalAuth = extAuth + if P.NewPassword != "" { + cNewPassword = C.CString(P.NewPassword) + connCreateParams.newPassword = cNewPassword + connCreateParams.newPasswordLength = C.uint32_t(len(P.NewPassword)) + } + if Log != nil { + Log("C", "dpiConn_create", "params", P.String(), "common", commonCreateParams, "conn", connCreateParams) + } + dc := C.malloc(C.sizeof_void) + if C.dpiConn_create( + c.drv.dpiContext, + cUserName, C.uint32_t(len(user)), + cPassword, C.uint32_t(len(pass)), + cSid, C.uint32_t(len(P.SID)), + &commonCreateParams, + &connCreateParams, + (**C.dpiConn)(unsafe.Pointer(&dc)), + ) == C.DPI_FAILURE { + C.free(unsafe.Pointer(dc)) + return errors.Errorf("username=%q sid=%q params=%+v: %w", user, P.SID, connCreateParams, c.drv.getError()) + } + c.mu.Lock() + c.dpiConn = (*C.dpiConn)(dc) + c.currentUser = user + c.newSession = true + P.Username, P.Password, P.ConnClass = user, pass, connClass + if P.NewPassword != "" { + P.Password, P.NewPassword = P.NewPassword, "" + } + c.connParams = P + c.mu.Unlock() + return c.init(P.OnInit) +} + +// ConnectionParams holds the params for a connection (pool). +// You can use ConnectionParams{...}.StringWithPassword() +// as a connection string in sql.Open. +type ConnectionParams struct { + OnInit []string + Username, Password, SID, ConnClass string + // NewPassword is used iff StandaloneConnection is true! + NewPassword string + MinSessions, MaxSessions, PoolIncrement int + WaitTimeout, MaxLifeTime, SessionTimeout time.Duration + Timezone *time.Location + IsSysDBA, IsSysOper, IsSysASM, IsPrelim bool + HeterogeneousPool bool + StandaloneConnection bool + EnableEvents bool +} + +// String returns the string representation of ConnectionParams. +// The password is replaced with a "SECRET" string! +func (P ConnectionParams) String() string { + return P.string(true, false) +} + +// StringNoClass returns the string representation of ConnectionParams, without class info. +// The password is replaced with a "SECRET" string! +func (P ConnectionParams) StringNoClass() string { + return P.string(false, false) +} + +// StringWithPassword returns the string representation of ConnectionParams (as String() does), +// but does NOT obfuscate the password, just prints it as is. 
+func (P ConnectionParams) StringWithPassword() string { + return P.string(true, true) +} + +func (P ConnectionParams) string(class, withPassword bool) string { + host, path := P.SID, "" + if i := strings.IndexByte(host, '/'); i >= 0 { + host, path = host[:i], host[i:] + } + q := make(url.Values, 32) + s := P.ConnClass + if !class { + s = "" + } + q.Add("connectionClass", s) + + password := P.Password + if withPassword { + q.Add("newPassword", P.NewPassword) + } else { + hsh := fnv.New64() + io.WriteString(hsh, P.Password) + password = "SECRET-" + base64.URLEncoding.EncodeToString(hsh.Sum(nil)) + if P.NewPassword != "" { + hsh.Reset() + io.WriteString(hsh, P.NewPassword) + q.Add("newPassword", "SECRET-"+base64.URLEncoding.EncodeToString(hsh.Sum(nil))) + } + } + s = "" + if P.Timezone != nil { + s = P.Timezone.String() + } + q.Add("timezone", s) + B := func(b bool) string { + if b { + return "1" + } + return "0" + } + q.Add("poolMinSessions", strconv.Itoa(P.MinSessions)) + q.Add("poolMaxSessions", strconv.Itoa(P.MaxSessions)) + q.Add("poolIncrement", strconv.Itoa(P.PoolIncrement)) + q.Add("sysdba", B(P.IsSysDBA)) + q.Add("sysoper", B(P.IsSysOper)) + q.Add("sysasm", B(P.IsSysASM)) + q.Add("standaloneConnection", B(P.StandaloneConnection)) + q.Add("enableEvents", B(P.EnableEvents)) + q.Add("heterogeneousPool", B(P.HeterogeneousPool)) + q.Add("prelim", B(P.IsPrelim)) + q.Add("poolWaitTimeout", P.WaitTimeout.String()) + q.Add("poolSessionMaxLifetime", P.MaxLifeTime.String()) + q.Add("poolSessionTimeout", P.SessionTimeout.String()) + q["onInit"] = P.OnInit + return (&url.URL{ + Scheme: "oracle", + User: url.UserPassword(P.Username, password), + Host: host, + Path: path, + RawQuery: q.Encode(), + }).String() +} + +func (P *ConnectionParams) Comb() { + P.StandaloneConnection = P.StandaloneConnection || P.ConnClass == NoConnectionPoolingConnectionClass + if P.IsPrelim || P.StandaloneConnection { + // Prelim: the shared memory may not exist when Oracle is shut down. + P.ConnClass = "" + P.HeterogeneousPool = false + } +} + +// ParseConnString parses the given connection string into a struct. 
+func ParseConnString(connString string) (ConnectionParams, error) { + P := ConnectionParams{ + MinSessions: DefaultPoolMinSessions, + MaxSessions: DefaultPoolMaxSessions, + PoolIncrement: DefaultPoolIncrement, + ConnClass: DefaultConnectionClass, + MaxLifeTime: DefaultMaxLifeTime, + WaitTimeout: DefaultWaitTimeout, + SessionTimeout: DefaultSessionTimeout, + } + if !strings.HasPrefix(connString, "oracle://") { + i := strings.IndexByte(connString, '/') + if i < 0 { + return P, errors.New("no '/' in connection string") + } + P.Username, connString = connString[:i], connString[i+1:] + + uSid := strings.ToUpper(connString) + //fmt.Printf("connString=%q SID=%q\n", connString, uSid) + if strings.Contains(uSid, " AS ") { + if P.IsSysDBA = strings.HasSuffix(uSid, " AS SYSDBA"); P.IsSysDBA { + connString = connString[:len(connString)-10] + } else if P.IsSysOper = strings.HasSuffix(uSid, " AS SYSOPER"); P.IsSysOper { + connString = connString[:len(connString)-11] + } else if P.IsSysASM = strings.HasSuffix(uSid, " AS SYSASM"); P.IsSysASM { + connString = connString[:len(connString)-10] + } + } + if i = strings.IndexByte(connString, '@'); i >= 0 { + P.Password, P.SID = connString[:i], connString[i+1:] + } else { + P.Password = connString + } + if strings.HasSuffix(P.SID, ":POOLED") { + P.ConnClass, P.SID = "POOLED", P.SID[:len(P.SID)-7] + } + //fmt.Printf("connString=%q params=%s\n", connString, P) + return P, nil + } + u, err := url.Parse(connString) + if err != nil { + return P, errors.Errorf("%s: %w", connString, err) + } + if usr := u.User; usr != nil { + P.Username = usr.Username() + P.Password, _ = usr.Password() + } + P.SID = u.Hostname() + // IPv6 literal address brackets are removed by u.Hostname, + // so we have to put them back + if strings.HasPrefix(u.Host, "[") && !strings.Contains(P.SID[1:], "]") { + P.SID = "[" + P.SID + "]" + } + if u.Port() != "" { + P.SID += ":" + u.Port() + } + if u.Path != "" && u.Path != "/" { + P.SID += u.Path + } + q := u.Query() + if vv, ok := q["connectionClass"]; ok { + P.ConnClass = vv[0] + } + for _, task := range []struct { + Dest *bool + Key string + }{ + {&P.IsSysDBA, "sysdba"}, + {&P.IsSysOper, "sysoper"}, + {&P.IsSysASM, "sysasm"}, + {&P.IsPrelim, "prelim"}, + + {&P.StandaloneConnection, "standaloneConnection"}, + {&P.EnableEvents, "enableEvents"}, + {&P.HeterogeneousPool, "heterogeneousPool"}, + } { + *task.Dest = q.Get(task.Key) == "1" + } + if tz := q.Get("timezone"); tz != "" { + if tz == "local" { + P.Timezone = time.Local + } else if strings.Contains(tz, "/") { + if P.Timezone, err = time.LoadLocation(tz); err != nil { + return P, errors.Errorf("%s: %w", tz, err) + } + } else if off, err := parseTZ(tz); err == nil { + P.Timezone = time.FixedZone(tz, off) + } else { + return P, errors.Errorf("%s: %w", tz, err) + } + } + + for _, task := range []struct { + Dest *int + Key string + }{ + {&P.MinSessions, "poolMinSessions"}, + {&P.MaxSessions, "poolMaxSessions"}, + {&P.PoolIncrement, "poolIncrement"}, + } { + s := q.Get(task.Key) + if s == "" { + continue + } + var err error + *task.Dest, err = strconv.Atoi(s) + if err != nil { + return P, errors.Errorf("%s: %w", task.Key+"="+s, err) + } + } + for _, task := range []struct { + Dest *time.Duration + Key string + }{ + {&P.SessionTimeout, "poolSessionTimeout"}, + {&P.WaitTimeout, "poolWaitTimeout"}, + {&P.MaxLifeTime, "poolSessionMaxLifetime"}, + } { + s := q.Get(task.Key) + if s == "" { + continue + } + var err error + *task.Dest, err = time.ParseDuration(s) + if err != nil { + if 
!strings.Contains(err.Error(), "time: missing unit in duration") { + return P, errors.Errorf("%s: %w", task.Key+"="+s, err) + } + i, err := strconv.Atoi(s) + if err != nil { + return P, errors.Errorf("%s: %w", task.Key+"="+s, err) + } + base := time.Second + if task.Key == "poolWaitTimeout" { + base = time.Millisecond + } + *task.Dest = time.Duration(i) * base + } + } + if P.MinSessions > P.MaxSessions { + P.MinSessions = P.MaxSessions + } + if P.MinSessions == P.MaxSessions { + P.PoolIncrement = 0 + } else if P.PoolIncrement < 1 { + P.PoolIncrement = 1 + } + P.OnInit = q["onInit"] + + P.Comb() + if P.StandaloneConnection { + P.NewPassword = q.Get("newPassword") + } + + return P, nil +} + +// SetSessionParamOnInit adds an "ALTER SESSION k=v" to the OnInit task list. +func (P *ConnectionParams) SetSessionParamOnInit(k, v string) { + P.OnInit = append(P.OnInit, fmt.Sprintf("ALTER SESSION SET %s = q'(%s)'", k, strings.Replace(v, "'", "''", -1))) +} + +func (P ConnectionParams) authMode() C.dpiAuthMode { + authMode := C.dpiAuthMode(C.DPI_MODE_AUTH_DEFAULT) + // OR all the modes together + for _, elt := range []struct { + Is bool + Mode C.dpiAuthMode + }{ + {P.IsSysDBA, C.DPI_MODE_AUTH_SYSDBA}, + {P.IsSysOper, C.DPI_MODE_AUTH_SYSOPER}, + {P.IsSysASM, C.DPI_MODE_AUTH_SYSASM}, + {P.IsPrelim, C.DPI_MODE_AUTH_PRELIM}, + } { + if elt.Is { + authMode |= elt.Mode + } + } + return authMode +} + +// OraErr is an error holding the ORA-01234 code and the message. +type OraErr struct { + message string + code int +} + +// AsOraErr returns the underlying *OraErr and whether it succeeded. +func AsOraErr(err error) (*OraErr, bool) { + var oerr *OraErr + ok := errors.As(err, &oerr) + return oerr, ok +} + +var _ = error((*OraErr)(nil)) + +// Code returns the OraErr's error code. +func (oe *OraErr) Code() int { return oe.code } + +// Message returns the OraErr's message. +func (oe *OraErr) Message() string { return oe.message } +func (oe *OraErr) Error() string { + msg := oe.Message() + if oe.code == 0 && msg == "" { + return "" + } + return fmt.Sprintf("ORA-%05d: %s", oe.code, oe.message) +} +func fromErrorInfo(errInfo C.dpiErrorInfo) *OraErr { + oe := OraErr{ + code: int(errInfo.code), + message: strings.TrimSpace(C.GoString(errInfo.message)), + } + if oe.code == 0 && strings.HasPrefix(oe.message, "ORA-") && + len(oe.message) > 9 && oe.message[9] == ':' { + if i, _ := strconv.Atoi(oe.message[4:9]); i > 0 { + oe.code = i + } + } + oe.message = strings.TrimPrefix(oe.message, fmt.Sprintf("ORA-%05d: ", oe.Code())) + return &oe +} + +// newErrorInfo is just for testing: testing cannot use Cgo... +func newErrorInfo(code int, message string) C.dpiErrorInfo { + return C.dpiErrorInfo{code: C.int32_t(code), message: C.CString(message)} +} + +// against deadcode +var _ = newErrorInfo + +func (d *drv) getError() *OraErr { + if d == nil || d.dpiContext == nil { + return &OraErr{code: -12153, message: driver.ErrBadConn.Error()} + } + var errInfo C.dpiErrorInfo + C.dpiContext_getError(d.dpiContext, &errInfo) + return fromErrorInfo(errInfo) +} + +func b2i(b bool) uint8 { + if b { + return 1 + } + return 0 +} + +// VersionInfo holds version info returned by Oracle DB. 
+type VersionInfo struct { + ServerRelease string + Version, Release, Update, PortRelease, PortUpdate, Full uint8 +} + +func (V *VersionInfo) set(v *C.dpiVersionInfo) { + *V = VersionInfo{ + Version: uint8(v.versionNum), + Release: uint8(v.releaseNum), Update: uint8(v.updateNum), + PortRelease: uint8(v.portReleaseNum), PortUpdate: uint8(v.portUpdateNum), + Full: uint8(v.fullVersionNum), + } +} +func (V VersionInfo) String() string { + var s string + if V.ServerRelease != "" { + s = " [" + V.ServerRelease + "]" + } + return fmt.Sprintf("%d.%d.%d.%d.%d%s", V.Version, V.Release, V.Update, V.PortRelease, V.PortUpdate, s) +} + +var timezones = make(map[[2]C.int8_t]*time.Location) +var timezonesMu sync.RWMutex + +func timeZoneFor(hourOffset, minuteOffset C.int8_t) *time.Location { + if hourOffset == 0 && minuteOffset == 0 { + return time.UTC + } + key := [2]C.int8_t{hourOffset, minuteOffset} + timezonesMu.RLock() + tz := timezones[key] + timezonesMu.RUnlock() + if tz == nil { + timezonesMu.Lock() + if tz = timezones[key]; tz == nil { + tz = time.FixedZone( + fmt.Sprintf("%02d:%02d", hourOffset, minuteOffset), + int(hourOffset)*3600+int(minuteOffset)*60, + ) + timezones[key] = tz + } + timezonesMu.Unlock() + } + return tz +} + +type ctxKey string + +const logCtxKey = ctxKey("godror.Log") + +type logFunc func(...interface{}) error + +func ctxGetLog(ctx context.Context) logFunc { + if lgr, ok := ctx.Value(logCtxKey).(func(...interface{}) error); ok { + return lgr + } + return Log +} + +// ContextWithLog returns a context with the given log function. +func ContextWithLog(ctx context.Context, logF func(...interface{}) error) context.Context { + return context.WithValue(ctx, logCtxKey, logF) +} + +var _ = driver.DriverContext((*drv)(nil)) +var _ = driver.Connector((*connector)(nil)) + +type connector struct { + drv *drv + onInit func(driver.Conn) error + ConnectionParams +} + +// OpenConnector must parse the name in the same format that Driver.Open +// parses the name parameter. +func (d *drv) OpenConnector(name string) (driver.Connector, error) { + P, err := ParseConnString(name) + if err != nil { + return nil, err + } + + return connector{ConnectionParams: P, drv: d}, nil +} + +// Connect returns a connection to the database. +// Connect may return a cached connection (one previously +// closed), but doing so is unnecessary; the sql package +// maintains a pool of idle connections for efficient re-use. +// +// The provided context.Context is for dialing purposes only +// (see net.DialContext) and should not be stored or used for +// other purposes. +// +// The returned connection is only used by one goroutine at a +// time. +func (c connector) Connect(context.Context) (driver.Conn, error) { + conn, err := c.drv.openConn(c.ConnectionParams) + if err != nil || c.onInit == nil || !conn.newSession { + return conn, err + } + if err = c.onInit(conn); err != nil { + conn.close(true) + return nil, err + } + return conn, nil +} + +// Driver returns the underlying Driver of the Connector, +// mainly to maintain compatibility with the Driver method +// on sql.DB. +func (c connector) Driver() driver.Driver { return c.drv } + +// NewConnector returns a driver.Connector to be used with sql.OpenDB, +// which calls the given onInit if the connection is new. +// +// For an onInit example, see NewSessionIniter. 
+func (d *drv) NewConnector(name string, onInit func(driver.Conn) error) (driver.Connector, error) { + cxr, err := d.OpenConnector(name) + if err != nil { + return nil, err + } + cx := cxr.(connector) + cx.onInit = onInit + return cx, err +} + +// NewConnector returns a driver.Connector to be used with sql.OpenDB, +// (for the default Driver registered with godror) +// which calls the given onInit if the connection is new. +// +// For an onInit example, see NewSessionIniter. +func NewConnector(name string, onInit func(driver.Conn) error) (driver.Connector, error) { + return defaultDrv.NewConnector(name, onInit) +} + +// NewSessionIniter returns a function suitable for use in NewConnector as onInit, +// which calls "ALTER SESSION SET =''" for each element of the given map. +func NewSessionIniter(m map[string]string) func(driver.Conn) error { + return func(cx driver.Conn) error { + for k, v := range m { + qry := fmt.Sprintf("ALTER SESSION SET %s = q'(%s)'", k, strings.Replace(v, "'", "''", -1)) + st, err := cx.Prepare(qry) + if err != nil { + return errors.Errorf("%s: %w", qry, err) + } + _, err = st.Exec(nil) //lint:ignore SA1019 it's hard to use ExecContext here + st.Close() + if err != nil { + return err + } + } + return nil + } +} diff --git a/vendor/github.com/godror/godror/drv_posix.go b/vendor/github.com/godror/godror/drv_posix.go new file mode 100644 index 000000000000..c88fa6ec60c3 --- /dev/null +++ b/vendor/github.com/godror/godror/drv_posix.go @@ -0,0 +1,11 @@ +// +build !windows + +// Copyright 2017 Tamás Gulácsi +// +// +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 + +package godror + +// #cgo LDFLAGS: -ldl -lpthread +import "C" diff --git a/vendor/github.com/godror/godror/go.mod b/vendor/github.com/godror/godror/go.mod new file mode 100644 index 000000000000..86f145abad2d --- /dev/null +++ b/vendor/github.com/godror/godror/go.mod @@ -0,0 +1,12 @@ +module github.com/godror/godror + +go 1.12 + +require ( + github.com/go-kit/kit v0.9.0 + github.com/go-logfmt/logfmt v0.4.0 + github.com/go-stack/stack v1.8.0 // indirect + github.com/google/go-cmp v0.3.1 + golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e + golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 +) diff --git a/vendor/github.com/godror/godror/go.sum b/vendor/github.com/godror/godror/go.sum new file mode 100644 index 000000000000..2456fcd618c5 --- /dev/null +++ b/vendor/github.com/godror/godror/go.sum @@ -0,0 +1,14 @@ +github.com/go-kit/kit v0.9.0 h1:wDJmvq38kDhkVxi50ni9ykkdUr1PKgqKOoi01fa0Mdk= +github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= +github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80nA= +github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= +github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk= +github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg= +github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515 h1:T+h1c/A9Gawja4Y9mFVWj2vyii2bbUNDw3kt9VxK2EY= +github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 
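The drv.go hunks above define the exported connector surface of the vendored godror driver: ParseConnString (both the classic user/password@sid form and the oracle:// URL form with pool/query parameters), NewConnector with an onInit hook, NewSessionIniter, and AsOraErr. The following is a minimal, non-authoritative sketch of how that surface is typically wired into database/sql; the DSN, host, credentials and NLS setting are hypothetical placeholders, not values from this patch.

package main

import (
	"database/sql"
	"fmt"
	"log"

	godror "github.com/godror/godror"
)

func main() {
	// The oracle:// form is handled by ParseConnString; pool sizing and the
	// connection class are supplied as query parameters (placeholder values).
	dsn := "oracle://scott:tiger@dbhost:1521/ORCLPDB1?poolMinSessions=1&poolMaxSessions=4&connectionClass=EXAMPLE"

	// onInit runs only for fresh sessions; NewSessionIniter issues an
	// "ALTER SESSION SET <key> = q'(<value>)'" statement per map entry.
	connector, err := godror.NewConnector(dsn, godror.NewSessionIniter(map[string]string{
		"NLS_NUMERIC_CHARACTERS": ". ",
	}))
	if err != nil {
		log.Fatal(err)
	}
	db := sql.OpenDB(connector)
	defer db.Close()

	var now string
	if err := db.QueryRow("SELECT TO_CHAR(SYSDATE) FROM DUAL").Scan(&now); err != nil {
		// AsOraErr unwraps the ORA-NNNNN code and message when the error
		// originated in the Oracle client or server.
		if oe, ok := godror.AsOraErr(err); ok {
			log.Fatalf("oracle error %d: %s", oe.Code(), oe.Message())
		}
		log.Fatal(err)
	}
	fmt.Println("server time:", now)
}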
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/vendor/gopkg.in/goracle.v2/lob.go b/vendor/github.com/godror/godror/lob.go similarity index 77% rename from vendor/gopkg.in/goracle.v2/lob.go rename to vendor/github.com/godror/godror/lob.go index 567dbaa843df..a914462d4645 100644 --- a/vendor/gopkg.in/goracle.v2/lob.go +++ b/vendor/github.com/godror/godror/lob.go @@ -1,19 +1,9 @@ // Copyright 2017 Tamás Gulácsi // // -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror /* #include "dpiImpl.h" @@ -25,7 +15,7 @@ import ( "unicode/utf8" "unsafe" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" ) // Lob is for reading/writing a LOB. @@ -98,7 +88,9 @@ func (dlr *dpiLobReader) Read(p []byte) (int, error) { if dlr.sizePlusOne == 0 { // never read size before if C.dpiLob_getSize(dlr.dpiLob, &dlr.sizePlusOne) == C.DPI_FAILURE { - return 0, errors.Wrap(dlr.getError(), "getSize") + C.dpiLob_close(dlr.dpiLob) + dlr.dpiLob = nil + return 0, errors.Errorf("getSize: %w", dlr.getError()) } dlr.sizePlusOne++ } @@ -108,12 +100,14 @@ func (dlr *dpiLobReader) Read(p []byte) (int, error) { return 0, io.EOF } if C.dpiLob_readBytes(dlr.dpiLob, dlr.offset+1, n, (*C.char)(unsafe.Pointer(&p[0])), &n) == C.DPI_FAILURE { + C.dpiLob_close(dlr.dpiLob) + dlr.dpiLob = nil err := dlr.getError() if dlr.finished = err.(interface{ Code() int }).Code() == 1403; dlr.finished { dlr.offset += n return int(n), io.EOF } - return int(n), errors.Wrapf(err, "lob=%p offset=%d n=%d", dlr.dpiLob, dlr.offset, len(p)) + return int(n), errors.Errorf("lob=%p offset=%d n=%d: %w", dlr.dpiLob, dlr.offset, len(p), err) } //fmt.Printf("read %d\n", n) if dlr.IsClob { @@ -123,6 +117,9 @@ func (dlr *dpiLobReader) Read(p []byte) (int, error) { } var err error if n == 0 || dlr.offset+1 >= dlr.sizePlusOne { + C.dpiLob_close(dlr.dpiLob) + dlr.dpiLob = nil + dlr.finished = true err = io.EOF } return int(n), err @@ -141,14 +138,14 @@ func (dlw *dpiLobWriter) Write(p []byte) (int, error) { if !dlw.opened { //fmt.Printf("open %p\n", lob) if C.dpiLob_openResource(lob) == C.DPI_FAILURE { - return 0, errors.Wrapf(dlw.getError(), "openResources(%p)", lob) + return 0, errors.Errorf("openResources(%p): %w", lob, dlw.getError()) } dlw.opened = true } n := C.uint64_t(len(p)) if C.dpiLob_writeBytes(lob, dlw.offset+1, (*C.char)(unsafe.Pointer(&p[0])), n) == C.DPI_FAILURE { - err := errors.Wrapf(dlw.getError(), "writeBytes(%p, offset=%d, data=%d)", lob, dlw.offset, n) + err := errors.Errorf("writeBytes(%p, offset=%d, data=%d): %w", lob, dlw.offset, n, dlw.getError()) dlw.dpiLob = nil C.dpiLob_closeResource(lob) return 0, err @@ -171,7 +168,7 @@ func (dlw *dpiLobWriter) Close() error { if ec, ok := err.(interface{ Code() int }); ok && !dlw.opened && ec.Code() == 22289 { // cannot perform %s operation 
on an unopened file or LOB return nil } - return errors.Wrapf(err, "closeResource(%p)", lob) + return errors.Errorf("closeResource(%p): %w", lob, err) } return nil } @@ -186,6 +183,19 @@ type DirectLob struct { var _ = io.ReaderAt((*DirectLob)(nil)) var _ = io.WriterAt((*DirectLob)(nil)) +// NewTempLob returns a temporary LOB as DirectLob. +func (c *conn) NewTempLob(isClob bool) (*DirectLob, error) { + typ := C.uint(C.DPI_ORACLE_TYPE_BLOB) + if isClob { + typ = C.DPI_ORACLE_TYPE_CLOB + } + lob := DirectLob{conn: c} + if C.dpiConn_newTempLob(c.dpiConn, typ, &lob.dpiLob) == C.DPI_FAILURE { + return nil, errors.Errorf("newTempLob: %w", c.getError()) + } + return &lob, nil +} + // Close the Lob. func (dl *DirectLob) Close() error { if !dl.opened { @@ -193,7 +203,7 @@ func (dl *DirectLob) Close() error { } dl.opened = false if C.dpiLob_closeResource(dl.dpiLob) == C.DPI_FAILURE { - return errors.Wrap(dl.conn.getError(), "closeResource") + return errors.Errorf("closeResource: %w", dl.conn.getError()) } return nil } @@ -202,7 +212,7 @@ func (dl *DirectLob) Close() error { func (dl *DirectLob) Size() (int64, error) { var n C.uint64_t if C.dpiLob_getSize(dl.dpiLob, &n) == C.DPI_FAILURE { - return int64(n), errors.Wrap(dl.conn.getError(), "getSize") + return int64(n), errors.Errorf("getSize: %w", dl.conn.getError()) } return int64(n), nil } @@ -210,7 +220,7 @@ func (dl *DirectLob) Size() (int64, error) { // Trim the LOB to the given size. func (dl *DirectLob) Trim(size int64) error { if C.dpiLob_trim(dl.dpiLob, C.uint64_t(size)) == C.DPI_FAILURE { - return errors.Wrap(dl.conn.getError(), "trim") + return errors.Errorf("trim: %w", dl.conn.getError()) } return nil } @@ -219,7 +229,7 @@ func (dl *DirectLob) Trim(size int64) error { // The LOB is cleared first. func (dl *DirectLob) Set(p []byte) error { if C.dpiLob_setFromBytes(dl.dpiLob, (*C.char)(unsafe.Pointer(&p[0])), C.uint64_t(len(p))) == C.DPI_FAILURE { - return errors.Wrap(dl.conn.getError(), "setFromBytes") + return errors.Errorf("setFromBytes: %w", dl.conn.getError()) } return nil } @@ -228,7 +238,7 @@ func (dl *DirectLob) Set(p []byte) error { func (dl *DirectLob) ReadAt(p []byte, offset int64) (int, error) { n := C.uint64_t(len(p)) if C.dpiLob_readBytes(dl.dpiLob, C.uint64_t(offset)+1, n, (*C.char)(unsafe.Pointer(&p[0])), &n) == C.DPI_FAILURE { - return int(n), errors.Wrap(dl.conn.getError(), "readBytes") + return int(n), errors.Errorf("readBytes: %w", dl.conn.getError()) } return int(n), nil } @@ -238,14 +248,14 @@ func (dl *DirectLob) WriteAt(p []byte, offset int64) (int, error) { if !dl.opened { //fmt.Printf("open %p\n", lob) if C.dpiLob_openResource(dl.dpiLob) == C.DPI_FAILURE { - return 0, errors.Wrapf(dl.conn.getError(), "openResources(%p)", dl.dpiLob) + return 0, errors.Errorf("openResources(%p): %w", dl.dpiLob, dl.conn.getError()) } dl.opened = true } n := C.uint64_t(len(p)) if C.dpiLob_writeBytes(dl.dpiLob, C.uint64_t(offset)+1, (*C.char)(unsafe.Pointer(&p[0])), n) == C.DPI_FAILURE { - return int(n), errors.Wrap(dl.conn.getError(), "writeBytes") + return int(n), errors.Errorf("writeBytes: %w", dl.conn.getError()) } return int(n), nil } diff --git a/vendor/gopkg.in/goracle.v2/obj.go b/vendor/github.com/godror/godror/obj.go similarity index 53% rename from vendor/gopkg.in/goracle.v2/obj.go rename to vendor/github.com/godror/godror/obj.go index e8f97afcfd90..17d03d732fa4 100644 --- a/vendor/gopkg.in/goracle.v2/obj.go +++ b/vendor/github.com/godror/godror/obj.go @@ -1,19 +1,9 @@ // Copyright 2017 Tamás Gulácsi // // -// Licensed under 
the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror /* #include @@ -21,23 +11,25 @@ package goracle */ import "C" import ( + "context" "fmt" "reflect" + "strings" + "sync" "unsafe" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" ) var _ = fmt.Printf // Object represents a dpiObject. type Object struct { - scratch Data - ObjectType dpiObject *C.dpiObject + ObjectType } -func (O *Object) getError() error { return O.drv.getError() } +func (O *Object) getError() error { return O.conn.getError() } // ErrNoSuchKey is the error for missing key in lookup. var ErrNoSuchKey = errors.New("no such key") @@ -49,45 +41,58 @@ func (O *Object) GetAttribute(data *Data, name string) error { } attr, ok := O.Attributes[name] if !ok { - return errors.Wrap(ErrNoSuchKey, name) + return errors.Errorf("%s: %w", name, ErrNoSuchKey) } data.reset() data.NativeTypeNum = attr.NativeTypeNum data.ObjectType = attr.ObjectType - wasNull := data.dpiData == nil + data.implicitObj = true // the maximum length of that buffer must be supplied // in the value.asBytes.length attribute before calling this function. if attr.NativeTypeNum == C.DPI_NATIVE_TYPE_BYTES && attr.OracleTypeNum == C.DPI_ORACLE_TYPE_NUMBER { - var a [22]byte - C.dpiData_setBytes(data.dpiData, (*C.char)(unsafe.Pointer(&a[0])), 22) + var a [39]byte + C.dpiData_setBytes(&data.dpiData, (*C.char)(unsafe.Pointer(&a[0])), C.uint32_t(len(a))) } + //fmt.Printf("getAttributeValue(%p, %p, %d, %+v)\n", O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, data.dpiData) - if C.dpiObject_getAttributeValue(O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, data.dpiData) == C.DPI_FAILURE { - if wasNull { - C.free(unsafe.Pointer(data.dpiData)) - data.dpiData = nil - } - return errors.Wrapf(O.getError(), "getAttributeValue(obj=%+v, attr=%+v, typ=%d)", O, attr.dpiObjectAttr, data.NativeTypeNum) + if C.dpiObject_getAttributeValue(O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, &data.dpiData) == C.DPI_FAILURE { + return errors.Errorf("getAttributeValue(%q, obj=%+v, attr=%+v, typ=%d): %w", name, O, attr.dpiObjectAttr, data.NativeTypeNum, O.getError()) } //fmt.Printf("getAttributeValue(%p, %q=%p, %d, %+v)\n", O.dpiObject, attr.Name, attr.dpiObjectAttr, data.NativeTypeNum, data.dpiData) return nil } -// SetAttribute sets the i-th attribute with data. +// SetAttribute sets the named attribute with data. 
func (O *Object) SetAttribute(name string, data *Data) error { + if !strings.Contains(name, `"`) { + name = strings.ToUpper(name) + } attr := O.Attributes[name] if data.NativeTypeNum == 0 { data.NativeTypeNum = attr.NativeTypeNum data.ObjectType = attr.ObjectType } - if C.dpiObject_setAttributeValue(O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, data.dpiData) == C.DPI_FAILURE { + if C.dpiObject_setAttributeValue(O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, &data.dpiData) == C.DPI_FAILURE { return O.getError() } return nil } -// ResetAttributes prepare all atributes for use the object as IN parameter +// Set is a convenience function to set the named attribute with the given value. +func (O *Object) Set(name string, v interface{}) error { + if data, ok := v.(*Data); ok { + return O.SetAttribute(name, data) + } + d := scratch.Get() + defer scratch.Put(d) + if err := d.Set(v); err != nil { + return err + } + return O.SetAttribute(name, d) +} + +// ResetAttributes prepare all attributes for use the object as IN parameter func (O *Object) ResetAttributes() error { var data Data for _, attr := range O.Attributes { @@ -95,10 +100,10 @@ func (O *Object) ResetAttributes() error { data.NativeTypeNum = attr.NativeTypeNum data.ObjectType = attr.ObjectType if attr.NativeTypeNum == C.DPI_NATIVE_TYPE_BYTES && attr.OracleTypeNum == C.DPI_ORACLE_TYPE_NUMBER { - var a [22]byte - C.dpiData_setBytes(data.dpiData, (*C.char)(unsafe.Pointer(&a[0])), 22) + a := make([]byte, attr.Precision) + C.dpiData_setBytes(&data.dpiData, (*C.char)(unsafe.Pointer(&a[0])), C.uint32_t(attr.Precision)) } - if C.dpiObject_setAttributeValue(O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, data.dpiData) == C.DPI_FAILURE { + if C.dpiObject_setAttributeValue(O.dpiObject, attr.dpiObjectAttr, data.NativeTypeNum, &data.dpiData) == C.DPI_FAILURE { return O.getError() } } @@ -108,14 +113,16 @@ func (O *Object) ResetAttributes() error { // Get scans the named attribute into dest, and returns it. func (O *Object) Get(name string) (interface{}, error) { - if err := O.GetAttribute(&O.scratch, name); err != nil { + d := scratch.Get() + defer scratch.Put(d) + if err := O.GetAttribute(d, name); err != nil { return nil, err } - isObject := O.scratch.IsObject() + isObject := d.IsObject() if isObject { - O.scratch.ObjectType = O.Attributes[name].ObjectType + d.ObjectType = O.Attributes[name].ObjectType } - v := O.scratch.Get() + v := d.Get() if !isObject { return v, nil } @@ -131,15 +138,25 @@ func (O *Object) ObjectRef() *Object { return O } +// Collection returns &ObjectCollection{Object: O} iff the Object is a collection. +// Otherwise it returns nil. +func (O *Object) Collection() ObjectCollection { + if O.ObjectType.CollectionOf == nil { + return ObjectCollection{} + } + return ObjectCollection{Object: O} +} + // Close releases a reference to the object. func (O *Object) Close() error { - if O.dpiObject == nil { + obj := O.dpiObject + O.dpiObject = nil + if obj == nil { return nil } - if rc := C.dpiObject_release(O.dpiObject); rc == C.DPI_FAILURE { - return errors.Wrapf(O.getError(), "error on close object") + if C.dpiObject_release(obj) == C.DPI_FAILURE { + return errors.Errorf("error on close object: %w", O.getError()) } - O.dpiObject = nil return nil } @@ -156,21 +173,22 @@ var ErrNotCollection = errors.New("not collection") var ErrNotExist = errors.New("not exist") // AsSlice retrieves the collection into a slice. 
-func (O *ObjectCollection) AsSlice(dest interface{}) (interface{}, error) { - var data Data +func (O ObjectCollection) AsSlice(dest interface{}) (interface{}, error) { var dr reflect.Value needsInit := dest == nil if !needsInit { dr = reflect.ValueOf(dest) } + d := scratch.Get() + defer scratch.Put(d) for i, err := O.First(); err == nil; i, err = O.Next(i) { if O.CollectionOf.NativeTypeNum == C.DPI_NATIVE_TYPE_OBJECT { - data.ObjectType = *O.CollectionOf + d.ObjectType = *O.CollectionOf } - if err = O.Get(&data, i); err != nil { + if err = O.GetItem(d, i); err != nil { return dest, err } - vr := reflect.ValueOf(data.Get()) + vr := reflect.ValueOf(d.Get()) if needsInit { needsInit = false length, lengthErr := O.Len() @@ -184,31 +202,54 @@ func (O *ObjectCollection) AsSlice(dest interface{}) (interface{}, error) { return dr.Interface(), nil } -// Append data to the collection. -func (O *ObjectCollection) Append(data *Data) error { - if C.dpiObject_appendElement(O.dpiObject, data.NativeTypeNum, data.dpiData) == C.DPI_FAILURE { - return errors.Wrapf(O.getError(), "append(%d)", data.NativeTypeNum) +// AppendData to the collection. +func (O ObjectCollection) AppendData(data *Data) error { + if C.dpiObject_appendElement(O.dpiObject, data.NativeTypeNum, &data.dpiData) == C.DPI_FAILURE { + return errors.Errorf("append(%d): %w", data.NativeTypeNum, O.getError()) } return nil } +// Append v to the collection. +func (O ObjectCollection) Append(v interface{}) error { + if data, ok := v.(*Data); ok { + return O.AppendData(data) + } + d := scratch.Get() + defer scratch.Put(d) + if err := d.Set(v); err != nil { + return err + } + return O.AppendData(d) +} + +// AppendObject adds an Object to the collection. +func (O ObjectCollection) AppendObject(obj *Object) error { + d := scratch.Get() + defer scratch.Put(d) + d.ObjectType = obj.ObjectType + d.NativeTypeNum = C.DPI_NATIVE_TYPE_OBJECT + d.SetObject(obj) + return O.Append(d) +} + // Delete i-th element of the collection. -func (O *ObjectCollection) Delete(i int) error { +func (O ObjectCollection) Delete(i int) error { if C.dpiObject_deleteElementByIndex(O.dpiObject, C.int32_t(i)) == C.DPI_FAILURE { - return errors.Wrapf(O.getError(), "delete(%d)", i) + return errors.Errorf("delete(%d): %w", i, O.getError()) } return nil } -// Get the i-th element of the collection into data. -func (O *ObjectCollection) Get(data *Data, i int) error { +// GetItem gets the i-th element of the collection into data. +func (O ObjectCollection) GetItem(data *Data, i int) error { if data == nil { panic("data cannot be nil") } idx := C.int32_t(i) var exists C.int if C.dpiObject_getElementExistsByIndex(O.dpiObject, idx, &exists) == C.DPI_FAILURE { - return errors.Wrapf(O.getError(), "exists(%d)", idx) + return errors.Errorf("exists(%d): %w", idx, O.getError()) } if exists == 0 { return ErrNotExist @@ -216,26 +257,47 @@ func (O *ObjectCollection) Get(data *Data, i int) error { data.reset() data.NativeTypeNum = O.CollectionOf.NativeTypeNum data.ObjectType = *O.CollectionOf - if C.dpiObject_getElementValueByIndex(O.dpiObject, idx, data.NativeTypeNum, data.dpiData) == C.DPI_FAILURE { - return errors.Wrapf(O.getError(), "get(%d[%d])", idx, data.NativeTypeNum) + data.implicitObj = true + if C.dpiObject_getElementValueByIndex(O.dpiObject, idx, data.NativeTypeNum, &data.dpiData) == C.DPI_FAILURE { + return errors.Errorf("get(%d[%d]): %w", idx, data.NativeTypeNum, O.getError()) } return nil } -// Set the i-th element of the collection with data. 
-func (O *ObjectCollection) Set(i int, data *Data) error { - if C.dpiObject_setElementValueByIndex(O.dpiObject, C.int32_t(i), data.NativeTypeNum, data.dpiData) == C.DPI_FAILURE { - return errors.Wrapf(O.getError(), "set(%d[%d])", i, data.NativeTypeNum) +// Get the i-th element of the collection. +func (O ObjectCollection) Get(i int) (interface{}, error) { + var data Data + err := O.GetItem(&data, i) + return data.Get(), err +} + +// SetItem sets the i-th element of the collection with data. +func (O ObjectCollection) SetItem(i int, data *Data) error { + if C.dpiObject_setElementValueByIndex(O.dpiObject, C.int32_t(i), data.NativeTypeNum, &data.dpiData) == C.DPI_FAILURE { + return errors.Errorf("set(%d[%d]): %w", i, data.NativeTypeNum, O.getError()) } return nil } +// Set the i-th element of the collection with value. +func (O ObjectCollection) Set(i int, v interface{}) error { + if data, ok := v.(*Data); ok { + return O.SetItem(i, data) + } + d := scratch.Get() + defer scratch.Put(d) + if err := d.Set(v); err != nil { + return err + } + return O.SetItem(i, d) +} + // First returns the first element's index of the collection. -func (O *ObjectCollection) First() (int, error) { +func (O ObjectCollection) First() (int, error) { var exists C.int var idx C.int32_t if C.dpiObject_getFirstIndex(O.dpiObject, &idx, &exists) == C.DPI_FAILURE { - return 0, errors.Wrap(O.getError(), "first") + return 0, errors.Errorf("first: %w", O.getError()) } if exists == 1 { return int(idx), nil @@ -244,11 +306,11 @@ func (O *ObjectCollection) First() (int, error) { } // Last returns the index of the last element. -func (O *ObjectCollection) Last() (int, error) { +func (O ObjectCollection) Last() (int, error) { var exists C.int var idx C.int32_t if C.dpiObject_getLastIndex(O.dpiObject, &idx, &exists) == C.DPI_FAILURE { - return 0, errors.Wrap(O.getError(), "last") + return 0, errors.Errorf("last: %w", O.getError()) } if exists == 1 { return int(idx), nil @@ -257,11 +319,11 @@ func (O *ObjectCollection) Last() (int, error) { } // Next returns the succeeding index of i. -func (O *ObjectCollection) Next(i int) (int, error) { +func (O ObjectCollection) Next(i int) (int, error) { var exists C.int var idx C.int32_t if C.dpiObject_getNextIndex(O.dpiObject, C.int32_t(i), &idx, &exists) == C.DPI_FAILURE { - return 0, errors.Wrapf(O.getError(), "next(%d)", i) + return 0, errors.Errorf("next(%d): %w", i, O.getError()) } if exists == 1 { return int(idx), nil @@ -270,16 +332,16 @@ func (O *ObjectCollection) Next(i int) (int, error) { } // Len returns the length of the collection. -func (O *ObjectCollection) Len() (int, error) { +func (O ObjectCollection) Len() (int, error) { var size C.int32_t if C.dpiObject_getSize(O.dpiObject, &size) == C.DPI_FAILURE { - return 0, errors.Wrap(O.getError(), "len") + return 0, errors.Errorf("len: %w", O.getError()) } return int(size), nil } // Trim the collection to n. -func (O *ObjectCollection) Trim(n int) error { +func (O ObjectCollection) Trim(n int) error { if C.dpiObject_trim(O.dpiObject, C.uint32_t(n)) == C.DPI_FAILURE { return O.getError() } @@ -288,12 +350,14 @@ func (O *ObjectCollection) Trim(n int) error { // ObjectType holds type info of an Object. 
type ObjectType struct { - Schema, Name string + Schema, Name string + Attributes map[string]ObjectAttribute + + conn *conn + dpiObjectType *C.dpiObjectType + DBSize, ClientSizeInBytes, CharSize int CollectionOf *ObjectType - Attributes map[string]ObjectAttribute - dpiObjectType *C.dpiObjectType - drv *drv OracleTypeNum C.dpiOracleTypeNum NativeTypeNum C.dpiNativeTypeNum Precision int16 @@ -301,7 +365,14 @@ type ObjectType struct { FsPrecision uint8 } -func (t ObjectType) getError() error { return t.drv.getError() } +func (t ObjectType) getError() error { return t.conn.getError() } + +func (t ObjectType) String() string { + if t.Schema == "" { + return t.Name + } + return t.Schema + "." + t.Name +} // FullName returns the object's name with the schame prepended. func (t ObjectType) FullName() string { @@ -312,66 +383,111 @@ func (t ObjectType) FullName() string { } // GetObjectType returns the ObjectType of a name. +// +// The name is uppercased! Because here Oracle seems to be case-sensitive. +// To leave it as is, enclose it in "-s! func (c *conn) GetObjectType(name string) (ObjectType, error) { + c.mu.Lock() + defer c.mu.Unlock() + if !strings.Contains(name, "\"") { + name = strings.ToUpper(name) + } + if o, ok := c.objTypes[name]; ok { + return o, nil + } cName := C.CString(name) defer func() { C.free(unsafe.Pointer(cName)) }() objType := (*C.dpiObjectType)(C.malloc(C.sizeof_void)) if C.dpiConn_getObjectType(c.dpiConn, cName, C.uint32_t(len(name)), &objType) == C.DPI_FAILURE { C.free(unsafe.Pointer(objType)) - return ObjectType{}, errors.Wrapf(c.getError(), "getObjectType(%q) conn=%p", name, c.dpiConn) + return ObjectType{}, errors.Errorf("getObjectType(%q) conn=%p: %w", name, c.dpiConn, c.getError()) + } + t := ObjectType{conn: c, dpiObjectType: objType} + err := t.init() + if err == nil { + c.objTypes[name] = t + c.objTypes[t.FullName()] = t } - t := ObjectType{drv: c.drv, dpiObjectType: objType} - return t, t.init() + return t, err } // NewObject returns a new Object with ObjectType type. +// +// As with all Objects, you MUST call Close on it when not needed anymore! func (t ObjectType) NewObject() (*Object, error) { obj := (*C.dpiObject)(C.malloc(C.sizeof_void)) if C.dpiObjectType_createObject(t.dpiObjectType, &obj) == C.DPI_FAILURE { C.free(unsafe.Pointer(obj)) return nil, t.getError() } - return &Object{ObjectType: t, dpiObject: obj}, nil + O := &Object{ObjectType: t, dpiObject: obj} + // https://github.com/oracle/odpi/issues/112#issuecomment-524479532 + return O, O.ResetAttributes() +} + +// NewCollection returns a new Collection object with ObjectType type. +// If the ObjectType is not a Collection, it returns ErrNotCollection error. +func (t ObjectType) NewCollection() (ObjectCollection, error) { + if t.CollectionOf == nil { + return ObjectCollection{}, ErrNotCollection + } + O, err := t.NewObject() + if err != nil { + return ObjectCollection{}, err + } + return ObjectCollection{Object: O}, nil } // Close releases a reference to the object type. 
-func (t *ObjectType) Close() error { +func (t *ObjectType) close(doNotReuse bool) error { + if t == nil { + return nil + } + attributes, d := t.Attributes, t.dpiObjectType + t.Attributes, t.dpiObjectType = nil, nil + + if t.CollectionOf != nil { + err := t.CollectionOf.close(false) + if err != nil { + return err + } + } - for _, attr := range t.Attributes { + for _, attr := range attributes { err := attr.Close() if err != nil { return err } } - t.Attributes = nil - d := t.dpiObjectType - t.dpiObjectType = nil - if d == nil { + if d == nil || !doNotReuse { return nil } - if rc := C.dpiObjectType_release(d); rc == C.DPI_FAILURE { - return errors.Wrapf(t.getError(), "error on close object type") + if C.dpiObjectType_release(d) == C.DPI_FAILURE { + return errors.Errorf("error on close object type: %w", t.getError()) } return nil } -func wrapObject(d *drv, objectType *C.dpiObjectType, object *C.dpiObject) (*Object, error) { +func wrapObject(c *conn, objectType *C.dpiObjectType, object *C.dpiObject) (*Object, error) { if objectType == nil { return nil, errors.New("objectType is nil") } + if C.dpiObject_addRef(object) == C.DPI_FAILURE { + return nil, c.getError() + } o := &Object{ - ObjectType: ObjectType{dpiObjectType: objectType, drv: d}, + ObjectType: ObjectType{dpiObjectType: objectType, conn: c}, dpiObject: object, } return o, o.init() } func (t *ObjectType) init() error { - if t.drv == nil { - panic("drv is nil") + if t.conn == nil { + panic("conn is nil") } if t.Name != "" && t.Attributes != nil { return nil @@ -381,15 +497,18 @@ func (t *ObjectType) init() error { } var info C.dpiObjectTypeInfo if C.dpiObjectType_getInfo(t.dpiObjectType, &info) == C.DPI_FAILURE { - return errors.Wrapf(t.getError(), "%v.getInfo", t) + return errors.Errorf("%v.getInfo: %w", t, t.getError()) } t.Schema = C.GoStringN(info.schema, C.int(info.schemaLength)) t.Name = C.GoStringN(info.name, C.int(info.nameLength)) t.CollectionOf = nil - numAttributes := int(info.numAttributes) + if t.conn.objTypes == nil { + t.conn.objTypes = make(map[string]ObjectType) + } + numAttributes := int(info.numAttributes) if info.isCollection == 1 { - t.CollectionOf = &ObjectType{drv: t.drv} + t.CollectionOf = &ObjectType{conn: t.conn} if err := t.CollectionOf.fromDataTypeInfo(info.elementTypeInfo); err != nil { return err } @@ -398,6 +517,10 @@ func (t *ObjectType) init() error { t.CollectionOf.Name = t.Name } } + if ot, ok := t.conn.objTypes[t.FullName()]; ok { + t.Attributes = ot.Attributes + return nil + } if numAttributes == 0 { t.Attributes = map[string]ObjectAttribute{} return nil @@ -408,18 +531,18 @@ func (t *ObjectType) init() error { C.uint16_t(len(attrs)), (**C.dpiObjectAttr)(unsafe.Pointer(&attrs[0])), ) == C.DPI_FAILURE { - return errors.Wrapf(t.getError(), "%v.getAttributes", t) + return errors.Errorf("%v.getAttributes: %w", t, t.getError()) } for i, attr := range attrs { var attrInfo C.dpiObjectAttrInfo if C.dpiObjectAttr_getInfo(attr, &attrInfo) == C.DPI_FAILURE { - return errors.Wrapf(t.getError(), "%v.attr_getInfo", attr) + return errors.Errorf("%v.attr_getInfo: %w", attr, t.getError()) } if Log != nil { Log("i", i, "attrInfo", attrInfo) } typ := attrInfo.typeInfo - sub, err := objectTypeFromDataTypeInfo(t.drv, typ) + sub, err := objectTypeFromDataTypeInfo(t.conn, typ) if err != nil { return err } @@ -447,14 +570,14 @@ func (t *ObjectType) fromDataTypeInfo(typ C.dpiDataTypeInfo) error { t.FsPrecision = uint8(typ.fsPrecision) return t.init() } -func objectTypeFromDataTypeInfo(drv *drv, typ C.dpiDataTypeInfo) 
(ObjectType, error) { - if drv == nil { - panic("drv nil") +func objectTypeFromDataTypeInfo(conn *conn, typ C.dpiDataTypeInfo) (ObjectType, error) { + if conn == nil { + panic("conn is nil") } if typ.oracleTypeNum == 0 { panic("typ is nil") } - t := ObjectType{drv: drv} + t := ObjectType{conn: conn} err := t.fromDataTypeInfo(typ) return t, err } @@ -469,22 +592,35 @@ type ObjectAttribute struct { // Close the ObjectAttribute. func (A ObjectAttribute) Close() error { attr := A.dpiObjectAttr + A.dpiObjectAttr = nil + if attr == nil { return nil } - - A.dpiObjectAttr = nil if C.dpiObjectAttr_release(attr) == C.DPI_FAILURE { return A.getError() } + if A.ObjectType.dpiObjectType != nil { + err := A.ObjectType.close(false) + if err != nil { + return err + } + } return nil } // GetObjectType returns the ObjectType for the name. -func GetObjectType(ex Execer, typeName string) (ObjectType, error) { - c, err := getConn(ex) +func GetObjectType(ctx context.Context, ex Execer, typeName string) (ObjectType, error) { + c, err := getConn(ctx, ex) if err != nil { - return ObjectType{}, errors.WithMessage(err, "getConn for "+typeName) + return ObjectType{}, errors.Errorf("getConn for %s: %w", typeName, err) } return c.GetObjectType(typeName) } + +var scratch = &dataPool{Pool: sync.Pool{New: func() interface{} { return &Data{} }}} + +type dataPool struct{ sync.Pool } + +func (dp *dataPool) Get() *Data { return dp.Pool.Get().(*Data) } +func (dp *dataPool) Put(d *Data) { d.reset(); dp.Pool.Put(d) } diff --git a/vendor/gopkg.in/goracle.v2/odpi/CONTRIBUTING.md b/vendor/github.com/godror/godror/odpi/CONTRIBUTING.md similarity index 100% rename from vendor/gopkg.in/goracle.v2/odpi/CONTRIBUTING.md rename to vendor/github.com/godror/godror/odpi/CONTRIBUTING.md diff --git a/vendor/gopkg.in/goracle.v2/odpi/LICENSE.md b/vendor/github.com/godror/godror/odpi/LICENSE.md similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/LICENSE.md rename to vendor/github.com/godror/godror/odpi/LICENSE.md index 20b6fc956fac..cb344b76c16f 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/LICENSE.md +++ b/vendor/github.com/godror/godror/odpi/LICENSE.md @@ -215,4 +215,3 @@ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS - diff --git a/vendor/gopkg.in/goracle.v2/odpi/README.md b/vendor/github.com/godror/godror/odpi/README.md similarity index 95% rename from vendor/gopkg.in/goracle.v2/odpi/README.md rename to vendor/github.com/godror/godror/odpi/README.md index a8e1decf845e..fa911cc21303 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/README.md +++ b/vendor/github.com/godror/godror/odpi/README.md @@ -1,4 +1,4 @@ -# ODPI-C version 3.1 +# ODPI-C version 3.3 Oracle Database Programming Interface for C (ODPI-C) is an open source library of C code that simplifies access to Oracle Database for applications written in @@ -48,6 +48,7 @@ Third-party Drivers: * [ruby-ODPI ](https://github.com/kubo/ruby-odpi) Ruby Interface. * [rust-oracle ](https://github.com/kubo/rust-oracle) Driver for Rust. * [Oracle.jl](https://github.com/felipenoris/Oracle.jl) Driver for Julia. +* [oranif](https://github.com/K2InformaticsGmbH/oranif) Driver for Erlang. 
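The obj.go hunks above rework the Object and ObjectCollection API around connection-scoped object types and a pooled Data scratch buffer (GetObjectType now uppercases unquoted names and caches the type, NewCollection guards against non-collection types, Append/Get accept plain Go values). Below is a hedged sketch of how that exported surface could be exercised. Assumptions not shown in this patch: the driver is registered under the name "godror", *sql.DB satisfies the Execer interface expected by GetObjectType, Data.Set accepts plain Go integers, and SCOTT.NUMBER_LIST is a hypothetical collection type.

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	godror "github.com/godror/godror"
)

func main() {
	// Placeholder DSN in the user/password@sid form accepted by ParseConnString.
	db, err := sql.Open("godror", "scott/tiger@dbhost:1521/ORCLPDB1")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()
	// GetObjectType uppercases unquoted type names before the lookup and
	// caches the resulting ObjectType on the connection.
	typ, err := godror.GetObjectType(ctx, db, "SCOTT.NUMBER_LIST")
	if err != nil {
		log.Fatal(err)
	}

	// NewCollection returns ErrNotCollection for plain (non-collection) types.
	coll, err := typ.NewCollection()
	if err != nil {
		log.Fatal(err)
	}
	defer coll.Close()

	// Append converts plain Go values through the pooled Data scratch buffer.
	for _, v := range []int64{1, 2, 3} {
		if err = coll.Append(v); err != nil {
			log.Fatal(err)
		}
	}

	// Walk the (possibly sparse) collection the same way AsSlice does:
	// First/Next yield existing indices, Get fetches the element at an index.
	for i, ierr := coll.First(); ierr == nil; i, ierr = coll.Next(i) {
		v, gerr := coll.Get(i)
		if gerr != nil {
			log.Fatal(gerr)
		}
		fmt.Println(i, v)
	}
}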
## License diff --git a/vendor/gopkg.in/goracle.v2/odpi/embed/README.md b/vendor/github.com/godror/godror/odpi/embed/README.md similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/embed/README.md rename to vendor/github.com/godror/godror/odpi/embed/README.md index 5dc0fb5ca152..dfe0b4524c0a 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/embed/README.md +++ b/vendor/github.com/godror/godror/odpi/embed/README.md @@ -1,4 +1,3 @@ This directory contains the file dpi.c which can be used to embed ODPI-C within your project without having to manage the individual files that make up the library. The files can also be compiled independently if that is preferred. - diff --git a/vendor/gopkg.in/goracle.v2/odpi/embed/dpi.c b/vendor/github.com/godror/godror/odpi/embed/dpi.c similarity index 98% rename from vendor/gopkg.in/goracle.v2/odpi/embed/dpi.c rename to vendor/github.com/godror/godror/odpi/embed/dpi.c index e47f3f214609..7d0c6dc83bbf 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/embed/dpi.c +++ b/vendor/github.com/godror/godror/odpi/embed/dpi.c @@ -37,6 +37,7 @@ #include "../src/dpiOci.c" #include "../src/dpiOracleType.c" #include "../src/dpiPool.c" +#include "../src/dpiQueue.c" #include "../src/dpiRowid.c" #include "../src/dpiSodaColl.c" #include "../src/dpiSodaCollCursor.c" @@ -47,4 +48,3 @@ #include "../src/dpiSubscr.c" #include "../src/dpiUtils.c" #include "../src/dpiVar.c" - diff --git a/vendor/gopkg.in/goracle.v2/odpi/include/dpi.h b/vendor/github.com/godror/godror/odpi/include/dpi.h similarity index 96% rename from vendor/gopkg.in/goracle.v2/odpi/include/dpi.h rename to vendor/github.com/godror/godror/odpi/include/dpi.h index 8606774900c0..58b3a2d58673 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/include/dpi.h +++ b/vendor/github.com/godror/godror/odpi/include/dpi.h @@ -44,8 +44,8 @@ // define ODPI-C version information #define DPI_MAJOR_VERSION 3 -#define DPI_MINOR_VERSION 1 -#define DPI_PATCH_LEVEL 4 +#define DPI_MINOR_VERSION 3 +#define DPI_PATCH_LEVEL 0 #define DPI_VERSION_SUFFIX #define DPI_STR_HELPER(x) #x @@ -152,7 +152,6 @@ typedef uint32_t dpiEventType; #define DPI_EVENT_STARTUP 1 #define DPI_EVENT_SHUTDOWN 2 #define DPI_EVENT_SHUTDOWN_ANY 3 -#define DPI_EVENT_DROP_DB 4 #define DPI_EVENT_DEREG 5 #define DPI_EVENT_OBJCHANGE 6 #define DPI_EVENT_QUERYCHANGE 7 @@ -414,6 +413,7 @@ typedef struct dpiObjectAttrInfo dpiObjectAttrInfo; typedef struct dpiObjectTypeInfo dpiObjectTypeInfo; typedef struct dpiPoolCreateParams dpiPoolCreateParams; typedef struct dpiQueryInfo dpiQueryInfo; +typedef struct dpiQueue dpiQueue; typedef struct dpiShardingKeyColumn dpiShardingKeyColumn; typedef struct dpiSodaColl dpiSodaColl; typedef struct dpiSodaCollNames dpiSodaCollNames; @@ -571,6 +571,7 @@ struct dpiPoolCreateParams { uint32_t maxLifetimeSession; const char *plsqlFixupCallback; uint32_t plsqlFixupCallbackLength; + uint32_t maxSessionsPerShard; }; // structure used for transferring query metadata from ODPI-C @@ -642,6 +643,8 @@ struct dpiSubscrCreateParams { uint8_t groupingClass; uint32_t groupingValue; uint8_t groupingType; + uint64_t outRegId; + int clientInitiated; }; // structure used for transferring messages in subscription callbacks @@ -836,6 +839,10 @@ int dpiConn_newEnqOptions(dpiConn *conn, dpiEnqOptions **options); // create a new message properties object and return it int dpiConn_newMsgProps(dpiConn *conn, dpiMsgProps **props); +// create a new AQ queue +int dpiConn_newQueue(dpiConn *conn, const char *name, uint32_t nameLength, + dpiObjectType *payloadType, dpiQueue **queue); 
+ // create a new temporary LOB int dpiConn_newTempLob(dpiConn *conn, dpiOracleTypeNum lobType, dpiLob **lob); @@ -1202,10 +1209,18 @@ int dpiMsgProps_getExceptionQ(dpiMsgProps *props, const char **value, // return the number of seconds until the message expires int dpiMsgProps_getExpiration(dpiMsgProps *props, int32_t *value); +// return the message id for the message (after enqueuing or dequeuing) +int dpiMsgProps_getMsgId(dpiMsgProps *props, const char **value, + uint32_t *valueLength); + // return the original message id for the message int dpiMsgProps_getOriginalMsgId(dpiMsgProps *props, const char **value, uint32_t *valueLength); +// return the payload of the message (object or bytes) +int dpiMsgProps_getPayload(dpiMsgProps *props, dpiObject **obj, + const char **value, uint32_t *valueLength); + // return the priority of the message int dpiMsgProps_getPriority(dpiMsgProps *props, int32_t *value); @@ -1233,6 +1248,13 @@ int dpiMsgProps_setExpiration(dpiMsgProps *props, int32_t value); int dpiMsgProps_setOriginalMsgId(dpiMsgProps *props, const char *value, uint32_t valueLength); +// set the payload of the message (as a series of bytes) +int dpiMsgProps_setPayloadBytes(dpiMsgProps *props, const char *value, + uint32_t valueLength); + +// set the payload of the message (as an object) +int dpiMsgProps_setPayloadObject(dpiMsgProps *props, dpiObject *obj); + // set the priority of the message int dpiMsgProps_setPriority(dpiMsgProps *props, int32_t value); @@ -1398,6 +1420,35 @@ int dpiPool_setTimeout(dpiPool *pool, uint32_t value); int dpiPool_setWaitTimeout(dpiPool *pool, uint32_t value); +//----------------------------------------------------------------------------- +// AQ Queue Methods (dpiQueue) +//----------------------------------------------------------------------------- + +// add a reference to the queue +int dpiQueue_addRef(dpiQueue *queue); + +// dequeue multiple messages from the queue +int dpiQueue_deqMany(dpiQueue *queue, uint32_t *numProps, dpiMsgProps **props); + +// dequeue a single message from the queue +int dpiQueue_deqOne(dpiQueue *queue, dpiMsgProps **props); + +// enqueue multiple message to the queue +int dpiQueue_enqMany(dpiQueue *queue, uint32_t numProps, dpiMsgProps **props); + +// enqueue a single message to the queue +int dpiQueue_enqOne(dpiQueue *queue, dpiMsgProps *props); + +// get a reference to the dequeue options associated with the queue +int dpiQueue_getDeqOptions(dpiQueue *queue, dpiDeqOptions **options); + +// get a reference to the enqueue options associated with the queue +int dpiQueue_getEnqOptions(dpiQueue *queue, dpiEnqOptions **options); + +// release a reference to the queue +int dpiQueue_release(dpiQueue *queue); + + //----------------------------------------------------------------------------- // SODA Collection Methods (dpiSodaColl) //----------------------------------------------------------------------------- @@ -1440,6 +1491,10 @@ int dpiSodaColl_getMetadata(dpiSodaColl *coll, const char **value, int dpiSodaColl_getName(dpiSodaColl *coll, const char **value, uint32_t *valueLength); +// insert multiple documents into the SODA collection +int dpiSodaColl_insertMany(dpiSodaColl *coll, uint32_t numDocs, + dpiSodaDoc **docs, uint32_t flags, dpiSodaDoc **insertedDocs); + // insert a document into the SODA collection int dpiSodaColl_insertOne(dpiSodaColl *coll, dpiSodaDoc *doc, uint32_t flags, dpiSodaDoc **insertedDoc); @@ -1644,6 +1699,9 @@ int dpiStmt_getImplicitResult(dpiStmt *stmt, dpiStmt **implicitResult); // return information about the 
statement int dpiStmt_getInfo(dpiStmt *stmt, dpiStmtInfo *info); +// get the rowid of the last row affected by a DML statement +int dpiStmt_getLastRowid(dpiStmt *stmt, dpiRowid **rowid); + // get the number of query columns (zero implies the statement is not a query) int dpiStmt_getNumQueryColumns(dpiStmt *stmt, uint32_t *numQueryColumns); @@ -1754,4 +1812,3 @@ int dpiVar_setFromStmt(dpiVar *var, uint32_t pos, dpiStmt *stmt); int dpiVar_setNumElementsInArray(dpiVar *var, uint32_t numElements); #endif - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiConn.c b/vendor/github.com/godror/godror/odpi/src/dpiConn.c similarity index 92% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiConn.c rename to vendor/github.com/godror/godror/odpi/src/dpiConn.c index e1c8edbe841e..faa5dc5266b2 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiConn.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiConn.c @@ -37,8 +37,7 @@ static int dpiConn__getSession(dpiConn *conn, uint32_t mode, static int dpiConn__setAttributesFromCreateParams(dpiConn *conn, void *handle, uint32_t handleType, const char *userName, uint32_t userNameLength, const char *password, uint32_t passwordLength, - const dpiConnCreateParams *params, void **shardingKey, - void **superShardingKey, dpiError *error); + const dpiConnCreateParams *params, dpiError *error); static int dpiConn__setShardingKey(dpiConn *conn, void **shardingKey, void *handle, uint32_t handleType, uint32_t attribute, const char *action, dpiShardingKeyColumn *columns, uint8_t numColumns, @@ -65,23 +64,6 @@ static int dpiConn__attachExternal(dpiConn *conn, void *externalHandle, return DPI_FAILURE; } - // allocate a new service context handle which will use the new environment - // handle independent of the original service context handle - conn->handle = NULL; - if (dpiOci__handleAlloc(conn->env->handle, &conn->handle, - DPI_OCI_HTYPE_SVCCTX, "allocate service context handle", - error) < 0) - return DPI_FAILURE; - - // set these handles on the newly created service context - if (dpiOci__attrSet(conn->handle, DPI_OCI_HTYPE_SVCCTX, conn->serverHandle, - 0, DPI_OCI_ATTR_SERVER, "set server handle", error) < 0) - return DPI_FAILURE; - if (dpiOci__attrSet(conn->handle, DPI_OCI_HTYPE_SVCCTX, - conn->sessionHandle, 0, DPI_OCI_ATTR_SESSION, "set session handle", - error) < 0) - return DPI_FAILURE; - return DPI_SUCCESS; } @@ -93,7 +75,7 @@ static int dpiConn__attachExternal(dpiConn *conn, void *externalHandle, //----------------------------------------------------------------------------- static int dpiConn__check(dpiConn *conn, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(conn, DPI_HTYPE_CONN, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(conn, DPI_HTYPE_CONN, fnName, error) < 0) return DPI_FAILURE; return dpiConn__checkConnected(conn, error); } @@ -224,8 +206,6 @@ static int dpiConn__close(dpiConn *conn, uint32_t mode, const char *tag, // handle connections created with an external handle if (conn->externalHandle) { - if (conn->handle) - dpiOci__handleFree(conn->handle, DPI_OCI_HTYPE_SVCCTX); conn->sessionHandle = NULL; // handle standalone connections @@ -254,7 +234,8 @@ static int dpiConn__close(dpiConn *conn, uint32_t mode, const char *tag, // update last time used (if the session isn't going to be dropped) // clear last time used (if the session is going to be dropped) - if (conn->sessionHandle) { + // do nothing, however, if not using a pool or the pool is being closed + if (conn->sessionHandle && conn->pool && conn->pool->handle) { // get the 
pointer from the context associated with the session lastTimeUsed = NULL; @@ -315,9 +296,20 @@ static int dpiConn__close(dpiConn *conn, uint32_t mode, const char *tag, conn->sessionHandle = NULL; } - conn->handle = NULL; conn->serverHandle = NULL; + + // destroy sharding and super sharding key descriptors, if applicable + if (conn->shardingKey) { + dpiOci__descriptorFree(conn->shardingKey, DPI_OCI_DTYPE_SHARDING_KEY); + conn->shardingKey = NULL; + } + if (conn->superShardingKey) { + dpiOci__descriptorFree(conn->superShardingKey, + DPI_OCI_DTYPE_SHARDING_KEY); + conn->superShardingKey = NULL; + } + return DPI_SUCCESS; } @@ -333,6 +325,8 @@ int dpiConn__create(dpiConn *conn, const dpiContext *context, const dpiCommonCreateParams *commonParams, dpiConnCreateParams *createParams, dpiError *error) { + void *envHandle = NULL; + // allocate handle lists for statements, LOBs and objects if (dpiHandleList__create(&conn->openStmts, error) < 0) return DPI_FAILURE; @@ -341,8 +335,29 @@ int dpiConn__create(dpiConn *conn, const dpiContext *context, if (dpiHandleList__create(&conn->objects, error) < 0) return DPI_FAILURE; + // if an external service context handle is provided, acquire the + // environment handle from it; need a temporary environment handle in order + // to do so + if (createParams->externalHandle) { + error->env = conn->env; + if (dpiOci__envNlsCreate(&conn->env->handle, DPI_OCI_DEFAULT, 0, 0, + error) < 0) + return DPI_FAILURE; + if (dpiOci__handleAlloc(conn->env->handle, &error->handle, + DPI_OCI_HTYPE_ERROR, "allocate temp OCI error", error) < 0) + return DPI_FAILURE; + if (dpiOci__attrGet(createParams->externalHandle, DPI_OCI_HTYPE_SVCCTX, + &envHandle, NULL, DPI_OCI_ATTR_ENV, "get env handle", + error) < 0) + return DPI_FAILURE; + dpiOci__handleFree(conn->env->handle, DPI_OCI_HTYPE_ENV); + error->handle = NULL; + conn->env->handle = NULL; + } + // initialize environment (for non-pooled connections) - if (!pool && dpiEnv__init(conn->env, context, commonParams, error) < 0) + if (!pool && dpiEnv__init(conn->env, context, commonParams, envHandle, + error) < 0) return DPI_FAILURE; // if a handle is specified, use it @@ -416,7 +431,7 @@ static int dpiConn__createStandalone(dpiConn *conn, const char *userName, // populate attributes on the session handle if (dpiConn__setAttributesFromCreateParams(conn, conn->sessionHandle, DPI_OCI_HTYPE_SESSION, userName, userNameLength, password, - passwordLength, createParams, NULL, NULL, error) < 0) + passwordLength, createParams, error) < 0) return DPI_FAILURE; // set the session handle on the service context handle @@ -505,11 +520,16 @@ static int dpiConn__get(dpiConn *conn, const char *userName, const char *connectString, uint32_t connectStringLength, dpiConnCreateParams *createParams, dpiPool *pool, dpiError *error) { - void *shardingKey = NULL, *superShardingKey = NULL; int externalAuth, status; void *authInfo; uint32_t mode; + // clear pointers if length is 0 + if (userNameLength == 0) + userName = NULL; + if (passwordLength == 0) + password = NULL; + // set things up for the call to acquire a session if (pool) { dpiGen__setRefCount(pool, error, 1); @@ -547,8 +567,7 @@ static int dpiConn__get(dpiConn *conn, const char *userName, // set attributes for create parameters if (dpiConn__setAttributesFromCreateParams(conn, authInfo, DPI_OCI_HTYPE_AUTHINFO, userName, userNameLength, password, - passwordLength, createParams, &shardingKey, &superShardingKey, - error) < 0) { + passwordLength, createParams, error) < 0) { dpiOci__handleFree(authInfo, 
DPI_OCI_HTYPE_AUTHINFO); return DPI_FAILURE; } @@ -556,13 +575,6 @@ static int dpiConn__get(dpiConn *conn, const char *userName, // get a session from the pool status = dpiConn__getSession(conn, mode, connectString, connectStringLength, createParams, authInfo, error); - if (status == DPI_SUCCESS && pool) { - if (shardingKey) - dpiOci__descriptorFree(shardingKey, DPI_OCI_DTYPE_SHARDING_KEY); - if (superShardingKey) - dpiOci__descriptorFree(superShardingKey, - DPI_OCI_DTYPE_SHARDING_KEY); - } dpiOci__handleFree(authInfo, DPI_OCI_HTYPE_AUTHINFO); if (status < 0) return status; @@ -630,6 +642,19 @@ static int dpiConn__getHandles(dpiConn *conn, dpiError *error) } +//----------------------------------------------------------------------------- +// dpiConn__getRawTDO() [INTERNAL] +// Internal method used for ensuring that the RAW TDO has been cached on the +//connection. +//----------------------------------------------------------------------------- +int dpiConn__getRawTDO(dpiConn *conn, dpiError *error) +{ + if (conn->rawTDO) + return DPI_SUCCESS; + return dpiOci__typeByName(conn, "SYS", 3, "RAW", 3, &conn->rawTDO, error); +} + + //----------------------------------------------------------------------------- // dpiConn__getServerCharset() [INTERNAL] // Internal method used for retrieving the server character set. This is used @@ -850,8 +875,7 @@ static int dpiConn__setAppContext(void *handle, uint32_t handleType, static int dpiConn__setAttributesFromCreateParams(dpiConn *conn, void *handle, uint32_t handleType, const char *userName, uint32_t userNameLength, const char *password, uint32_t passwordLength, - const dpiConnCreateParams *params, void **shardingKey, - void **superShardingKey, dpiError *error) + const dpiConnCreateParams *params, dpiError *error) { uint32_t purity; @@ -882,17 +906,20 @@ static int dpiConn__setAttributesFromCreateParams(dpiConn *conn, void *handle, // set sharding key and super sharding key parameters if (params->shardingKeyColumns && params->numShardingKeyColumns > 0) { - if (dpiConn__setShardingKey(conn, shardingKey, handle, handleType, - DPI_OCI_ATTR_SHARDING_KEY, "set sharding key", + if (dpiConn__setShardingKey(conn, &conn->shardingKey, handle, + handleType, DPI_OCI_ATTR_SHARDING_KEY, "set sharding key", params->shardingKeyColumns, params->numShardingKeyColumns, error) < 0) return DPI_FAILURE; } if (params->superShardingKeyColumns && params->numSuperShardingKeyColumns > 0) { - if (dpiConn__setShardingKey(conn, superShardingKey, handle, handleType, - DPI_OCI_ATTR_SUPER_SHARDING_KEY, "set super sharding key", - params->superShardingKeyColumns, + if (params->numShardingKeyColumns == 0) + return dpiError__set(error, "ensure sharding key", + DPI_ERR_MISSING_SHARDING_KEY); + if (dpiConn__setShardingKey(conn, &conn->superShardingKey, handle, + handleType, DPI_OCI_ATTR_SUPER_SHARDING_KEY, + "set super sharding key", params->superShardingKeyColumns, params->numSuperShardingKeyColumns, error) < 0) return DPI_FAILURE; } @@ -995,13 +1022,14 @@ static int dpiConn__setShardingKey(dpiConn *conn, void **shardingKey, static int dpiConn__setShardingKeyValue(dpiConn *conn, void *shardingKey, dpiShardingKeyColumn *column, dpiError *error) { + dpiShardingOciDate shardingDateValue; + uint32_t colLen = 0, descType = 0; const dpiOracleType *oracleType; dpiOciNumber numberValue; + int convertOk, status; dpiOciDate dateValue; - uint32_t colLen = 0; void *col = NULL; uint16_t colType; - int convertOk; oracleType = dpiOracleType__getFromNum(column->oracleTypeNum, error); if (!oracleType) @@ 
-1044,14 +1072,61 @@ static int dpiConn__setShardingKeyValue(dpiConn *conn, void *shardingKey, } break; case DPI_ORACLE_TYPE_DATE: - col = &dateValue; - colLen = sizeof(dateValue); - colType = DPI_SQLT_DAT; if (column->nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) { if (dpiDataBuffer__toOracleDate(&column->value, &dateValue) < 0) return DPI_FAILURE; convertOk = 1; + } else if (column->nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) { + if (dpiDataBuffer__toOracleDateFromDouble(&column->value, + conn->env, error, &dateValue) < 0) + return DPI_FAILURE; + convertOk = 1; + } + + // for sharding only, the type must be SQLT_DAT, which uses a + // different format for storing the date values + if (convertOk) { + col = &shardingDateValue; + colLen = sizeof(shardingDateValue); + colType = DPI_SQLT_DAT; + shardingDateValue.century = + ((uint8_t) (dateValue.year / 100)) + 100; + shardingDateValue.year = (dateValue.year % 100) + 100; + shardingDateValue.month = dateValue.month; + shardingDateValue.day = dateValue.day; + shardingDateValue.hour = dateValue.hour + 1; + shardingDateValue.minute = dateValue.minute + 1; + shardingDateValue.second = dateValue.second + 1; + } + break; + case DPI_ORACLE_TYPE_TIMESTAMP: + case DPI_ORACLE_TYPE_TIMESTAMP_TZ: + case DPI_ORACLE_TYPE_TIMESTAMP_LTZ: + colLen = sizeof(void*); + colType = DPI_SQLT_TIMESTAMP; + if (column->nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) { + descType = DPI_OCI_DTYPE_TIMESTAMP; + if (dpiOci__descriptorAlloc(conn->env->handle, &col, descType, + "alloc timestamp", error) < 0) + return DPI_FAILURE; + if (dpiDataBuffer__toOracleTimestamp(&column->value, conn->env, + error, col, 0) < 0) { + dpiOci__descriptorFree(col, descType); + return DPI_FAILURE; + } + convertOk = 1; + } else if (column->nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) { + descType = DPI_OCI_DTYPE_TIMESTAMP_LTZ; + if (dpiOci__descriptorAlloc(conn->env->handle, &col, descType, + "alloc LTZ timestamp", error) < 0) + return DPI_FAILURE; + if (dpiDataBuffer__toOracleTimestampFromDouble(&column->value, + conn->env, error, col) < 0) { + dpiOci__descriptorFree(col, descType); + return DPI_FAILURE; + } + convertOk = 1; } break; default: @@ -1060,8 +1135,11 @@ static int dpiConn__setShardingKeyValue(dpiConn *conn, void *shardingKey, if (!convertOk) return dpiError__set(error, "check type", DPI_ERR_NOT_SUPPORTED); - return dpiOci__shardingKeyColumnAdd(shardingKey, col, colLen, colType, + status = dpiOci__shardingKeyColumnAdd(shardingKey, col, colLen, colType, error); + if (descType) + dpiOci__descriptorFree(col, descType); + return status; } @@ -1282,7 +1360,7 @@ int dpiConn_create(const dpiContext *context, const char *userName, int status; // validate parameters - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, conn) @@ -1347,8 +1425,6 @@ int dpiConn_create(const dpiContext *context, const char *userName, dpiError__set(&error, "check pool", DPI_ERR_NOT_CONNECTED); return dpiGen__endPublicFn(context, DPI_FAILURE, &error); } - if (dpiEnv__initError(createParams->pool->env, &error) < 0) - return dpiGen__endPublicFn(context, DPI_FAILURE, &error); status = dpiPool__acquireConnection(createParams->pool, userName, userNameLength, password, passwordLength, createParams, conn, &error); @@ -1366,8 +1442,7 @@ int dpiConn_create(const dpiContext *context, const char *userName, } *conn = tempConn; - 
dpiHandlePool__release(tempConn->env->errorHandles, error.handle, &error); - error.handle = NULL; + dpiHandlePool__release(tempConn->env->errorHandles, &error.handle); return dpiGen__endPublicFn(context, DPI_SUCCESS, &error); } @@ -1404,7 +1479,6 @@ int dpiConn_deqObject(dpiConn *conn, const char *queueName, uint32_t queueNameLength, dpiDeqOptions *options, dpiMsgProps *props, dpiObject *payload, const char **msgId, uint32_t *msgIdLength) { - void *ociMsgId = NULL; dpiError error; // validate parameters @@ -1426,19 +1500,15 @@ int dpiConn_deqObject(dpiConn *conn, const char *queueName, // dequeue message if (dpiOci__aqDeq(conn, queueName, options->handle, props->handle, payload->type->tdo, &payload->instance, &payload->indicator, - &ociMsgId, &error) < 0) { + &props->msgIdRaw, &error) < 0) { if (error.buffer->code == 25228) { - if (ociMsgId) - dpiOci__rawResize(conn->env->handle, &ociMsgId, 0, &error); *msgId = NULL; *msgIdLength = 0; return dpiGen__endPublicFn(conn, DPI_SUCCESS, &error); } return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); } - if (dpiMsgProps__extractMsgId(props, ociMsgId, msgId, msgIdLength, - &error) < 0) - return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); + dpiMsgProps__extractMsgId(props, msgId, msgIdLength); return dpiGen__endPublicFn(conn, DPI_SUCCESS, &error); } @@ -1451,7 +1521,6 @@ int dpiConn_enqObject(dpiConn *conn, const char *queueName, uint32_t queueNameLength, dpiEnqOptions *options, dpiMsgProps *props, dpiObject *payload, const char **msgId, uint32_t *msgIdLength) { - void *ociMsgId = NULL; dpiError error; // validate parameters @@ -1473,11 +1542,9 @@ int dpiConn_enqObject(dpiConn *conn, const char *queueName, // enqueue message if (dpiOci__aqEnq(conn, queueName, options->handle, props->handle, payload->type->tdo, &payload->instance, &payload->indicator, - &ociMsgId, &error) < 0) - return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); - if (dpiMsgProps__extractMsgId(props, ociMsgId, msgId, msgIdLength, - &error) < 0) + &props->msgIdRaw, &error) < 0) return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); + dpiMsgProps__extractMsgId(props, msgId, msgIdLength); return dpiGen__endPublicFn(conn, DPI_SUCCESS, &error); } @@ -1682,15 +1749,15 @@ int dpiConn_getServerVersion(dpiConn *conn, const char **releaseString, // validate parameters if (dpiConn__check(conn, __func__, &error) < 0) return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); - DPI_CHECK_PTR_NOT_NULL(conn, releaseString) - DPI_CHECK_PTR_NOT_NULL(conn, releaseStringLength) DPI_CHECK_PTR_NOT_NULL(conn, versionInfo) // get server version if (dpiConn__getServerVersion(conn, &error) < 0) return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); - *releaseString = conn->releaseString; - *releaseStringLength = conn->releaseStringLength; + if (releaseString) + *releaseString = conn->releaseString; + if (releaseStringLength) + *releaseStringLength = conn->releaseStringLength; memcpy(versionInfo, &conn->versionInfo, sizeof(dpiVersionInfo)); return dpiGen__endPublicFn(conn, DPI_SUCCESS, &error); } @@ -1806,22 +1873,34 @@ int dpiConn_newTempLob(dpiConn *conn, dpiOracleTypeNum lobType, dpiLob **lob) //----------------------------------------------------------------------------- int dpiConn_newMsgProps(dpiConn *conn, dpiMsgProps **props) { - dpiMsgProps *tempProps; dpiError error; + int status; if (dpiConn__check(conn, __func__, &error) < 0) return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(conn, props) - if (dpiGen__allocate(DPI_HTYPE_MSG_PROPS, conn->env, (void**) &tempProps, - 
&error) < 0) - return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); - if (dpiMsgProps__create(tempProps, conn, &error) < 0) { - dpiMsgProps__free(tempProps, &error); - return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); - } + status = dpiMsgProps__allocate(conn, props, &error); + return dpiGen__endPublicFn(conn, status, &error); +} - *props = tempProps; - return dpiGen__endPublicFn(conn, DPI_SUCCESS, &error); + +//----------------------------------------------------------------------------- +// dpiConn_newQueue() [PUBLIC] +// Create a new AQ queue object and return it. +//----------------------------------------------------------------------------- +int dpiConn_newQueue(dpiConn *conn, const char *name, uint32_t nameLength, + dpiObjectType *payloadType, dpiQueue **queue) +{ + dpiError error; + int status; + + if (dpiConn__check(conn, __func__, &error) < 0) + return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); + DPI_CHECK_PTR_AND_LENGTH(conn, name) + DPI_CHECK_PTR_NOT_NULL(conn, queue) + status = dpiQueue__allocate(conn, name, nameLength, payloadType, queue, + &error); + return dpiGen__endPublicFn(conn, status, &error); } @@ -2148,6 +2227,7 @@ int dpiConn_subscribe(dpiConn *conn, dpiSubscrCreateParams *params, int dpiConn_unsubscribe(dpiConn *conn, dpiSubscr *subscr) { dpiError error; + int status; if (dpiConn__check(conn, __func__, &error) < 0) return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); @@ -2155,12 +2235,15 @@ int dpiConn_unsubscribe(dpiConn *conn, dpiSubscr *subscr) &error) < 0) return dpiGen__endPublicFn(conn, DPI_FAILURE, &error); if (subscr->registered) { - if (dpiOci__subscriptionUnRegister(conn, subscr, &error) < 0) + dpiMutex__acquire(subscr->mutex); + status = dpiOci__subscriptionUnRegister(conn, subscr, &error); + if (status == DPI_SUCCESS) + subscr->registered = 0; + dpiMutex__release(subscr->mutex); + if (status < 0) return dpiGen__endPublicFn(subscr, DPI_FAILURE, &error); - subscr->registered = 0; } dpiGen__setRefCount(subscr, &error, -1); return dpiGen__endPublicFn(subscr, DPI_SUCCESS, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiContext.c b/vendor/github.com/godror/godror/odpi/src/dpiContext.c similarity index 92% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiContext.c rename to vendor/github.com/godror/godror/odpi/src/dpiContext.c index 315afe32ea95..e9adb18a33ef 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiContext.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiContext.c @@ -156,7 +156,7 @@ int dpiContext_destroy(dpiContext *context) char message[80]; dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); dpiUtils__clearMemory(&context->checkInt, sizeof(context->checkInt)); @@ -181,7 +181,7 @@ int dpiContext_getClientVersion(const dpiContext *context, { dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, versionInfo) @@ -214,7 +214,7 @@ int dpiContext_initCommonCreateParams(const dpiContext *context, { dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, 
params) @@ -233,7 +233,7 @@ int dpiContext_initConnCreateParams(const dpiContext *context, dpiConnCreateParams localParams; dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, params) @@ -259,17 +259,22 @@ int dpiContext_initPoolCreateParams(const dpiContext *context, dpiPoolCreateParams localParams; dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, params) - // size changed in version 3.1; can be dropped once version 4 released - if (context->dpiMinorVersion > 0) + // size changed in versions 3.1 and 3.3 + // changes can be dropped once version 4 released + if (context->dpiMinorVersion > 2) { dpiContext__initPoolCreateParams(params); - else { + } else { dpiContext__initPoolCreateParams(&localParams); - memcpy(params, &localParams, sizeof(dpiPoolCreateParams__v30)); + if (context->dpiMinorVersion > 0) { + memcpy(params, &localParams, sizeof(dpiPoolCreateParams__v32)); + } else { + memcpy(params, &localParams, sizeof(dpiPoolCreateParams__v30)); + } } return dpiGen__endPublicFn(context, DPI_SUCCESS, &error); } @@ -284,7 +289,7 @@ int dpiContext_initSodaOperOptions(const dpiContext *context, { dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, options) @@ -300,13 +305,25 @@ int dpiContext_initSubscrCreateParams(const dpiContext *context, dpiSubscrCreateParams *params) { + dpiSubscrCreateParams localParams; dpiError error; - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(context, params) - dpiContext__initSubscrCreateParams(params); + + // size changed in versions 3.2 and 3.3 + // changes can be dropped once version 4 released + if (context->dpiMinorVersion > 2) { + dpiContext__initSubscrCreateParams(params); + } else { + dpiContext__initSubscrCreateParams(&localParams); + if (context->dpiMinorVersion > 1) { + memcpy(params, &localParams, sizeof(dpiSubscrCreateParams__v32)); + } else { + memcpy(params, &localParams, sizeof(dpiSubscrCreateParams__v30)); + } + } return dpiGen__endPublicFn(context, DPI_SUCCESS, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiData.c b/vendor/github.com/godror/godror/odpi/src/dpiData.c similarity index 89% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiData.c rename to vendor/github.com/godror/godror/odpi/src/dpiData.c index dc669f4bc7fc..57d3faac309b 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiData.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiData.c @@ -46,6 +46,36 @@ int dpiDataBuffer__fromOracleDate(dpiDataBuffer *data, } +//----------------------------------------------------------------------------- +// dpiDataBuffer__fromOracleDateAsDouble() [INTERNAL] +// Populate the data from an dpiOciDate structure as a double value (number +// of milliseconds since January 1, 1970).
+----------------------------------------------------------------------------- +int dpiDataBuffer__fromOracleDateAsDouble(dpiDataBuffer *data, + dpiEnv *env, dpiError *error, dpiOciDate *oracleValue) +{ + void *timestamp; + int status; + + // allocate and populate a timestamp with the value of the date + if (dpiOci__descriptorAlloc(env->handle, &timestamp, + DPI_OCI_DTYPE_TIMESTAMP_LTZ, "alloc timestamp", error) < 0) + return DPI_FAILURE; + if (dpiOci__dateTimeConstruct(env->handle, timestamp, oracleValue->year, + oracleValue->month, oracleValue->day, oracleValue->hour, + oracleValue->minute, oracleValue->second, 0, NULL, 0, error) < 0) { + dpiOci__descriptorFree(timestamp, DPI_OCI_DTYPE_TIMESTAMP_LTZ); + return DPI_FAILURE; + } + + // now calculate the number of milliseconds since January 1, 1970 + status = dpiDataBuffer__fromOracleTimestampAsDouble(data, env, error, + timestamp); + dpiOci__descriptorFree(timestamp, DPI_OCI_DTYPE_TIMESTAMP_LTZ); + return status; +} + + //----------------------------------------------------------------------------- // dpiDataBuffer__fromOracleIntervalDS() [INTERNAL] // Populate the data from an OCIInterval structure (days/seconds). @@ -305,6 +335,58 @@ int dpiDataBuffer__toOracleDate(dpiDataBuffer *data, dpiOciDate *oracleValue) } +//----------------------------------------------------------------------------- +// dpiDataBuffer__toOracleDateFromDouble() [INTERNAL] +// Populate the data in an dpiOciDate structure given a double (number of +// milliseconds since January 1, 1970). +//----------------------------------------------------------------------------- +int dpiDataBuffer__toOracleDateFromDouble(dpiDataBuffer *data, dpiEnv *env, + dpiError *error, dpiOciDate *oracleValue) +{ + void *timestamp, *timestampLTZ; + uint32_t fsecond; + + // allocate a descriptor to acquire a timestamp + if (dpiOci__descriptorAlloc(env->handle, &timestampLTZ, + DPI_OCI_DTYPE_TIMESTAMP_LTZ, "alloc timestamp", error) < 0) + return DPI_FAILURE; + if (dpiDataBuffer__toOracleTimestampFromDouble(data, env, error, + timestampLTZ) < 0) { + dpiOci__descriptorFree(timestampLTZ, DPI_OCI_DTYPE_TIMESTAMP_LTZ); + return DPI_FAILURE; + } + + // allocate a plain timestamp and convert to it + if (dpiOci__descriptorAlloc(env->handle, &timestamp, + DPI_OCI_DTYPE_TIMESTAMP, "alloc plain timestamp", error) < 0) { + dpiOci__descriptorFree(timestampLTZ, DPI_OCI_DTYPE_TIMESTAMP_LTZ); + return DPI_FAILURE; + } + if (dpiOci__dateTimeConvert(env->handle, timestampLTZ, timestamp, + error) < 0) { + dpiOci__descriptorFree(timestamp, DPI_OCI_DTYPE_TIMESTAMP); + dpiOci__descriptorFree(timestampLTZ, DPI_OCI_DTYPE_TIMESTAMP_LTZ); + return DPI_FAILURE; + } + dpiOci__descriptorFree(timestampLTZ, DPI_OCI_DTYPE_TIMESTAMP_LTZ); + + // populate date structure + if (dpiOci__dateTimeGetDate(env->handle, timestamp, &oracleValue->year, + &oracleValue->month, &oracleValue->day, error) < 0) { + dpiOci__descriptorFree(timestamp, DPI_OCI_DTYPE_TIMESTAMP); + return DPI_FAILURE; + } + if (dpiOci__dateTimeGetTime(env->handle, timestamp, &oracleValue->hour, + &oracleValue->minute, &oracleValue->second, &fsecond, error) < 0) { + dpiOci__descriptorFree(timestamp, DPI_OCI_DTYPE_TIMESTAMP); + return DPI_FAILURE; + } + + dpiOci__descriptorFree(timestamp, DPI_OCI_DTYPE_TIMESTAMP); + return DPI_SUCCESS; +} + + //----------------------------------------------------------------------------- // dpiDataBuffer__toOracleIntervalDS() [INTERNAL] // Populate the data in an OCIInterval structure (days/seconds).
@@ -815,4 +897,3 @@ void dpiData_setUint64(dpiData *data, uint64_t value) data->isNull = 0; data->value.asUint64 = value; } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiDebug.c b/vendor/github.com/godror/godror/odpi/src/dpiDebug.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiDebug.c rename to vendor/github.com/godror/godror/odpi/src/dpiDebug.c index 6188d7d84a7d..28b8776475f5 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiDebug.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiDebug.c @@ -181,4 +181,3 @@ void dpiDebug__print(const char *format, ...) (void) vfprintf(dpiDebugStream, formatWithPrefix, varArgs); va_end(varArgs); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiDeqOptions.c b/vendor/github.com/godror/godror/odpi/src/dpiDeqOptions.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiDeqOptions.c rename to vendor/github.com/godror/godror/odpi/src/dpiDeqOptions.c index 309eb05e5a63..35cb84727ae5 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiDeqOptions.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiDeqOptions.c @@ -60,7 +60,7 @@ static int dpiDeqOptions__getAttrValue(dpiDeqOptions *options, dpiError error; int status; - if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, fnName, 1, + if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, fnName, &error) < 0) return dpiGen__endPublicFn(options, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(options, value) @@ -82,7 +82,7 @@ static int dpiDeqOptions__setAttrValue(dpiDeqOptions *options, dpiError error; int status; - if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, fnName, 1, + if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, fnName, &error) < 0) return dpiGen__endPublicFn(options, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(options, value) @@ -162,7 +162,7 @@ int dpiDeqOptions_getMsgId(dpiDeqOptions *options, const char **value, dpiError error; void *rawValue; - if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, __func__, 1, + if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, __func__, &error) < 0) return dpiGen__endPublicFn(options, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(options, value) @@ -309,7 +309,7 @@ int dpiDeqOptions_setMsgId(dpiDeqOptions *options, const char *value, dpiError error; int status; - if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, __func__, 1, + if (dpiGen__startPublicFn(options, DPI_HTYPE_DEQ_OPTIONS, __func__, &error) < 0) return dpiGen__endPublicFn(options, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(options, value) @@ -367,4 +367,3 @@ int dpiDeqOptions_setWait(dpiDeqOptions *options, uint32_t value) return dpiDeqOptions__setAttrValue(options, DPI_OCI_ATTR_WAIT, __func__, &value, 0); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiEnqOptions.c b/vendor/github.com/godror/godror/odpi/src/dpiEnqOptions.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiEnqOptions.c rename to vendor/github.com/godror/godror/odpi/src/dpiEnqOptions.c index 8b073188c33c..24bf60a3229a 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiEnqOptions.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiEnqOptions.c @@ -60,7 +60,7 @@ static int dpiEnqOptions__getAttrValue(dpiEnqOptions *options, dpiError error; int status; - if (dpiGen__startPublicFn(options, DPI_HTYPE_ENQ_OPTIONS, fnName, 1, + if (dpiGen__startPublicFn(options, DPI_HTYPE_ENQ_OPTIONS, fnName, &error) < 0) return dpiGen__endPublicFn(options, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(options, value) @@ 
-82,7 +82,7 @@ static int dpiEnqOptions__setAttrValue(dpiEnqOptions *options, dpiError error; int status; - if (dpiGen__startPublicFn(options, DPI_HTYPE_ENQ_OPTIONS, fnName, 1, + if (dpiGen__startPublicFn(options, DPI_HTYPE_ENQ_OPTIONS, fnName, &error) < 0) return dpiGen__endPublicFn(options, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(options, value) @@ -171,4 +171,3 @@ int dpiEnqOptions_setVisibility(dpiEnqOptions *options, dpiVisibility value) return dpiEnqOptions__setAttrValue(options, DPI_OCI_ATTR_VISIBILITY, __func__, &value, 0); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiEnv.c b/vendor/github.com/godror/godror/odpi/src/dpiEnv.c similarity index 68% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiEnv.c rename to vendor/github.com/godror/godror/odpi/src/dpiEnv.c index e7dff7730a29..c1a2c3f7d615 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiEnv.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiEnv.c @@ -24,7 +24,7 @@ void dpiEnv__free(dpiEnv *env, dpiError *error) { if (env->threaded) dpiMutex__destroy(env->mutex); - if (env->handle) { + if (env->handle && !env->externalHandle) { dpiOci__handleFree(env->handle, DPI_OCI_HTYPE_ENV); env->handle = NULL; } @@ -67,52 +67,68 @@ int dpiEnv__getEncodingInfo(dpiEnv *env, dpiEncodingInfo *info) //----------------------------------------------------------------------------- // dpiEnv__init() [INTERNAL] -// Initialize the environment structure by creating the OCI environment and -// populating information about the environment. +// Initialize the environment structure. If an external handle is provided it +// is used directly; otherwise, a new OCI environment handle is created. In +// either case, information about the environment is stored for later use. //----------------------------------------------------------------------------- int dpiEnv__init(dpiEnv *env, const dpiContext *context, - const dpiCommonCreateParams *params, dpiError *error) + const dpiCommonCreateParams *params, void *externalHandle, + dpiError *error) { char timezoneBuffer[20]; size_t timezoneLength; - // lookup encoding - if (params->encoding && dpiGlobal__lookupCharSet(params->encoding, - &env->charsetId, error) < 0) - return DPI_FAILURE; + // store context and version information + env->context = context; + env->versionInfo = context->versionInfo; - // check for identical encoding before performing lookup - if (params->nencoding && params->encoding && - strcmp(params->nencoding, params->encoding) == 0) - env->ncharsetId = env->charsetId; - else if (params->nencoding && dpiGlobal__lookupCharSet(params->nencoding, - &env->ncharsetId, error) < 0) - return DPI_FAILURE; + // an external handle is available, use it directly + if (externalHandle) { + env->handle = externalHandle; + env->externalHandle = 1; - // both charsetId and ncharsetId must be zero or both must be non-zero - // use NLS routine to look up missing value, if needed - if (env->charsetId && !env->ncharsetId) { - if (dpiOci__nlsEnvironmentVariableGet(DPI_OCI_NLS_NCHARSET_ID, - &env->ncharsetId, error) < 0) - return DPI_FAILURE; - } else if (!env->charsetId && env->ncharsetId) { - if (dpiOci__nlsEnvironmentVariableGet(DPI_OCI_NLS_CHARSET_ID, + // otherwise, lookup encodings + } else { + + // lookup encoding + if (params->encoding && dpiGlobal__lookupCharSet(params->encoding, &env->charsetId, error) < 0) return DPI_FAILURE; - } - // create the new environment handle - env->context = context; - env->versionInfo = context->versionInfo; - if (dpiOci__envNlsCreate(&env->handle, 
params->createMode | DPI_OCI_OBJECT, - env->charsetId, env->ncharsetId, error) < 0) - return DPI_FAILURE; + // check for identical encoding before performing lookup of national + // character set encoding + if (params->nencoding && params->encoding && + strcmp(params->nencoding, params->encoding) == 0) + env->ncharsetId = env->charsetId; + else if (params->nencoding && + dpiGlobal__lookupCharSet(params->nencoding, + &env->ncharsetId, error) < 0) + return DPI_FAILURE; + + // both charsetId and ncharsetId must be zero or both must be non-zero + // use NLS routine to look up missing value, if needed + if (env->charsetId && !env->ncharsetId) { + if (dpiOci__nlsEnvironmentVariableGet(DPI_OCI_NLS_NCHARSET_ID, + &env->ncharsetId, error) < 0) + return DPI_FAILURE; + } else if (!env->charsetId && env->ncharsetId) { + if (dpiOci__nlsEnvironmentVariableGet(DPI_OCI_NLS_CHARSET_ID, + &env->charsetId, error) < 0) + return DPI_FAILURE; + } + + // create new environment handle + if (dpiOci__envNlsCreate(&env->handle, + params->createMode | DPI_OCI_OBJECT, + env->charsetId, env->ncharsetId, error) < 0) + return DPI_FAILURE; + + } - // create the error handle pool and acquire the first error handle + // create the error handle pool if (dpiHandlePool__create(&env->errorHandles, error) < 0) return DPI_FAILURE; - if (dpiEnv__initError(env, error) < 0) - return DPI_FAILURE; + error->env = env; // if threaded, create mutex for reference counts if (params->createMode & DPI_OCI_THREADED) @@ -162,28 +178,3 @@ int dpiEnv__init(dpiEnv *env, const dpiContext *context, return DPI_SUCCESS; } - - -//----------------------------------------------------------------------------- -// dpiEnv__initError() [INTERNAL] -// Retrieve the OCI error handle to use for error handling, from a pool of -// error handles common to the environment handle. The environment that was -// used to create the error handle is stored in the error structure so that -// the encoding and character set can be retrieved in the event of an OCI -// error (which uses the CHAR encoding of the environment). -//----------------------------------------------------------------------------- -int dpiEnv__initError(dpiEnv *env, dpiError *error) -{ - error->env = env; - if (dpiHandlePool__acquire(env->errorHandles, &error->handle, error) < 0) - return DPI_FAILURE; - - if (!error->handle) { - if (dpiOci__handleAlloc(env->handle, &error->handle, - DPI_OCI_HTYPE_ERROR, "allocate OCI error", error) < 0) - return DPI_FAILURE; - } - - return DPI_SUCCESS; -} - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiError.c b/vendor/github.com/godror/godror/odpi/src/dpiError.c similarity index 85% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiError.c rename to vendor/github.com/godror/godror/odpi/src/dpiError.c index c76070562cbf..aac57b92f183 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiError.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiError.c @@ -1,5 +1,5 @@ //----------------------------------------------------------------------------- -// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved. // This program is free software: you can modify it and/or redistribute it // under the terms of: // @@ -18,21 +18,125 @@ #include "dpiErrorMessages.h" //----------------------------------------------------------------------------- -// dpiError__check() [INTERNAL] -// Checks to see if the status of the last call resulted in an error -// condition. 
If so, the error is populated. Note that trailing newlines and -// spaces are truncated from the message if they exist. If the connection is -// not NULL a check is made to see if the connection is no longer viable. +// dpiError__getInfo() [INTERNAL] +// Get the error state from the error structure. Returns DPI_FAILURE as a +// convenience to the caller. +//----------------------------------------------------------------------------- +int dpiError__getInfo(dpiError *error, dpiErrorInfo *info) +{ + if (!info) + return DPI_FAILURE; + info->code = error->buffer->code; + info->offset = error->buffer->offset; + info->message = error->buffer->message; + info->messageLength = error->buffer->messageLength; + info->fnName = error->buffer->fnName; + info->action = error->buffer->action; + info->isRecoverable = error->buffer->isRecoverable; + info->encoding = error->buffer->encoding; + switch(info->code) { + case 12154: // TNS:could not resolve the connect identifier specified + info->sqlState = "42S02"; + break; + case 22: // invalid session ID; access denied + case 378: // buffer pools cannot be created as specified + case 602: // Internal programming exception + case 603: // ORACLE server session terminated by fatal error + case 604: // error occurred at recursive SQL level + case 609: // could not attach to incoming connection + case 1012: // not logged on + case 1033: // ORACLE initialization or shutdown in progress + case 1041: // internal error. hostdef extension doesn't exist + case 1043: // user side memory corruption + case 1089: // immediate shutdown or close in progress + case 1090: // shutdown in progress + case 1092: // ORACLE instance terminated. Disconnection forced + case 3113: // end-of-file on communication channel + case 3114: // not connected to ORACLE + case 3122: // attempt to close ORACLE-side window on user side + case 3135: // connection lost contact + case 12153: // TNS:not connected + case 27146: // post/wait initialization failed + case 28511: // lost RPC connection to heterogeneous remote agent + info->sqlState = "01002"; + break; + default: + if (error->buffer->code == 0 && + error->buffer->errorNum == (dpiErrorNum) 0) + info->sqlState = "00000"; + else info->sqlState = "HY000"; + break; + } + return DPI_FAILURE; +} + + +//----------------------------------------------------------------------------- +// dpiError__initHandle() [INTERNAL] +// Retrieve the OCI error handle to use for error handling, from a pool of +// error handles common to the environment handle stored on the error. This +// environment also controls the encoding of OCI errors (which uses the CHAR +// encoding of the environment). +//----------------------------------------------------------------------------- +int dpiError__initHandle(dpiError *error) +{ + if (dpiHandlePool__acquire(error->env->errorHandles, &error->handle, + error) < 0) + return DPI_FAILURE; + if (!error->handle) { + if (dpiOci__handleAlloc(error->env->handle, &error->handle, + DPI_OCI_HTYPE_ERROR, "allocate OCI error", error) < 0) + return DPI_FAILURE; + } + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiError__set() [INTERNAL] +// Set the error buffer to the specified DPI error. Returns DPI_FAILURE as a +// convenience to the caller. +//----------------------------------------------------------------------------- +int dpiError__set(dpiError *error, const char *action, dpiErrorNum errorNum, + ...) 
+{ + va_list varArgs; + + if (error) { + error->buffer->code = 0; + error->buffer->isRecoverable = 0; + error->buffer->offset = 0; + strcpy(error->buffer->encoding, DPI_CHARSET_NAME_UTF8); + error->buffer->action = action; + error->buffer->errorNum = errorNum; + va_start(varArgs, errorNum); + error->buffer->messageLength = + (uint32_t) vsnprintf(error->buffer->message, + sizeof(error->buffer->message), + dpiErrorMessages[errorNum - DPI_ERR_NO_ERR], varArgs); + va_end(varArgs); + if (dpiDebugLevel & DPI_DEBUG_LEVEL_ERRORS) + dpiDebug__print("internal error %.*s (%s / %s)\n", + error->buffer->messageLength, error->buffer->message, + error->buffer->fnName, action); + } + return DPI_FAILURE; +} + + +//----------------------------------------------------------------------------- +// dpiError__setFromOCI() [INTERNAL] +// Called when an OCI error has occurred and sets the error structure with +// the contents of that error. Note that trailing newlines and spaces are +// truncated from the message if they exist. If the connection is not NULL a +// check is made to see if the connection is no longer viable. The value +// DPI_FAILURE is returned as a convenience to the caller. //----------------------------------------------------------------------------- -int dpiError__check(dpiError *error, int status, dpiConn *conn, +int dpiError__setFromOCI(dpiError *error, int status, dpiConn *conn, const char *action) { uint32_t callTimeout; - // no error has taken place - if (status == DPI_OCI_SUCCESS || status == DPI_OCI_SUCCESS_WITH_INFO) - return DPI_SUCCESS; - // special error cases if (status == DPI_OCI_INVALID_HANDLE) return dpiError__set(error, action, DPI_ERR_INVALID_HANDLE, "OCI"); @@ -98,6 +202,7 @@ int dpiError__check(dpiError *error, int status, dpiConn *conn, conn->deadSession = 1; break; case 3136: // inbound connection timed out + case 3156: // OCI call timed out case 12161: // TNS:internal error: partial data received callTimeout = 0; if (conn->env->versionInfo->versionNum >= 18) @@ -115,90 +220,3 @@ int dpiError__check(dpiError *error, int status, dpiConn *conn, return DPI_FAILURE; } - - -//----------------------------------------------------------------------------- -// dpiError__getInfo() [INTERNAL] -// Get the error state from the error structure. Returns DPI_FAILURE as a -// convenience to the caller. -//----------------------------------------------------------------------------- -int dpiError__getInfo(dpiError *error, dpiErrorInfo *info) -{ - if (!info) - return DPI_FAILURE; - info->code = error->buffer->code; - info->offset = error->buffer->offset; - info->message = error->buffer->message; - info->messageLength = error->buffer->messageLength; - info->fnName = error->buffer->fnName; - info->action = error->buffer->action; - info->isRecoverable = error->buffer->isRecoverable; - info->encoding = error->buffer->encoding; - switch(info->code) { - case 12154: // TNS:could not resolve the connect identifier specified - info->sqlState = "42S02"; - break; - case 22: // invalid session ID; access denied - case 378: // buffer pools cannot be created as specified - case 602: // Internal programming exception - case 603: // ORACLE server session terminated by fatal error - case 604: // error occurred at recursive SQL level - case 609: // could not attach to incoming connection - case 1012: // not logged on - case 1033: // ORACLE initialization or shutdown in progress - case 1041: // internal error. 
hostdef extension doesn't exist - case 1043: // user side memory corruption - case 1089: // immediate shutdown or close in progress - case 1090: // shutdown in progress - case 1092: // ORACLE instance terminated. Disconnection forced - case 3113: // end-of-file on communication channel - case 3114: // not connected to ORACLE - case 3122: // attempt to close ORACLE-side window on user side - case 3135: // connection lost contact - case 12153: // TNS:not connected - case 27146: // post/wait initialization failed - case 28511: // lost RPC connection to heterogeneous remote agent - info->sqlState = "01002"; - break; - default: - if (error->buffer->code == 0 && - error->buffer->errorNum == (dpiErrorNum) 0) - info->sqlState = "00000"; - else info->sqlState = "HY000"; - break; - } - return DPI_FAILURE; -} - - -//----------------------------------------------------------------------------- -// dpiError__set() [INTERNAL] -// Set the error buffer to the specified DPI error. Returns DPI_FAILURE as a -// convenience to the caller. -//----------------------------------------------------------------------------- -int dpiError__set(dpiError *error, const char *action, dpiErrorNum errorNum, - ...) -{ - va_list varArgs; - - if (error) { - error->buffer->code = 0; - error->buffer->isRecoverable = 0; - error->buffer->offset = 0; - strcpy(error->buffer->encoding, DPI_CHARSET_NAME_UTF8); - error->buffer->action = action; - error->buffer->errorNum = errorNum; - va_start(varArgs, errorNum); - error->buffer->messageLength = - (uint32_t) vsnprintf(error->buffer->message, - sizeof(error->buffer->message), - dpiErrorMessages[errorNum - DPI_ERR_NO_ERR], varArgs); - va_end(varArgs); - if (dpiDebugLevel & DPI_DEBUG_LEVEL_ERRORS) - dpiDebug__print("internal error %.*s (%s / %s)\n", - error->buffer->messageLength, error->buffer->message, - error->buffer->fnName, action); - } - return DPI_FAILURE; -} - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiErrorMessages.h b/vendor/github.com/godror/godror/odpi/src/dpiErrorMessages.h similarity index 94% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiErrorMessages.h rename to vendor/github.com/godror/godror/odpi/src/dpiErrorMessages.h index fd87de212792..1de52a2b17bc 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiErrorMessages.h +++ b/vendor/github.com/godror/godror/odpi/src/dpiErrorMessages.h @@ -83,5 +83,8 @@ static const char* const dpiErrorMessages[DPI_ERR_MAX - DPI_ERR_NO_ERR] = { "DPI-1067: call timeout of %u ms exceeded with ORA-%d", // DPI_ERR_CALL_TIMEOUT "DPI-1068: SODA cursor was already closed", // DPI_ERR_SODA_CURSOR_CLOSED "DPI-1069: proxy user name must be enclosed in [] when using external authentication", // DPI_ERR_EXT_AUTH_INVALID_PROXY + "DPI-1070: no payload provided in message properties", // DPI_ERR_QUEUE_NO_PAYLOAD + "DPI-1071: payload type in message properties must match the payload type of the queue", // DPI_ERR_QUEUE_WRONG_PAYLOAD_TYPE + "DPI-1072: the Oracle Client library version is unsupported", // DPI_ERR_ORACLE_CLIENT_UNSUPPORTED + "DPI-1073: sharding key is required when specifying a super sharding key", // DPI_ERR_MISSING_SHARDING_KEY }; - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiGen.c b/vendor/github.com/godror/godror/odpi/src/dpiGen.c similarity index 95% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiGen.c rename to vendor/github.com/godror/godror/odpi/src/dpiGen.c index e6b65b9a65bf..f671adf2a21c 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiGen.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiGen.c @@ 
-133,6 +133,12 @@ static const dpiTypeDef dpiAllTypeDefs[DPI_HTYPE_MAX - DPI_HTYPE_NONE - 1] = { sizeof(dpiSodaDocCursor), // size of structure 0x80ceb83b, // check integer (dpiTypeFreeProc) dpiSodaDocCursor__free + }, + { + "dpiQueue", // name + sizeof(dpiQueue), // size of structure + 0x54904ba2, // check integer + (dpiTypeFreeProc) dpiQueue__free } }; @@ -145,7 +151,7 @@ int dpiGen__addRef(void *ptr, dpiHandleTypeNum typeNum, const char *fnName) { dpiError error; - if (dpiGen__startPublicFn(ptr, typeNum, fnName, 0, &error) < 0) + if (dpiGen__startPublicFn(ptr, typeNum, fnName, &error) < 0) return dpiGen__endPublicFn(ptr, DPI_FAILURE, &error); dpiGen__setRefCount(ptr, &error, 1); return dpiGen__endPublicFn(ptr, DPI_SUCCESS, &error); @@ -218,7 +224,7 @@ int dpiGen__endPublicFn(const void *ptr, int returnValue, dpiError *error) dpiDebug__print("fn end %s(%p) -> %d\n", error->buffer->fnName, ptr, returnValue); if (error->handle) - dpiHandlePool__release(error->env->errorHandles, error->handle, error); + dpiHandlePool__release(error->env->errorHandles, &error->handle); return returnValue; } @@ -235,7 +241,7 @@ int dpiGen__release(void *ptr, dpiHandleTypeNum typeNum, const char *fnName) { dpiError error; - if (dpiGen__startPublicFn(ptr, typeNum, fnName, 1, &error) < 0) + if (dpiGen__startPublicFn(ptr, typeNum, fnName, &error) < 0) return dpiGen__endPublicFn(ptr, DPI_FAILURE, &error); dpiGen__setRefCount(ptr, &error, -1); return dpiGen__endPublicFn(ptr, DPI_SUCCESS, &error); @@ -286,7 +292,7 @@ void dpiGen__setRefCount(void *ptr, dpiError *error, int increment) // all subsequent calls. //----------------------------------------------------------------------------- int dpiGen__startPublicFn(const void *ptr, dpiHandleTypeNum typeNum, - const char *fnName, int needErrorHandle, dpiError *error) + const char *fnName, dpiError *error) { dpiBaseType *value = (dpiBaseType*) ptr; @@ -296,8 +302,6 @@ int dpiGen__startPublicFn(const void *ptr, dpiHandleTypeNum typeNum, return DPI_FAILURE; if (dpiGen__checkHandle(ptr, typeNum, "check main handle", error) < 0) return DPI_FAILURE; - if (needErrorHandle && dpiEnv__initError(value->env, error) < 0) - return DPI_FAILURE; + error->env = value->env; return DPI_SUCCESS; } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiGlobal.c b/vendor/github.com/godror/godror/odpi/src/dpiGlobal.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiGlobal.c rename to vendor/github.com/godror/godror/odpi/src/dpiGlobal.c index c57dd557848b..6599d8704210 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiGlobal.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiGlobal.c @@ -289,4 +289,3 @@ int dpiGlobal__lookupEncoding(uint16_t charsetId, char *encoding, return DPI_SUCCESS; } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiHandleList.c b/vendor/github.com/godror/godror/odpi/src/dpiHandleList.c similarity index 98% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiHandleList.c rename to vendor/github.com/godror/godror/odpi/src/dpiHandleList.c index 920a2c628fe2..2f3864067bb0 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiHandleList.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiHandleList.c @@ -43,7 +43,7 @@ int dpiHandleList__addHandle(dpiHandleList *list, void *handle, list->handles = tempHandles; list->numSlots = numSlots; *slotNum = list->numUsedSlots++; - list->currentPos = list->numUsedSlots + 1; + list->currentPos = list->numUsedSlots; } else { for (i = 0; i < list->numSlots; i++) { if (!list->handles[list->currentPos]) @@ -114,4 
+114,3 @@ void dpiHandleList__removeHandle(dpiHandleList *list, uint32_t slotNum) list->numUsedSlots--; dpiMutex__release(list->mutex); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiHandlePool.c b/vendor/github.com/godror/godror/odpi/src/dpiHandlePool.c similarity index 94% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiHandlePool.c rename to vendor/github.com/godror/godror/odpi/src/dpiHandlePool.c index aab33c86f2f9..456f094f136b 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiHandlePool.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiHandlePool.c @@ -105,14 +105,15 @@ void dpiHandlePool__free(dpiHandlePool *pool) // dpiHandlePool__release() [INTERNAL] // Release a handle back to the pool. No checks are performed on the handle // that is being returned to the pool; It will simply be placed back in the -// pool. +// pool. The handle is then NULLed in order to avoid multiple attempts to +// release the handle back to the pool. //----------------------------------------------------------------------------- -void dpiHandlePool__release(dpiHandlePool *pool, void *handle, dpiError *error) +void dpiHandlePool__release(dpiHandlePool *pool, void **handle) { dpiMutex__acquire(pool->mutex); - pool->handles[pool->releasePos++] = handle; + pool->handles[pool->releasePos++] = *handle; + *handle = NULL; if (pool->releasePos == pool->numSlots) pool->releasePos = 0; dpiMutex__release(pool->mutex); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiImpl.h b/vendor/github.com/godror/godror/odpi/src/dpiImpl.h similarity index 92% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiImpl.h rename to vendor/github.com/godror/godror/odpi/src/dpiImpl.h index 101846dc1da8..ec2ede139554 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiImpl.h +++ b/vendor/github.com/godror/godror/odpi/src/dpiImpl.h @@ -153,6 +153,7 @@ extern unsigned long dpiDebugLevel; // define values used for getting/setting OCI attributes #define DPI_OCI_ATTR_DATA_SIZE 1 #define DPI_OCI_ATTR_DATA_TYPE 2 +#define DPI_OCI_ATTR_ENV 5 #define DPI_OCI_ATTR_PRECISION 5 #define DPI_OCI_ATTR_SCALE 6 #define DPI_OCI_ATTR_NAME 4 @@ -165,6 +166,7 @@ extern unsigned long dpiDebugLevel; #define DPI_OCI_ATTR_ROW_COUNT 9 #define DPI_OCI_ATTR_PREFETCH_ROWS 11 #define DPI_OCI_ATTR_PARAM_COUNT 18 +#define DPI_OCI_ATTR_ROWID 19 #define DPI_OCI_ATTR_USERNAME 22 #define DPI_OCI_ATTR_PASSWORD 23 #define DPI_OCI_ATTR_STMT_TYPE 24 @@ -219,6 +221,7 @@ extern unsigned long dpiDebugLevel; #define DPI_OCI_ATTR_NUM_TYPE_ATTRS 228 #define DPI_OCI_ATTR_SUBSCR_CQ_QOSFLAGS 229 #define DPI_OCI_ATTR_LIST_TYPE_ATTRS 229 +#define DPI_OCI_ATTR_SUBSCR_CQ_REGID 230 #define DPI_OCI_ATTR_SUBSCR_NTFN_GROUPING_CLASS 231 #define DPI_OCI_ATTR_SUBSCR_NTFN_GROUPING_VALUE 232 #define DPI_OCI_ATTR_SUBSCR_NTFN_GROUPING_TYPE 233 @@ -263,6 +266,7 @@ extern unsigned long dpiDebugLevel; #define DPI_OCI_ATTR_CONNECTION_CLASS 425 #define DPI_OCI_ATTR_PURITY 426 #define DPI_OCI_ATTR_RECEIVE_TIMEOUT 436 +#define DPI_OCI_ATTR_LOBPREFETCH_LENGTH 440 #define DPI_OCI_ATTR_SUBSCR_IPADDR 452 #define DPI_OCI_ATTR_UB8_ROW_COUNT 457 #define DPI_OCI_ATTR_SPOOL_AUTH 460 @@ -294,6 +298,7 @@ extern unsigned long dpiDebugLevel; #define DPI_OCI_ATTR_SODA_SKIP 577 #define DPI_OCI_ATTR_SODA_LIMIT 578 #define DPI_OCI_ATTR_SODA_DOC_COUNT 593 +#define DPI_OCI_ATTR_SPOOL_MAX_PER_SHARD 602 // define OCI object type constants #define DPI_OCI_OTYPE_NAME 1 @@ -335,6 +340,11 @@ extern unsigned long dpiDebugLevel; #define DPI_OCI_TYPECODE_SMALLINT 246 #define DPI_SQLT_REC 250 #define DPI_SQLT_BOL 252 
+#define DPI_OCI_TYPECODE_ROWID 262 +#define DPI_OCI_TYPECODE_LONG 263 +#define DPI_OCI_TYPECODE_LONG_RAW 264 +#define DPI_OCI_TYPECODE_BINARY_INTEGER 265 +#define DPI_OCI_TYPECODE_PLS_INTEGER 266 // define session pool constants #define DPI_OCI_SPD_FORCE 0x0001 @@ -428,6 +438,7 @@ extern unsigned long dpiDebugLevel; #define DPI_OCI_SODA_COLL_CREATE_MAP 0x00010000 #define DPI_OCI_SODA_INDEX_DROP_FORCE 0x00010000 #define DPI_OCI_TRANS_TWOPHASE 0x01000000 +#define DPI_OCI_SECURE_NOTIFICATION 0x20000000 //----------------------------------------------------------------------------- // Macros @@ -519,6 +530,10 @@ typedef enum { DPI_ERR_CALL_TIMEOUT, DPI_ERR_SODA_CURSOR_CLOSED, DPI_ERR_EXT_AUTH_INVALID_PROXY, + DPI_ERR_QUEUE_NO_PAYLOAD, + DPI_ERR_QUEUE_WRONG_PAYLOAD_TYPE, + DPI_ERR_ORACLE_CLIENT_UNSUPPORTED, + DPI_ERR_MISSING_SHARDING_KEY, DPI_ERR_MAX } dpiErrorNum; @@ -544,6 +559,7 @@ typedef enum { DPI_HTYPE_SODA_DB, DPI_HTYPE_SODA_DOC, DPI_HTYPE_SODA_DOC_CURSOR, + DPI_HTYPE_QUEUE, DPI_HTYPE_MAX } dpiHandleTypeNum; @@ -587,6 +603,25 @@ typedef struct { uint32_t maxLifetimeSession; } dpiPoolCreateParams__v30; +// structure used for creating pools (3.2) +typedef struct { + uint32_t minSessions; + uint32_t maxSessions; + uint32_t sessionIncrement; + int pingInterval; + int pingTimeout; + int homogeneous; + int externalAuth; + dpiPoolGetMode getMode; + const char *outPoolName; + uint32_t outPoolNameLength; + uint32_t timeout; + uint32_t waitTimeout; + uint32_t maxLifetimeSession; + const char *plsqlFixupCallback; + uint32_t plsqlFixupCallbackLength; +} dpiPoolCreateParams__v32; + // structure used for creating connections (3.0) typedef struct { dpiAuthMode authMode; @@ -612,6 +647,49 @@ typedef struct { uint8_t numSuperShardingKeyColumns; } dpiConnCreateParams__v30; +// structure used for creating subscriptions (3.0 and 3.1) +typedef struct { + dpiSubscrNamespace subscrNamespace; + dpiSubscrProtocol protocol; + dpiSubscrQOS qos; + dpiOpCode operations; + uint32_t portNumber; + uint32_t timeout; + const char *name; + uint32_t nameLength; + dpiSubscrCallback callback; + void *callbackContext; + const char *recipientName; + uint32_t recipientNameLength; + const char *ipAddress; + uint32_t ipAddressLength; + uint8_t groupingClass; + uint32_t groupingValue; + uint8_t groupingType; +} dpiSubscrCreateParams__v30; + +// structure used for creating subscriptions (3.2) +typedef struct { + dpiSubscrNamespace subscrNamespace; + dpiSubscrProtocol protocol; + dpiSubscrQOS qos; + dpiOpCode operations; + uint32_t portNumber; + uint32_t timeout; + const char *name; + uint32_t nameLength; + dpiSubscrCallback callback; + void *callbackContext; + const char *recipientName; + uint32_t recipientNameLength; + const char *ipAddress; + uint32_t ipAddressLength; + uint8_t groupingClass; + uint32_t groupingValue; + uint8_t groupingType; + uint64_t outRegId; +} dpiSubscrCreateParams__v32; + //----------------------------------------------------------------------------- // OCI type definitions @@ -632,6 +710,17 @@ typedef struct { uint8_t second; } dpiOciDate; +// alternative representation of OCI Date type used for sharding +typedef struct { + uint8_t century; + uint8_t year; + uint8_t month; + uint8_t day; + uint8_t hour; + uint8_t minute; + uint8_t second; +} dpiShardingOciDate; + // representation of OCI XID type (two-phase commit) typedef struct { long formatID; @@ -707,6 +796,7 @@ typedef struct { void *baseDate; // midnight, January 1, 1970 int threaded; // threaded mode enabled? int events; // events mode enabled? 
+ int externalHandle; // external handle? } dpiEnv; // used to manage all errors that take place in the library; the implementation @@ -816,6 +906,7 @@ typedef union { char *asBytes; float *asFloat; double *asDouble; + int32_t *asInt32; int64_t *asInt64; uint64_t *asUint64; dpiOciNumber *asNumber; @@ -836,6 +927,7 @@ typedef union { // buffers to Oracle when values are being transferred to or from the Oracle // database typedef union { + int32_t asInt32; int64_t asInt64; uint64_t asUint64; float asFloat; @@ -869,6 +961,18 @@ typedef struct { dpiOracleData data; // Oracle data buffers (internal only) } dpiVarBuffer; +// represents memory areas used for enqueuing and dequeuing messages from +// queues +typedef struct { + uint32_t numElements; // number of elements in next arrays + dpiMsgProps **props; // array of dpiMsgProps handles + void **handles; // array of OCI msg prop handles + void **instances; // array of instances + void **indicators; // array of indicators + int16_t *rawIndicators; // array of indicators (RAW queues) + void **msgIds; // array of OCI message ids +} dpiQueueBuffer; + //----------------------------------------------------------------------------- // External implementation type definitions @@ -900,8 +1004,11 @@ struct dpiConn { void *handle; // OCI service context handle void *serverHandle; // OCI server handle void *sessionHandle; // OCI session handle + void *shardingKey; // OCI sharding key descriptor + void *superShardingKey; // OCI supper sharding key descriptor const char *releaseString; // cached release string or NULL uint32_t releaseStringLength; // cached release string length or 0 + void *rawTDO; // cached RAW TDO dpiVersionInfo versionInfo; // Oracle database version info uint32_t commitMode; // commit mode (for two-phase commits) uint16_t charsetId; // database character set ID @@ -932,6 +1039,7 @@ struct dpiStmt { dpiConn *conn; // connection which created this uint32_t openSlotNum; // slot in connection handle list void *handle; // OCI statement handle + dpiStmt *parentStmt; // parent statement (implicit results) uint32_t fetchArraySize; // rows to fetch each time uint32_t bufferRowCount; // number of rows in fetch buffers uint32_t bufferRowIndex; // index into buffers for current row @@ -946,6 +1054,7 @@ struct dpiStmt { uint64_t rowCount; // rows affected or rows fetched so far uint64_t bufferMinRow; // row num of first row in buffers uint16_t statementType; // type of statement + dpiRowid *lastRowid; // rowid of last affected row int isOwned; // owned by structure? int hasRowsToFetch; // potentially more rows to fetch? int scrollable; // scrollable cursor? @@ -1047,10 +1156,12 @@ struct dpiSubscr { dpiType_HEAD dpiConn *conn; // connection which created this void *handle; // OCI subscription handle + dpiMutexType mutex; // enables thread safety dpiSubscrNamespace subscrNamespace; // OCI namespace dpiSubscrQOS qos; // quality of service flags dpiSubscrCallback callback; // callback when event is propagated void *callbackContext; // context pointer for callback + int clientInitiated; // client initiated? int registered; // registered with database? 
}; @@ -1072,15 +1183,16 @@ struct dpiEnqOptions { void *handle; // OCI enqueue options handle }; -// represents the available properties for message when using advanced queuing +// represents the available properties for messages when using advanced queuing // and is exposed publicly as a handle of type DPI_HTYPE_MSG_PROPS; the // implementation for this is found in the file dpiMsgProps.c struct dpiMsgProps { dpiType_HEAD dpiConn *conn; // connection which created this void *handle; // OCI message properties handle - char *buffer; // latest message ID en/dequeued - uint32_t bufferLength; // size of allocated buffer + dpiObject *payloadObj; // payload (object) + void *payloadRaw; // payload (RAW) + void *msgIdRaw; // message ID (RAW) }; // represents SODA collections and is exposed publicly as a handle of type @@ -1129,6 +1241,19 @@ struct dpiSodaDocCursor { void *handle; // OCI SODA document cursor handle }; +// represents a queue used in AQ (advanced queuing) and is exposed publicly as +// a handle of type DPI_HTYPE_QUEUE; the implementation for this is found in +// the file dpiQueue.c +struct dpiQueue { + dpiType_HEAD + dpiConn *conn; // connection which created this + const char *name; // name of the queue (NULL-terminated) + dpiObjectType *payloadType; // object type (for object payloads) + dpiDeqOptions *deqOptions; // dequeue options + dpiEnqOptions *enqOptions; // enqueue options + dpiQueueBuffer buffer; // buffer area +}; + //----------------------------------------------------------------------------- // definition of internal dpiContext methods @@ -1145,6 +1270,8 @@ void dpiContext__initSubscrCreateParams(dpiSubscrCreateParams *params); //----------------------------------------------------------------------------- int dpiDataBuffer__fromOracleDate(dpiDataBuffer *data, dpiOciDate *oracleValue); +int dpiDataBuffer__fromOracleDateAsDouble(dpiDataBuffer *data, + dpiEnv *env, dpiError *error, dpiOciDate *oracleValue); int dpiDataBuffer__fromOracleIntervalDS(dpiDataBuffer *data, dpiEnv *env, dpiError *error, void *oracleValue); int dpiDataBuffer__fromOracleIntervalYM(dpiDataBuffer *data, dpiEnv *env, @@ -1162,6 +1289,8 @@ int dpiDataBuffer__fromOracleTimestamp(dpiDataBuffer *data, dpiEnv *env, int dpiDataBuffer__fromOracleTimestampAsDouble(dpiDataBuffer *data, dpiEnv *env, dpiError *error, void *oracleValue); int dpiDataBuffer__toOracleDate(dpiDataBuffer *data, dpiOciDate *oracleValue); +int dpiDataBuffer__toOracleDateFromDouble(dpiDataBuffer *data, dpiEnv *env, + dpiError *error, dpiOciDate *oracleValue); int dpiDataBuffer__toOracleIntervalDS(dpiDataBuffer *data, dpiEnv *env, dpiError *error, void *oracleValue); int dpiDataBuffer__toOracleIntervalYM(dpiDataBuffer *data, dpiEnv *env, @@ -1185,19 +1314,20 @@ int dpiDataBuffer__toOracleTimestampFromDouble(dpiDataBuffer *data, //----------------------------------------------------------------------------- void dpiEnv__free(dpiEnv *env, dpiError *error); int dpiEnv__init(dpiEnv *env, const dpiContext *context, - const dpiCommonCreateParams *params, dpiError *error); + const dpiCommonCreateParams *params, void *externalHandle, + dpiError *error); int dpiEnv__getEncodingInfo(dpiEnv *env, dpiEncodingInfo *info); -int dpiEnv__initError(dpiEnv *env, dpiError *error); //----------------------------------------------------------------------------- // definition of internal dpiError methods //----------------------------------------------------------------------------- -int dpiError__check(dpiError *error, int status, dpiConn *conn, - const char 
*action); int dpiError__getInfo(dpiError *error, dpiErrorInfo *info); +int dpiError__initHandle(dpiError *error); int dpiError__set(dpiError *error, const char *context, dpiErrorNum errorNum, ...); +int dpiError__setFromOCI(dpiError *error, int status, dpiConn *conn, + const char *action); //----------------------------------------------------------------------------- @@ -1212,7 +1342,7 @@ int dpiGen__endPublicFn(const void *ptr, int returnValue, dpiError *error); int dpiGen__release(void *ptr, dpiHandleTypeNum typeNum, const char *fnName); void dpiGen__setRefCount(void *ptr, dpiError *error, int increment); int dpiGen__startPublicFn(const void *ptr, dpiHandleTypeNum typeNum, - const char *fnName, int needErrorHandle, dpiError *error); + const char *fnName, dpiError *error); //----------------------------------------------------------------------------- @@ -1245,6 +1375,7 @@ int dpiConn__create(dpiConn *conn, const dpiContext *context, const dpiCommonCreateParams *commonParams, dpiConnCreateParams *createParams, dpiError *error); void dpiConn__free(dpiConn *conn, dpiError *error); +int dpiConn__getRawTDO(dpiConn *conn, dpiError *error); int dpiConn__getServerVersion(dpiConn *conn, dpiError *error); @@ -1408,15 +1539,28 @@ int dpiSodaDocCursor__allocate(dpiSodaColl *coll, void *handle, void dpiSodaDocCursor__free(dpiSodaDocCursor *cursor, dpiError *error); +//----------------------------------------------------------------------------- +// definition of internal dpiQueue methods +//----------------------------------------------------------------------------- +int dpiQueue__allocate(dpiConn *conn, const char *name, uint32_t nameLength, + dpiObjectType *payloadType, dpiQueue **queue, dpiError *error); +void dpiQueue__free(dpiQueue *queue, dpiError *error); + + //----------------------------------------------------------------------------- // definition of internal dpiOci methods //----------------------------------------------------------------------------- int dpiOci__aqDeq(dpiConn *conn, const char *queueName, void *options, void *msgProps, void *payloadType, void **payload, void **payloadInd, void **msgId, dpiError *error); +int dpiOci__aqDeqArray(dpiConn *conn, const char *queueName, void *options, + uint32_t *numIters, void **msgProps, void *payloadType, void **payload, void **payloadInd, void **msgId, dpiError *error); int dpiOci__aqEnq(dpiConn *conn, const char *queueName, void *options, void *msgProps, void *payloadType, void **payload, void **payloadInd, void **msgId, dpiError *error); +int dpiOci__aqEnqArray(dpiConn *conn, const char *queueName, void *options, + uint32_t *numIters, void **msgProps, void *payloadType, void **payload, + void **payloadInd, void **msgId, dpiError *error); int dpiOci__arrayDescriptorAlloc(void *envHandle, void **handle, uint32_t handleType, uint32_t arraySize, dpiError *error); int dpiOci__arrayDescriptorFree(void **handle, uint32_t handleType); @@ -1456,6 +1600,8 @@ int dpiOci__dateTimeConstruct(void *envHandle, void *handle, int16_t year, uint8_t month, uint8_t day, uint8_t hour, uint8_t minute, uint8_t second, uint32_t fsecond, const char *tz, size_t tzLength, dpiError *error); +int dpiOci__dateTimeConvert(void *envHandle, void *inDate, void *outDate, + dpiError *error); int dpiOci__dateTimeGetDate(void *envHandle, void *handle, int16_t *year, uint8_t *month, uint8_t *day, dpiError *error); int dpiOci__dateTimeGetTime(void *envHandle, void *handle, uint8_t *hour, @@ -1546,7 +1692,8 @@ int dpiOci__numberToInt(void *number, void *value, unsigned int 
valueLength, int dpiOci__numberToReal(double *value, void *number, dpiError *error); int dpiOci__objectCopy(dpiObject *obj, void *sourceInstance, void *sourceIndicator, dpiError *error); -int dpiOci__objectFree(dpiObject *obj, int checkError, dpiError *error); +int dpiOci__objectFree(void *envHandle, void *data, int checkError, + dpiError *error); int dpiOci__objectGetAttr(dpiObject *obj, dpiObjectAttr *attr, int16_t *scalarValueIndicator, void **valueIndicator, void **value, void **tdo, dpiError *error); @@ -1599,7 +1746,7 @@ int dpiOci__sodaBulkInsert(dpiSodaColl *coll, void **documents, uint32_t numDocuments, void *outputOptions, uint32_t mode, dpiError *error); int dpiOci__sodaBulkInsertAndGet(dpiSodaColl *coll, void **documents, - uint32_t *numDocuments, void *outputOptions, uint32_t mode, + uint32_t numDocuments, void *outputOptions, uint32_t mode, dpiError *error); int dpiOci__sodaCollCreateWithMetadata(dpiSodaDb *db, const char *name, uint32_t nameLength, const char *metadata, uint32_t metadataLength, @@ -1662,7 +1809,7 @@ int dpiOci__stringPtr(void *envHandle, void *handle, char **ptr); int dpiOci__stringResize(void *envHandle, void **handle, uint32_t newSize, dpiError *error); int dpiOci__stringSize(void *envHandle, void *handle, uint32_t *size); -int dpiOci__subscriptionRegister(dpiConn *conn, void **handle, +int dpiOci__subscriptionRegister(dpiConn *conn, void **handle, uint32_t mode, dpiError *error); int dpiOci__subscriptionUnRegister(dpiConn *conn, dpiSubscr *subscr, dpiError *error); @@ -1690,14 +1837,17 @@ int dpiOci__transRollback(dpiConn *conn, int checkError, dpiError *error); int dpiOci__transStart(dpiConn *conn, dpiError *error); int dpiOci__typeByFullName(dpiConn *conn, const char *name, uint32_t nameLength, void **tdo, dpiError *error); +int dpiOci__typeByName(dpiConn *conn, const char *schema, + uint32_t schemaLength, const char *name, uint32_t nameLength, + void **tdo, dpiError *error); //----------------------------------------------------------------------------- // definition of internal dpiMsgProps methods //----------------------------------------------------------------------------- -int dpiMsgProps__create(dpiMsgProps *props, dpiConn *conn, dpiError *error); -int dpiMsgProps__extractMsgId(dpiMsgProps *props, void *ociRaw, - const char **msgId, uint32_t *msgIdLength, dpiError *error); +int dpiMsgProps__allocate(dpiConn *conn, dpiMsgProps **props, dpiError *error); +void dpiMsgProps__extractMsgId(dpiMsgProps *props, const char **msgId, + uint32_t *msgIdLength); void dpiMsgProps__free(dpiMsgProps *props, dpiError *error); @@ -1708,8 +1858,7 @@ int dpiHandlePool__acquire(dpiHandlePool *pool, void **handle, dpiError *error); int dpiHandlePool__create(dpiHandlePool **pool, dpiError *error); void dpiHandlePool__free(dpiHandlePool *pool); -void dpiHandlePool__release(dpiHandlePool *pool, void *handle, - dpiError *error); +void dpiHandlePool__release(dpiHandlePool *pool, void **handle); //----------------------------------------------------------------------------- @@ -1754,4 +1903,3 @@ void dpiDebug__initialize(void); void dpiDebug__print(const char *format, ...); #endif - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiLob.c b/vendor/github.com/godror/godror/odpi/src/dpiLob.c similarity index 94% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiLob.c rename to vendor/github.com/godror/godror/odpi/src/dpiLob.c index 310953fc6746..55dfcebad6e2 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiLob.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiLob.c @@ 
-53,10 +53,9 @@ int dpiLob__allocate(dpiConn *conn, const dpiOracleType *type, dpiLob **lob, // dpiLob__check() [INTERNAL] // Check that the LOB is valid and get an error handle for subsequent calls. //----------------------------------------------------------------------------- -static int dpiLob__check(dpiLob *lob, const char *fnName, int needErrorHandle, - dpiError *error) +static int dpiLob__check(dpiLob *lob, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(lob, DPI_HTYPE_LOB, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(lob, DPI_HTYPE_LOB, fnName, error) < 0) return DPI_FAILURE; if (!lob->locator) return dpiError__set(error, "check closed", DPI_ERR_LOB_CLOSED); @@ -210,7 +209,7 @@ int dpiLob_close(dpiLob *lob) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); status = dpiLob__close(lob, 1, &error); return dpiGen__endPublicFn(lob, status, &error); @@ -226,7 +225,7 @@ int dpiLob_closeResource(dpiLob *lob) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); status = dpiOci__lobClose(lob, &error); return dpiGen__endPublicFn(lob, status, &error); @@ -242,7 +241,7 @@ int dpiLob_copy(dpiLob *lob, dpiLob **copiedLob) dpiLob *tempLob; dpiError error; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, copiedLob) if (dpiLob__allocate(lob->conn, lob->type, &tempLob, &error) < 0) @@ -266,7 +265,7 @@ int dpiLob_getBufferSize(dpiLob *lob, uint64_t sizeInChars, { dpiError error; - if (dpiLob__check(lob, __func__, 0, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, sizeInBytes) if (lob->type->oracleTypeNum == DPI_ORACLE_TYPE_CLOB) @@ -287,7 +286,7 @@ int dpiLob_getChunkSize(dpiLob *lob, uint32_t *size) dpiError error; int status; - if (dpiLob__check(lob, __func__, 0, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, size) status = dpiOci__lobGetChunkSize(lob, size, &error); @@ -307,7 +306,7 @@ int dpiLob_getDirectoryAndFileName(dpiLob *lob, const char **directoryAlias, dpiError error; // validate parameters - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, directoryAlias) DPI_CHECK_PTR_NOT_NULL(lob, directoryAliasLength) @@ -344,7 +343,7 @@ int dpiLob_getFileExists(dpiLob *lob, int *exists) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, exists) status = dpiOci__lobFileExists(lob, exists, &error); @@ -361,7 +360,7 @@ int dpiLob_getIsResourceOpen(dpiLob *lob, int *isOpen) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, isOpen) status = dpiOci__lobIsOpen(lob, isOpen, &error); @@ -378,7 +377,7 @@ int dpiLob_getSize(dpiLob *lob, uint64_t *size) dpiError error; int 
status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, size) status = dpiOci__lobGetLength2(lob, size, &error); @@ -395,7 +394,7 @@ int dpiLob_openResource(dpiLob *lob) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); status = dpiOci__lobOpen(lob, &error); return dpiGen__endPublicFn(lob, status, &error); @@ -412,7 +411,7 @@ int dpiLob_readBytes(dpiLob *lob, uint64_t offset, uint64_t amount, dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, value) DPI_CHECK_PTR_NOT_NULL(lob, valueLength) @@ -443,7 +442,7 @@ int dpiLob_setDirectoryAndFileName(dpiLob *lob, const char *directoryAlias, dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, directoryAlias) DPI_CHECK_PTR_NOT_NULL(lob, fileName) @@ -463,9 +462,9 @@ int dpiLob_setFromBytes(dpiLob *lob, const char *value, uint64_t valueLength) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); - DPI_CHECK_PTR_NOT_NULL(lob, value) + DPI_CHECK_PTR_AND_LENGTH(lob, value) status = dpiLob__setFromBytes(lob, value, valueLength, &error); return dpiGen__endPublicFn(lob, status, &error); } @@ -480,7 +479,7 @@ int dpiLob_trim(dpiLob *lob, uint64_t newSize) dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); status = dpiOci__lobTrim2(lob, newSize, &error); return dpiGen__endPublicFn(lob, status, &error); @@ -497,10 +496,9 @@ int dpiLob_writeBytes(dpiLob *lob, uint64_t offset, const char *value, dpiError error; int status; - if (dpiLob__check(lob, __func__, 1, &error) < 0) + if (dpiLob__check(lob, __func__, &error) < 0) return dpiGen__endPublicFn(lob, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(lob, value) status = dpiOci__lobWrite2(lob, offset, value, valueLength, &error); return dpiGen__endPublicFn(lob, status, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiMsgProps.c b/vendor/github.com/godror/godror/odpi/src/dpiMsgProps.c similarity index 72% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiMsgProps.c rename to vendor/github.com/godror/godror/odpi/src/dpiMsgProps.c index 24260897b19e..518d2e7d20b7 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiMsgProps.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiMsgProps.c @@ -1,5 +1,5 @@ //----------------------------------------------------------------------------- -// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved. // This program is free software: you can modify it and/or redistribute it // under the terms of: // @@ -17,45 +17,40 @@ #include "dpiImpl.h" //----------------------------------------------------------------------------- -// dpiMsgProps__create() [INTERNAL] -// Create a new subscription structure and return it. 
In case of error NULL -// is returned. +// dpiMsgProps__allocate() [INTERNAL] +// Create a new message properties structure and return it. In case of error +// NULL is returned. //----------------------------------------------------------------------------- -int dpiMsgProps__create(dpiMsgProps *options, dpiConn *conn, dpiError *error) +int dpiMsgProps__allocate(dpiConn *conn, dpiMsgProps **props, dpiError *error) { + dpiMsgProps *tempProps; + + if (dpiGen__allocate(DPI_HTYPE_MSG_PROPS, conn->env, (void**) &tempProps, + error) < 0) + return DPI_FAILURE; dpiGen__setRefCount(conn, error, 1); - options->conn = conn; - return dpiOci__descriptorAlloc(conn->env->handle, &options->handle, - DPI_OCI_DTYPE_AQMSG_PROPERTIES, "allocate descriptor", error); + tempProps->conn = conn; + if (dpiOci__descriptorAlloc(conn->env->handle, &tempProps->handle, + DPI_OCI_DTYPE_AQMSG_PROPERTIES, "allocate descriptor", + error) < 0) { + dpiMsgProps__free(tempProps, error); + return DPI_FAILURE; + } + + *props = tempProps; + return DPI_SUCCESS; } //----------------------------------------------------------------------------- // dpiMsgProps__extractMsgId() [INTERNAL] -// Extract bytes from the OCIRaw value containing the message id and store -// them in allocated memory on the message properties instance. Then resize the -// OCIRaw value so the memory can be reclaimed. +// Extract bytes from the OCIRaw value containing the message id. //----------------------------------------------------------------------------- -int dpiMsgProps__extractMsgId(dpiMsgProps *props, void *ociRaw, - const char **msgId, uint32_t *msgIdLength, dpiError *error) +void dpiMsgProps__extractMsgId(dpiMsgProps *props, const char **msgId, + uint32_t *msgIdLength) { - const char *rawPtr; - - dpiOci__rawPtr(props->env->handle, ociRaw, (void**) &rawPtr); - dpiOci__rawSize(props->env->handle, ociRaw, msgIdLength); - if (*msgIdLength > props->bufferLength) { - if (props->buffer) { - dpiUtils__freeMemory(props->buffer); - props->buffer = NULL; - } - if (dpiUtils__allocateMemory(1, *msgIdLength, 0, - "allocate msgid buffer", (void**) &props->buffer, error) < 0) - return DPI_FAILURE; - } - memcpy(props->buffer, rawPtr, *msgIdLength); - *msgId = props->buffer; - dpiOci__rawResize(props->env->handle, &ociRaw, 0, error); - return DPI_SUCCESS; + dpiOci__rawPtr(props->env->handle, props->msgIdRaw, (void**) msgId); + dpiOci__rawSize(props->env->handle, props->msgIdRaw, msgIdLength); } @@ -69,14 +64,22 @@ void dpiMsgProps__free(dpiMsgProps *props, dpiError *error) dpiOci__descriptorFree(props->handle, DPI_OCI_DTYPE_AQMSG_PROPERTIES); props->handle = NULL; } + if (props->payloadObj) { + dpiGen__setRefCount(props->payloadObj, error, -1); + props->payloadObj = NULL; + } + if (props->payloadRaw) { + dpiOci__rawResize(props->env->handle, &props->payloadRaw, 0, error); + props->payloadRaw = NULL; + } + if (props->msgIdRaw) { + dpiOci__rawResize(props->env->handle, &props->msgIdRaw, 0, error); + props->msgIdRaw = NULL; + } if (props->conn) { dpiGen__setRefCount(props->conn, error, -1); props->conn = NULL; } - if (props->buffer) { - dpiUtils__freeMemory(props->buffer); - props->buffer = NULL; - } dpiUtils__freeMemory(props); } @@ -91,8 +94,7 @@ static int dpiMsgProps__getAttrValue(dpiMsgProps *props, uint32_t attribute, dpiError error; int status; - if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, fnName, 1, - &error) < 0) + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, fnName, &error) < 0) return dpiGen__endPublicFn(props, DPI_FAILURE, &error); 
DPI_CHECK_PTR_NOT_NULL(props, value) DPI_CHECK_PTR_NOT_NULL(props, valueLength) @@ -112,8 +114,7 @@ static int dpiMsgProps__setAttrValue(dpiMsgProps *props, uint32_t attribute, dpiError error; int status; - if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, fnName, 1, - &error) < 0) + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, fnName, &error) < 0) return dpiGen__endPublicFn(props, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(props, value) status = dpiOci__attrSet(props->handle, DPI_OCI_DTYPE_AQMSG_PROPERTIES, @@ -181,7 +182,7 @@ int dpiMsgProps_getEnqTime(dpiMsgProps *props, dpiTimestamp *value) dpiOciDate ociValue; dpiError error; - if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, 1, + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, &error) < 0) return dpiGen__endPublicFn(props, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(props, value) @@ -240,6 +241,32 @@ int dpiMsgProps_getNumAttempts(dpiMsgProps *props, int32_t *value) } +//----------------------------------------------------------------------------- +// dpiMsgProps_getMsgId() [PUBLIC] +// Return the message id for the message (available after enqueuing or +// dequeuing a message). +//----------------------------------------------------------------------------- +int dpiMsgProps_getMsgId(dpiMsgProps *props, const char **value, + uint32_t *valueLength) +{ + dpiError error; + + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, + &error) < 0) + return dpiGen__endPublicFn(props, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(props, value) + DPI_CHECK_PTR_NOT_NULL(props, valueLength) + if (!props->msgIdRaw) { + *value = NULL; + *valueLength = 0; + } else { + dpiOci__rawPtr(props->env->handle, props->msgIdRaw, (void**) value); + dpiOci__rawSize(props->env->handle, props->msgIdRaw, valueLength); + } + return dpiGen__endPublicFn(props, DPI_SUCCESS, &error); +} + + //----------------------------------------------------------------------------- // dpiMsgProps_getOriginalMsgId() [PUBLIC] // Return the original message id for the message. @@ -250,7 +277,7 @@ int dpiMsgProps_getOriginalMsgId(dpiMsgProps *props, const char **value, dpiError error; void *rawValue; - if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, 1, + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, &error) < 0) return dpiGen__endPublicFn(props, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(props, value) @@ -265,6 +292,36 @@ int dpiMsgProps_getOriginalMsgId(dpiMsgProps *props, const char **value, } +//----------------------------------------------------------------------------- +// dpiMsgProps_getPayload() [PUBLIC] +// Get the payload for the message (as an object or a series of bytes). 
+//----------------------------------------------------------------------------- +int dpiMsgProps_getPayload(dpiMsgProps *props, dpiObject **obj, + const char **value, uint32_t *valueLength) +{ + dpiError error; + + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, + &error) < 0) + return dpiGen__endPublicFn(props, DPI_FAILURE, &error); + if (obj) + *obj = props->payloadObj; + if (value && valueLength) { + if (props->payloadRaw) { + dpiOci__rawPtr(props->env->handle, props->payloadRaw, + (void**) value); + dpiOci__rawSize(props->env->handle, props->payloadRaw, + valueLength); + } else { + *value = NULL; + *valueLength = 0; + } + } + + return dpiGen__endPublicFn(props, DPI_SUCCESS, &error); +} + + //----------------------------------------------------------------------------- // dpiMsgProps_getPriority() [PUBLIC] // Return the priority of the message. @@ -359,7 +416,7 @@ int dpiMsgProps_setOriginalMsgId(dpiMsgProps *props, const char *value, dpiError error; int status; - if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, 1, + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, &error) < 0) return dpiGen__endPublicFn(props, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(props, value) @@ -374,6 +431,51 @@ int dpiMsgProps_setOriginalMsgId(dpiMsgProps *props, const char *value, } +//----------------------------------------------------------------------------- +// dpiMsgProps_setPayloadBytes() [PUBLIC] +// Set the payload for the message (as a series of bytes). +//----------------------------------------------------------------------------- +int dpiMsgProps_setPayloadBytes(dpiMsgProps *props, const char *value, + uint32_t valueLength) +{ + dpiError error; + int status; + + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, + &error) < 0) + return dpiGen__endPublicFn(props, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(props, value) + if (props->payloadRaw) { + dpiOci__rawResize(props->env->handle, &props->payloadRaw, 0, &error); + props->payloadRaw = NULL; + } + status = dpiOci__rawAssignBytes(props->env->handle, value, valueLength, + &props->payloadRaw, &error); + return dpiGen__endPublicFn(props, status, &error); +} + + +//----------------------------------------------------------------------------- +// dpiMsgProps_setPayloadObject() [PUBLIC] +// Set the payload for the message (as an object). +//----------------------------------------------------------------------------- +int dpiMsgProps_setPayloadObject(dpiMsgProps *props, dpiObject *obj) +{ + dpiError error; + + if (dpiGen__startPublicFn(props, DPI_HTYPE_MSG_PROPS, __func__, + &error) < 0) + return dpiGen__endPublicFn(props, DPI_FAILURE, &error); + if (dpiGen__checkHandle(obj, DPI_HTYPE_OBJECT, "check object", &error) < 0) + return dpiGen__endPublicFn(props, DPI_FAILURE, &error); + if (props->payloadObj) + dpiGen__setRefCount(props->payloadObj, &error, -1); + dpiGen__setRefCount(obj, &error, 1); + props->payloadObj = obj; + return dpiGen__endPublicFn(props, DPI_SUCCESS, &error); +} + + //----------------------------------------------------------------------------- // dpiMsgProps_setPriority() [PUBLIC] // Set the priority of the message. 
@@ -383,4 +485,3 @@ int dpiMsgProps_setPriority(dpiMsgProps *props, int32_t value) return dpiMsgProps__setAttrValue(props, DPI_OCI_ATTR_PRIORITY, __func__, &value, 0); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiObject.c b/vendor/github.com/godror/godror/odpi/src/dpiObject.c similarity index 87% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiObject.c rename to vendor/github.com/godror/godror/odpi/src/dpiObject.c index 001789ddf5a4..338c68aeeda2 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiObject.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiObject.c @@ -16,6 +16,10 @@ #include "dpiImpl.h" +// forward declarations of internal functions only used in this file +int dpiObject__closeHelper(dpiObject *obj, int checkError, dpiError *error); + + //----------------------------------------------------------------------------- // dpiObject__allocate() [INTERNAL] // Allocate and initialize an object structure. @@ -66,7 +70,7 @@ int dpiObject__allocate(dpiObjectType *objType, void *instance, static int dpiObject__check(dpiObject *obj, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(obj, DPI_HTYPE_OBJECT, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(obj, DPI_HTYPE_OBJECT, fnName, error) < 0) return DPI_FAILURE; return dpiConn__checkConnected(obj->type->conn, error); } @@ -94,7 +98,8 @@ static int dpiObject__checkIsCollection(dpiObject *obj, const char *fnName, // Clear the Oracle value after use. //----------------------------------------------------------------------------- static void dpiObject__clearOracleValue(dpiObject *obj, dpiError *error, - dpiOracleDataBuffer *buffer, dpiOracleTypeNum oracleTypeNum) + dpiOracleDataBuffer *buffer, dpiLob *lob, + dpiOracleTypeNum oracleTypeNum) { switch (oracleTypeNum) { case DPI_ORACLE_TYPE_CHAR: @@ -129,12 +134,9 @@ static void dpiObject__clearOracleValue(dpiObject *obj, dpiError *error, case DPI_ORACLE_TYPE_NCLOB: case DPI_ORACLE_TYPE_BLOB: case DPI_ORACLE_TYPE_BFILE: - if (buffer->asLobLocator) { - dpiOci__lobFreeTemporary(obj->type->conn, buffer->asLobLocator, - 0, error); - dpiOci__descriptorFree(buffer->asLobLocator, - DPI_OCI_DTYPE_LOB); - } + if (lob) + dpiGen__setRefCount(lob, error, -1); + break; default: break; }; @@ -171,7 +173,7 @@ int dpiObject__close(dpiObject *obj, int checkError, dpiError *error) // flag; again, this must be done while holding the lock (if in threaded // mode) in order to avoid race conditions! if (obj->instance && !obj->dependsOnObj) { - if (dpiOci__objectFree(obj, checkError, error) < 0) { + if (dpiObject__closeHelper(obj, checkError, error) < 0) { if (obj->env->threaded) dpiMutex__acquire(obj->env->mutex); obj->closing = 0; @@ -179,17 +181,33 @@ int dpiObject__close(dpiObject *obj, int checkError, dpiError *error) dpiMutex__release(obj->env->mutex); return DPI_FAILURE; } - if (!obj->type->conn->closing) - dpiHandleList__removeHandle(obj->type->conn->objects, - obj->openSlotNum); - obj->instance = NULL; - obj->indicator = NULL; } return DPI_SUCCESS; } +//----------------------------------------------------------------------------- +// dpiObject__closeHelper() [INTERNAL] +// Helper function for closing an object. 
+//----------------------------------------------------------------------------- +int dpiObject__closeHelper(dpiObject *obj, int checkError, dpiError *error) +{ + if (dpiOci__objectFree(obj->env->handle, obj->instance, checkError, + error) < 0) + return DPI_FAILURE; + obj->instance = NULL; + if (obj->freeIndicator && dpiOci__objectFree(obj->env->handle, + obj->indicator, checkError, error) < 0) + return DPI_FAILURE; + obj->indicator = NULL; + if (!obj->type->conn->closing) + dpiHandleList__removeHandle(obj->type->conn->objects, + obj->openSlotNum); + return DPI_SUCCESS; +} + + //----------------------------------------------------------------------------- // dpiObject__free() [INTERNAL] // Free the memory for an object. @@ -260,9 +278,10 @@ static int dpiObject__fromOracleValue(dpiObject *obj, dpiError *error, } break; case DPI_ORACLE_TYPE_NATIVE_INT: - if (nativeTypeNum == DPI_NATIVE_TYPE_INT64) - return dpiDataBuffer__fromOracleNumberAsInteger(&data->value, - error, value->asNumber); + if (nativeTypeNum == DPI_NATIVE_TYPE_INT64) { + data->value.asInt64 = *value->asInt32; + return DPI_SUCCESS; + } break; case DPI_ORACLE_TYPE_NATIVE_FLOAT: if (nativeTypeNum == DPI_NATIVE_TYPE_FLOAT) { @@ -294,17 +313,26 @@ static int dpiObject__fromOracleValue(dpiObject *obj, dpiError *error, if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) return dpiDataBuffer__fromOracleDate(&data->value, value->asDate); + if (nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) + return dpiDataBuffer__fromOracleDateAsDouble(&data->value, + obj->env, error, value->asDate); break; case DPI_ORACLE_TYPE_TIMESTAMP: if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) return dpiDataBuffer__fromOracleTimestamp(&data->value, obj->env, error, *value->asTimestamp, 0); + if (nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) + return dpiDataBuffer__fromOracleTimestampAsDouble(&data->value, + obj->env, error, *value->asTimestamp); break; case DPI_ORACLE_TYPE_TIMESTAMP_TZ: case DPI_ORACLE_TYPE_TIMESTAMP_LTZ: if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) return dpiDataBuffer__fromOracleTimestamp(&data->value, obj->env, error, *value->asTimestamp, 1); + if (nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) + return dpiDataBuffer__fromOracleTimestampAsDouble(&data->value, + obj->env, error, *value->asTimestamp); break; case DPI_ORACLE_TYPE_OBJECT: if (typeInfo->objectType && @@ -366,8 +394,8 @@ static int dpiObject__fromOracleValue(dpiObject *obj, dpiError *error, //----------------------------------------------------------------------------- static int dpiObject__toOracleValue(dpiObject *obj, dpiError *error, const dpiDataTypeInfo *dataTypeInfo, dpiOracleDataBuffer *buffer, - void **ociValue, int16_t *valueIndicator, void **objectIndicator, - dpiNativeTypeNum nativeTypeNum, dpiData *data) + dpiLob **lob, void **ociValue, int16_t *valueIndicator, + void **objectIndicator, dpiNativeTypeNum nativeTypeNum, dpiData *data) { dpiOracleTypeNum valueOracleTypeNum; uint32_t handleType; @@ -413,6 +441,12 @@ static int dpiObject__toOracleValue(dpiObject *obj, dpiError *error, } break; case DPI_ORACLE_TYPE_NATIVE_INT: + if (nativeTypeNum == DPI_NATIVE_TYPE_INT64) { + buffer->asInt32 = (int32_t) data->value.asInt64; + *ociValue = &buffer->asInt32; + return DPI_SUCCESS; + } + break; case DPI_ORACLE_TYPE_NUMBER: *ociValue = &buffer->asNumber; if (nativeTypeNum == DPI_NATIVE_TYPE_INT64) @@ -448,25 +482,35 @@ static int dpiObject__toOracleValue(dpiObject *obj, dpiError *error, if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) return dpiDataBuffer__toOracleDate(&data->value, 
&buffer->asDate); + if (nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) + return dpiDataBuffer__toOracleDateFromDouble(&data->value, + obj->env, error, &buffer->asDate); break; case DPI_ORACLE_TYPE_TIMESTAMP: case DPI_ORACLE_TYPE_TIMESTAMP_TZ: case DPI_ORACLE_TYPE_TIMESTAMP_LTZ: buffer->asTimestamp = NULL; - if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) { - if (valueOracleTypeNum == DPI_ORACLE_TYPE_TIMESTAMP) + if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP || + nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) { + if (valueOracleTypeNum == DPI_ORACLE_TYPE_TIMESTAMP_LTZ || + nativeTypeNum == DPI_NATIVE_TYPE_DOUBLE) { + handleType = DPI_OCI_DTYPE_TIMESTAMP_LTZ; + } else if (valueOracleTypeNum == DPI_ORACLE_TYPE_TIMESTAMP) { handleType = DPI_OCI_DTYPE_TIMESTAMP; - else if (valueOracleTypeNum == DPI_ORACLE_TYPE_TIMESTAMP_TZ) + } else { handleType = DPI_OCI_DTYPE_TIMESTAMP_TZ; - else handleType = DPI_OCI_DTYPE_TIMESTAMP_LTZ; + } if (dpiOci__descriptorAlloc(obj->env->handle, &buffer->asTimestamp, handleType, "allocate timestamp", error) < 0) return DPI_FAILURE; *ociValue = buffer->asTimestamp; - return dpiDataBuffer__toOracleTimestamp(&data->value, obj->env, - error, buffer->asTimestamp, - (valueOracleTypeNum != DPI_ORACLE_TYPE_TIMESTAMP)); + if (nativeTypeNum == DPI_NATIVE_TYPE_TIMESTAMP) + return dpiDataBuffer__toOracleTimestamp(&data->value, + obj->env, error, buffer->asTimestamp, + (valueOracleTypeNum != DPI_ORACLE_TYPE_TIMESTAMP)); + return dpiDataBuffer__toOracleTimestampFromDouble(&data->value, + obj->env, error, buffer->asTimestamp); } break; case DPI_ORACLE_TYPE_OBJECT: @@ -503,21 +547,15 @@ static int dpiObject__toOracleValue(dpiObject *obj, dpiError *error, return DPI_SUCCESS; } else if (nativeTypeNum == DPI_NATIVE_TYPE_BYTES) { const dpiOracleType *lobType; - dpiLob *tempLob; lobType = dpiOracleType__getFromNum(valueOracleTypeNum, error); - if (dpiLob__allocate(obj->type->conn, lobType, &tempLob, - error) < 0) + if (dpiLob__allocate(obj->type->conn, lobType, lob, error) < 0) return DPI_FAILURE; bytes = &data->value.asBytes; - if (dpiLob__setFromBytes(tempLob, bytes->ptr, bytes->length, - error) < 0) { - dpiLob__free(tempLob, error); + if (dpiLob__setFromBytes(*lob, bytes->ptr, bytes->length, + error) < 0) return DPI_FAILURE; - } - buffer->asLobLocator = tempLob->locator; - *ociValue = tempLob->locator; - tempLob->locator = NULL; - dpiLob__free(tempLob, error); + buffer->asLobLocator = (*lob)->locator; + *ociValue = (*lob)->locator; return DPI_SUCCESS; } break; @@ -550,6 +588,7 @@ int dpiObject_appendElement(dpiObject *obj, dpiNativeTypeNum nativeTypeNum, { dpiOracleDataBuffer valueBuffer; int16_t scalarValueIndicator; + dpiLob *lob = NULL; void *indicator; dpiError error; void *ociValue; @@ -558,15 +597,16 @@ int dpiObject_appendElement(dpiObject *obj, dpiNativeTypeNum nativeTypeNum, if (dpiObject__checkIsCollection(obj, __func__, &error) < 0) return dpiGen__endPublicFn(obj, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(obj, data) - if (dpiObject__toOracleValue(obj, &error, &obj->type->elementTypeInfo, - &valueBuffer, &ociValue, &scalarValueIndicator, - (void**) &indicator, nativeTypeNum, data) < 0) - return dpiGen__endPublicFn(obj, DPI_FAILURE, &error); - if (!indicator) - indicator = &scalarValueIndicator; - status = dpiOci__collAppend(obj->type->conn, ociValue, indicator, - obj->instance, &error); - dpiObject__clearOracleValue(obj, &error, &valueBuffer, + status = dpiObject__toOracleValue(obj, &error, &obj->type->elementTypeInfo, + &valueBuffer, &lob, &ociValue, &scalarValueIndicator, + 
(void**) &indicator, nativeTypeNum, data); + if (status == DPI_SUCCESS) { + if (!indicator) + indicator = &scalarValueIndicator; + status = dpiOci__collAppend(obj->type->conn, ociValue, indicator, + obj->instance, &error); + } + dpiObject__clearOracleValue(obj, &error, &valueBuffer, lob, obj->type->elementTypeInfo.oracleTypeNum); return dpiGen__endPublicFn(obj, status, &error); } @@ -836,6 +876,7 @@ int dpiObject_setAttributeValue(dpiObject *obj, dpiObjectAttr *attr, void *valueIndicator, *ociValue; dpiOracleDataBuffer valueBuffer; int16_t scalarValueIndicator; + dpiLob *lob = NULL; dpiError error; int status; @@ -861,15 +902,15 @@ int dpiObject_setAttributeValue(dpiObject *obj, dpiObjectAttr *attr, } // convert to input data format - if (dpiObject__toOracleValue(obj, &error, &attr->typeInfo, &valueBuffer, - &ociValue, &scalarValueIndicator, &valueIndicator, nativeTypeNum, - data) < 0) - return dpiGen__endPublicFn(obj, DPI_FAILURE, &error); + status = dpiObject__toOracleValue(obj, &error, &attr->typeInfo, + &valueBuffer, &lob, &ociValue, &scalarValueIndicator, + &valueIndicator, nativeTypeNum, data); // set attribute value - status = dpiOci__objectSetAttr(obj, attr, scalarValueIndicator, - valueIndicator, ociValue, &error); - dpiObject__clearOracleValue(obj, &error, &valueBuffer, + if (status == DPI_SUCCESS) + status = dpiOci__objectSetAttr(obj, attr, scalarValueIndicator, + valueIndicator, ociValue, &error); + dpiObject__clearOracleValue(obj, &error, &valueBuffer, lob, attr->typeInfo.oracleTypeNum); return dpiGen__endPublicFn(obj, status, &error); } @@ -884,6 +925,7 @@ int dpiObject_setElementValueByIndex(dpiObject *obj, int32_t index, { dpiOracleDataBuffer valueBuffer; int16_t scalarValueIndicator; + dpiLob *lob = NULL; void *indicator; dpiError error; void *ociValue; @@ -892,15 +934,16 @@ int dpiObject_setElementValueByIndex(dpiObject *obj, int32_t index, if (dpiObject__checkIsCollection(obj, __func__, &error) < 0) return dpiGen__endPublicFn(obj, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(obj, data) - if (dpiObject__toOracleValue(obj, &error, &obj->type->elementTypeInfo, - &valueBuffer, &ociValue, &scalarValueIndicator, - (void**) &indicator, nativeTypeNum, data) < 0) - return dpiGen__endPublicFn(obj, DPI_FAILURE, &error); - if (!indicator) - indicator = &scalarValueIndicator; - status = dpiOci__collAssignElem(obj->type->conn, index, ociValue, - indicator, obj->instance, &error); - dpiObject__clearOracleValue(obj, &error, &valueBuffer, + status = dpiObject__toOracleValue(obj, &error, &obj->type->elementTypeInfo, + &valueBuffer, &lob, &ociValue, &scalarValueIndicator, + (void**) &indicator, nativeTypeNum, data); + if (status == DPI_SUCCESS) { + if (!indicator) + indicator = &scalarValueIndicator; + status = dpiOci__collAssignElem(obj->type->conn, index, ociValue, + indicator, obj->instance, &error); + } + dpiObject__clearOracleValue(obj, &error, &valueBuffer, lob, obj->type->elementTypeInfo.oracleTypeNum); return dpiGen__endPublicFn(obj, status, &error); } @@ -921,4 +964,3 @@ int dpiObject_trim(dpiObject *obj, uint32_t numToTrim) &error); return dpiGen__endPublicFn(obj, status, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiObjectAttr.c b/vendor/github.com/godror/godror/odpi/src/dpiObjectAttr.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiObjectAttr.c rename to vendor/github.com/godror/godror/odpi/src/dpiObjectAttr.c index 573abc68217f..45d623ef5738 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiObjectAttr.c +++ 
b/vendor/github.com/godror/godror/odpi/src/dpiObjectAttr.c @@ -93,7 +93,7 @@ int dpiObjectAttr_getInfo(dpiObjectAttr *attr, dpiObjectAttrInfo *info) { dpiError error; - if (dpiGen__startPublicFn(attr, DPI_HTYPE_OBJECT_ATTR, __func__, 0, + if (dpiGen__startPublicFn(attr, DPI_HTYPE_OBJECT_ATTR, __func__, &error) < 0) return dpiGen__endPublicFn(attr, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(attr, info) @@ -112,4 +112,3 @@ int dpiObjectAttr_release(dpiObjectAttr *attr) { return dpiGen__release(attr, DPI_HTYPE_OBJECT_ATTR, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiObjectType.c b/vendor/github.com/godror/godror/odpi/src/dpiObjectType.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiObjectType.c rename to vendor/github.com/godror/godror/odpi/src/dpiObjectType.c index 9030b3978391..fbb2cf240c89 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiObjectType.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiObjectType.c @@ -57,7 +57,7 @@ int dpiObjectType__allocate(dpiConn *conn, void *param, static int dpiObjectType__check(dpiObjectType *objType, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(objType, DPI_HTYPE_OBJECT_TYPE, fnName, 1, + if (dpiGen__startPublicFn(objType, DPI_HTYPE_OBJECT_TYPE, fnName, error) < 0) return DPI_FAILURE; return dpiConn__checkConnected(objType->conn, error); @@ -319,7 +319,7 @@ int dpiObjectType_getInfo(dpiObjectType *objType, dpiObjectTypeInfo *info) { dpiError error; - if (dpiGen__startPublicFn(objType, DPI_HTYPE_OBJECT_TYPE, __func__, 0, + if (dpiGen__startPublicFn(objType, DPI_HTYPE_OBJECT_TYPE, __func__, &error) < 0) return dpiGen__endPublicFn(objType, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(objType, info) @@ -342,4 +342,3 @@ int dpiObjectType_release(dpiObjectType *objType) { return dpiGen__release(objType, DPI_HTYPE_OBJECT_TYPE, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiOci.c b/vendor/github.com/godror/godror/odpi/src/dpiOci.c similarity index 85% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiOci.c rename to vendor/github.com/godror/godror/odpi/src/dpiOci.c index 2ad76c688d54..c731324cb487 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiOci.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiOci.c @@ -35,14 +35,35 @@ static void *dpiOci__reallocMem(void *unused, void *ptr, size_t newSize); error) < 0) \ return DPI_FAILURE; +// macro to ensure that an error handle is available +#define DPI_OCI_ENSURE_ERROR_HANDLE(error) \ + if (!error->handle && dpiError__initHandle(error) < 0) \ + return DPI_FAILURE; + +// macros to simplify code for checking results of OCI calls +#define DPI_OCI_ERROR_OCCURRED(status) \ + (status != DPI_OCI_SUCCESS && status != DPI_OCI_SUCCESS_WITH_INFO) +#define DPI_OCI_CHECK_AND_RETURN(error, status, conn, action) \ + if (DPI_OCI_ERROR_OCCURRED(status)) \ + return dpiError__setFromOCI(error, status, conn, action); \ + return DPI_SUCCESS; + // typedefs for all OCI functions used by ODPI-C typedef int (*dpiOciFnType__aqDeq)(void *svchp, void *errhp, const char *queue_name, void *deqopt, void *msgprop, void *payload_tdo, void **payload, void **payload_ind, void **msgid, uint32_t flags); +typedef int (*dpiOciFnType__aqDeqArray)(void *svchp, void *errhp, + const char *queue_name, void *deqopt, uint32_t *iters, void **msgprop, + void *payload_tdo, void **payload, void **payload_ind, void **msgid, + void *ctxp, void *deqcbfp, uint32_t flags); typedef int (*dpiOciFnType__aqEnq)(void *svchp, void *errhp, const char *queue_name, void 
*enqopt, void *msgprop, void *payload_tdo, void **payload, void **payload_ind, void **msgid, uint32_t flags); +typedef int (*dpiOciFnType__aqEnqArray)(void *svchp, void *errhp, + const char *queue_name, void *enqopt, uint32_t *iters, void **msgprop, + void *payload_tdo, void **payload, void **payload_ind, void **msgid, + void *ctxp, void *enqcbfp, uint32_t flags); typedef int (*dpiOciFnType__arrayDescriptorAlloc)(const void *parenth, void **descpp, const uint32_t type, uint32_t array_size, const size_t xtramem_sz, void **usrmempp); @@ -99,6 +120,8 @@ typedef int (*dpiOciFnType__dateTimeConstruct)(void *hndl, void *err, void *datetime, int16_t yr, uint8_t mnth, uint8_t dy, uint8_t hr, uint8_t mm, uint8_t ss, uint32_t fsec, const char *tz, size_t tzLength); +typedef int (*dpiOciFnType__dateTimeConvert)(void *hndl, void *err, + void *indate, void *outdate); typedef int (*dpiOciFnType__dateTimeGetDate)(void *hndl, void *err, const void *date, int16_t *yr, uint8_t *mnth, uint8_t *dy); typedef int (*dpiOciFnType__dateTimeGetTime)(void *hndl, void *err, @@ -283,6 +306,12 @@ typedef int (*dpiOciFnType__sessionRelease)(void *svchp, void *errhp, typedef int (*dpiOciFnType__shardingKeyColumnAdd)(void *shardingKey, void *errhp, void *col, uint32_t colLen, uint16_t colType, uint32_t mode); +typedef int (*dpiOciFnType__sodaBulkInsert)(void *svchp, + void *collection, void **documentarray, uint32_t arraylen, + void *opoptns, void *errhp, uint32_t mode); +typedef int (*dpiOciFnType__sodaBulkInsertAndGet)(void *svchp, + void *collection, void **documentarray, uint32_t arraylen, + void *opoptns, void *errhp, uint32_t mode); typedef int (*dpiOciFnType__sodaCollCreateWithMetadata)(void *svchp, const char *collname, uint32_t collnamelen, const char *metadata, uint32_t metadatalen, void **collection, void *errhp, uint32_t mode); @@ -389,6 +418,10 @@ typedef int (*dpiOciFnType__typeByFullName)(void *env, void *err, uint32_t full_type_name_length, const char *version_name, uint32_t version_name_length, uint16_t pin_duration, int get_option, void **tdo); +typedef int (*dpiOciFnType__typeByName)(void *env, void *err, const void *svc, + const char *schema_name, uint32_t s_length, const char *type_name, + uint32_t t_length, const char *version_name, uint32_t v_length, + uint16_t pin_duration, int get_option, void **tdo); // library handle for dynamically loaded OCI library @@ -400,16 +433,18 @@ static const char *dpiOciLibNames[] = { "oci.dll", #elif __APPLE__ "libclntsh.dylib", + "libclntsh.dylib.19.1", "libclntsh.dylib.18.1", "libclntsh.dylib.12.1", "libclntsh.dylib.11.1", - "libclntsh.dylib.19.1", + "libclntsh.dylib.20.1", #else "libclntsh.so", + "libclntsh.so.19.1", "libclntsh.so.18.1", "libclntsh.so.12.1", "libclntsh.so.11.1", - "libclntsh.so.19.1", + "libclntsh.so.20.1", #endif NULL }; @@ -429,7 +464,9 @@ static dpiVersionInfo dpiOciLibVersionInfo; // all OCI symbols used by ODPI-C static struct { dpiOciFnType__aqDeq fnAqDeq; + dpiOciFnType__aqDeqArray fnAqDeqArray; dpiOciFnType__aqEnq fnAqEnq; + dpiOciFnType__aqEnqArray fnAqEnqArray; dpiOciFnType__arrayDescriptorAlloc fnArrayDescriptorAlloc; dpiOciFnType__arrayDescriptorFree fnArrayDescriptorFree; dpiOciFnType__attrGet fnAttrGet; @@ -450,6 +487,7 @@ static struct { dpiOciFnType__contextGetValue fnContextGetValue; dpiOciFnType__contextSetValue fnContextSetValue; dpiOciFnType__dateTimeConstruct fnDateTimeConstruct; + dpiOciFnType__dateTimeConvert fnDateTimeConvert; dpiOciFnType__dateTimeGetDate fnDateTimeGetDate; dpiOciFnType__dateTimeGetTime fnDateTimeGetTime; 
dpiOciFnType__dateTimeGetTimeZoneOffset fnDateTimeGetTimeZoneOffset; @@ -526,6 +564,8 @@ static struct { dpiOciFnType__sessionRelease fnSessionRelease; dpiOciFnType__shardingKeyColumnAdd fnShardingKeyColumnAdd; dpiOciFnType__stmtExecute fnStmtExecute; + dpiOciFnType__sodaBulkInsert fnSodaBulkInsert; + dpiOciFnType__sodaBulkInsertAndGet fnSodaBulkInsertAndGet; dpiOciFnType__sodaCollCreateWithMetadata fnSodaCollCreateWithMetadata; dpiOciFnType__sodaCollDrop fnSodaCollDrop; dpiOciFnType__sodaCollGetNext fnSodaCollGetNext; @@ -572,6 +612,7 @@ static struct { dpiOciFnType__transRollback fnTransRollback; dpiOciFnType__transStart fnTransStart; dpiOciFnType__typeByFullName fnTypeByFullName; + dpiOciFnType__typeByName fnTypeByName; } dpiOciSymbols; @@ -580,7 +621,7 @@ static struct { // Wrapper for OCI allocation of memory, only used when debugging memory // allocation. //----------------------------------------------------------------------------- -static void *dpiOci__allocateMem(void *unused, size_t size) +static void *dpiOci__allocateMem(UNUSED void *unused, size_t size) { void *ptr; @@ -601,10 +642,30 @@ int dpiOci__aqDeq(dpiConn *conn, const char *queueName, void *options, int status; DPI_OCI_LOAD_SYMBOL("OCIAQDeq", dpiOciSymbols.fnAqDeq) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnAqDeq)(conn->handle, error->handle, queueName, options, msgProps, payloadType, payload, payloadInd, msgId, DPI_OCI_DEFAULT); - return dpiError__check(error, status, conn, "dequeue message"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "dequeue message"); +} + + +//----------------------------------------------------------------------------- +// dpiOci__aqDeqArray() [INTERNAL] +// Wrapper for OCIAQDeqArray(). +//----------------------------------------------------------------------------- +int dpiOci__aqDeqArray(dpiConn *conn, const char *queueName, void *options, + uint32_t *numIters, void **msgProps, void *payloadType, void **payload, + void **payloadInd, void **msgId, dpiError *error) +{ + int status; + + DPI_OCI_LOAD_SYMBOL("OCIAQDeqArray", dpiOciSymbols.fnAqDeqArray) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnAqDeqArray)(conn->handle, error->handle, + queueName, options, numIters, msgProps, payloadType, payload, + payloadInd, msgId, NULL, NULL, DPI_OCI_DEFAULT); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "dequeue messages"); } @@ -619,10 +680,30 @@ int dpiOci__aqEnq(dpiConn *conn, const char *queueName, void *options, int status; DPI_OCI_LOAD_SYMBOL("OCIAQEnq", dpiOciSymbols.fnAqEnq) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnAqEnq)(conn->handle, error->handle, queueName, options, msgProps, payloadType, payload, payloadInd, msgId, DPI_OCI_DEFAULT); - return dpiError__check(error, status, conn, "enqueue message"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "enqueue message"); +} + + +//----------------------------------------------------------------------------- +// dpiOci__aqEnqArray() [INTERNAL] +// Wrapper for OCIAQEnqArray(). 
+//----------------------------------------------------------------------------- +int dpiOci__aqEnqArray(dpiConn *conn, const char *queueName, void *options, + uint32_t *numIters, void **msgProps, void *payloadType, void **payload, + void **payloadInd, void **msgId, dpiError *error) +{ + int status; + + DPI_OCI_LOAD_SYMBOL("OCIAQEnqArray", dpiOciSymbols.fnAqEnqArray) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnAqEnqArray)(conn->handle, error->handle, + queueName, options, numIters, msgProps, payloadType, payload, + payloadInd, msgId, NULL, NULL, DPI_OCI_DEFAULT); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "enqueue messages"); } @@ -639,7 +720,7 @@ int dpiOci__arrayDescriptorAlloc(void *envHandle, void **handle, dpiOciSymbols.fnArrayDescriptorAlloc) status = (*dpiOciSymbols.fnArrayDescriptorAlloc)(envHandle, handle, handleType, arraySize, 0, NULL); - return dpiError__check(error, status, NULL, "allocate descriptors"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "allocate descriptors"); } @@ -672,11 +753,12 @@ int dpiOci__attrGet(const void *handle, uint32_t handleType, void *ptr, { int status; + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnAttrGet)(handle, handleType, ptr, size, attribute, error->handle); - if (action) - return dpiError__check(error, status, NULL, action); - return DPI_SUCCESS; + if (!action) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, action); } @@ -689,11 +771,12 @@ int dpiOci__attrSet(void *handle, uint32_t handleType, void *ptr, { int status; + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnAttrSet)(handle, handleType, ptr, size, attribute, error->handle); - if (action) - return dpiError__check(error, status, NULL, action); - return DPI_SUCCESS; + if (!action) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, action); } @@ -707,6 +790,7 @@ int dpiOci__bindByName(dpiStmt *stmt, void **bindHandle, const char *name, int status; DPI_OCI_LOAD_SYMBOL("OCIBindByName", dpiOciSymbols.fnBindByName) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBindByName)(stmt->handle, bindHandle, error->handle, name, nameLength, (dynamicBind) ? NULL : var->buffer.data.asRaw, @@ -719,7 +803,7 @@ int dpiOci__bindByName(dpiStmt *stmt, void **bindHandle, const char *name, (var->isArray) ? var->buffer.maxArraySize : 0, (var->isArray) ? &var->buffer.actualArraySize : NULL, (dynamicBind) ? DPI_OCI_DATA_AT_EXEC : DPI_OCI_DEFAULT); - return dpiError__check(error, status, stmt->conn, "bind by name"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "bind by name"); } @@ -733,6 +817,7 @@ int dpiOci__bindByName2(dpiStmt *stmt, void **bindHandle, const char *name, int status; DPI_OCI_LOAD_SYMBOL("OCIBindByName2", dpiOciSymbols.fnBindByName2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBindByName2)(stmt->handle, bindHandle, error->handle, name, nameLength, (dynamicBind) ? NULL : var->buffer.data.asRaw, @@ -745,7 +830,7 @@ int dpiOci__bindByName2(dpiStmt *stmt, void **bindHandle, const char *name, (var->isArray) ? var->buffer.maxArraySize : 0, (var->isArray) ? &var->buffer.actualArraySize : NULL, (dynamicBind) ? 
DPI_OCI_DATA_AT_EXEC : DPI_OCI_DEFAULT); - return dpiError__check(error, status, stmt->conn, "bind by name"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "bind by name"); } @@ -759,6 +844,7 @@ int dpiOci__bindByPos(dpiStmt *stmt, void **bindHandle, uint32_t pos, int status; DPI_OCI_LOAD_SYMBOL("OCIBindByPos", dpiOciSymbols.fnBindByPos) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBindByPos)(stmt->handle, bindHandle, error->handle, pos, (dynamicBind) ? NULL : var->buffer.data.asRaw, (var->isDynamic) ? INT_MAX : (int32_t) var->sizeInBytes, @@ -770,7 +856,7 @@ int dpiOci__bindByPos(dpiStmt *stmt, void **bindHandle, uint32_t pos, (var->isArray) ? var->buffer.maxArraySize : 0, (var->isArray) ? &var->buffer.actualArraySize : NULL, (dynamicBind) ? DPI_OCI_DATA_AT_EXEC : DPI_OCI_DEFAULT); - return dpiError__check(error, status, stmt->conn, "bind by position"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "bind by position"); } @@ -784,6 +870,7 @@ int dpiOci__bindByPos2(dpiStmt *stmt, void **bindHandle, uint32_t pos, int status; DPI_OCI_LOAD_SYMBOL("OCIBindByPos2", dpiOciSymbols.fnBindByPos2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBindByPos2)(stmt->handle, bindHandle, error->handle, pos, (dynamicBind) ? NULL : var->buffer.data.asRaw, (var->isDynamic) ? INT_MAX : var->sizeInBytes, @@ -795,7 +882,7 @@ int dpiOci__bindByPos2(dpiStmt *stmt, void **bindHandle, uint32_t pos, (var->isArray) ? var->buffer.maxArraySize : 0, (var->isArray) ? &var->buffer.actualArraySize : NULL, (dynamicBind) ? DPI_OCI_DATA_AT_EXEC : DPI_OCI_DEFAULT); - return dpiError__check(error, status, stmt->conn, "bind by position"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "bind by position"); } @@ -808,10 +895,11 @@ int dpiOci__bindDynamic(dpiVar *var, void *bindHandle, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIBindDynamic", dpiOciSymbols.fnBindDynamic) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBindDynamic)(bindHandle, error->handle, var, (void*) dpiVar__inBindCallback, var, (void*) dpiVar__outBindCallback); - return dpiError__check(error, status, var->conn, "bind dynamic"); + DPI_OCI_CHECK_AND_RETURN(error, status, var->conn, "bind dynamic"); } @@ -824,10 +912,11 @@ int dpiOci__bindObject(dpiVar *var, void *bindHandle, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIBindObject", dpiOciSymbols.fnBindObject) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBindObject)(bindHandle, error->handle, var->objectType->tdo, (void**) var->buffer.data.asRaw, 0, var->buffer.objectIndicator, 0); - return dpiError__check(error, status, var->conn, "bind object"); + DPI_OCI_CHECK_AND_RETURN(error, status, var->conn, "bind object"); } @@ -840,8 +929,9 @@ int dpiOci__break(dpiConn *conn, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIBreak", dpiOciSymbols.fnBreak) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnBreak)(conn->handle, error->handle); - return dpiError__check(error, status, conn, "break execution"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "break execution"); } @@ -866,9 +956,10 @@ int dpiOci__collAppend(dpiConn *conn, const void *elem, const void *elemInd, int status; DPI_OCI_LOAD_SYMBOL("OCICollAppend", dpiOciSymbols.fnCollAppend) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnCollAppend)(conn->env->handle, error->handle, elem, elemInd, coll); - return dpiError__check(error, status, conn, "append element"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "append 
element"); } @@ -882,9 +973,10 @@ int dpiOci__collAssignElem(dpiConn *conn, int32_t index, const void *elem, int status; DPI_OCI_LOAD_SYMBOL("OCICollAssignElem", dpiOciSymbols.fnCollAssignElem) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnCollAssignElem)(conn->env->handle, error->handle, index, elem, elemInd, coll); - return dpiError__check(error, status, conn, "assign element"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "assign element"); } @@ -898,9 +990,10 @@ int dpiOci__collGetElem(dpiConn *conn, void *coll, int32_t index, int *exists, int status; DPI_OCI_LOAD_SYMBOL("OCICollGetElem", dpiOciSymbols.fnCollGetElem) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnCollGetElem)(conn->env->handle, error->handle, coll, index, exists, elem, elemInd); - return dpiError__check(error, status, conn, "get element"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get element"); } @@ -913,9 +1006,10 @@ int dpiOci__collSize(dpiConn *conn, void *coll, int32_t *size, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCICollSize", dpiOciSymbols.fnCollSize) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnCollSize)(conn->env->handle, error->handle, coll, size); - return dpiError__check(error, status, conn, "get size"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get size"); } @@ -929,9 +1023,10 @@ int dpiOci__collTrim(dpiConn *conn, uint32_t numToTrim, void *coll, int status; DPI_OCI_LOAD_SYMBOL("OCICollTrim", dpiOciSymbols.fnCollTrim) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnCollTrim)(conn->env->handle, error->handle, (int32_t) numToTrim, coll); - return dpiError__check(error, status, conn, "trim"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "trim"); } @@ -945,11 +1040,12 @@ int dpiOci__contextGetValue(dpiConn *conn, const char *key, uint32_t keyLength, int status; DPI_OCI_LOAD_SYMBOL("OCIContextGetValue", dpiOciSymbols.fnContextGetValue) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnContextGetValue)(conn->sessionHandle, error->handle, key, (uint8_t) keyLength, value); - if (checkError) - return dpiError__check(error, status, conn, "get context value"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get context value"); } @@ -963,12 +1059,13 @@ int dpiOci__contextSetValue(dpiConn *conn, const char *key, uint32_t keyLength, int status; DPI_OCI_LOAD_SYMBOL("OCIContextSetValue", dpiOciSymbols.fnContextSetValue) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnContextSetValue)(conn->sessionHandle, error->handle, DPI_OCI_DURATION_SESSION, key, (uint8_t) keyLength, value); - if (checkError) - return dpiError__check(error, status, conn, "set context value"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "set context value"); } @@ -985,10 +1082,28 @@ int dpiOci__dateTimeConstruct(void *envHandle, void *handle, int16_t year, DPI_OCI_LOAD_SYMBOL("OCIDateTimeConstruct", dpiOciSymbols.fnDateTimeConstruct) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDateTimeConstruct)(envHandle, error->handle, handle, year, month, day, hour, minute, second, fsecond, tz, tzLength); - return dpiError__check(error, status, NULL, "construct date"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "construct date"); +} + + +//----------------------------------------------------------------------------- +// dpiOci__dateTimeConvert() [INTERNAL] +// Wrapper for 
OCIDateTimeConvert(). +//----------------------------------------------------------------------------- +int dpiOci__dateTimeConvert(void *envHandle, void *inDate, void *outDate, + dpiError *error) +{ + int status; + + DPI_OCI_LOAD_SYMBOL("OCIDateTimeConvert", dpiOciSymbols.fnDateTimeConvert) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnDateTimeConvert)(envHandle, error->handle, + inDate, outDate); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "convert date"); } @@ -1002,9 +1117,10 @@ int dpiOci__dateTimeGetDate(void *envHandle, void *handle, int16_t *year, int status; DPI_OCI_LOAD_SYMBOL("OCIDateTimeGetDate", dpiOciSymbols.fnDateTimeGetDate) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDateTimeGetDate)(envHandle, error->handle, handle, year, month, day); - return dpiError__check(error, status, NULL, "get date portion"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get date portion"); } @@ -1018,9 +1134,10 @@ int dpiOci__dateTimeGetTime(void *envHandle, void *handle, uint8_t *hour, int status; DPI_OCI_LOAD_SYMBOL("OCIDateTimeGetTime", dpiOciSymbols.fnDateTimeGetTime) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDateTimeGetTime)(envHandle, error->handle, handle, hour, minute, second, fsecond); - return dpiError__check(error, status, NULL, "get time portion"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get time portion"); } @@ -1035,9 +1152,10 @@ int dpiOci__dateTimeGetTimeZoneOffset(void *envHandle, void *handle, DPI_OCI_LOAD_SYMBOL("OCIDateTimeGetTimeZoneOffset", dpiOciSymbols.fnDateTimeGetTimeZoneOffset) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDateTimeGetTimeZoneOffset)(envHandle, error->handle, handle, tzHourOffset, tzMinuteOffset); - return dpiError__check(error, status, NULL, "get time zone portion"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get time zone portion"); } @@ -1052,9 +1170,10 @@ int dpiOci__dateTimeIntervalAdd(void *envHandle, void *handle, void *interval, DPI_OCI_LOAD_SYMBOL("OCIDateTimeIntervalAdd", dpiOciSymbols.fnDateTimeIntervalAdd) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDateTimeIntervalAdd)(envHandle, error->handle, handle, interval, outHandle); - return dpiError__check(error, status, NULL, "add interval to date"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "add interval to date"); } @@ -1069,9 +1188,10 @@ int dpiOci__dateTimeSubtract(void *envHandle, void *handle1, void *handle2, DPI_OCI_LOAD_SYMBOL("OCIDateTimeSubtract", dpiOciSymbols.fnDateTimeSubtract) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDateTimeSubtract)(envHandle, error->handle, handle1, handle2, interval); - return dpiError__check(error, status, NULL, "subtract date"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "subtract date"); } @@ -1084,9 +1204,10 @@ int dpiOci__dbShutdown(dpiConn *conn, uint32_t mode, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIDBShutdown", dpiOciSymbols.fnDbShutdown) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDbShutdown)(conn->handle, error->handle, NULL, mode); - return dpiError__check(error, status, NULL, "shutdown database"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "shutdown database"); } @@ -1099,9 +1220,10 @@ int dpiOci__dbStartup(dpiConn *conn, uint32_t mode, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIDBStartup", dpiOciSymbols.fnDbStartup) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDbStartup)(conn->handle, error->handle, NULL, DPI_OCI_DEFAULT, mode); - 
return dpiError__check(error, status, NULL, "startup database"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "startup database"); } @@ -1115,6 +1237,7 @@ int dpiOci__defineByPos(dpiStmt *stmt, void **defineHandle, uint32_t pos, int status; DPI_OCI_LOAD_SYMBOL("OCIDefineByPos", dpiOciSymbols.fnDefineByPos) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDefineByPos)(stmt->handle, defineHandle, error->handle, pos, (var->isDynamic) ? NULL : var->buffer.data.asRaw, @@ -1124,7 +1247,7 @@ int dpiOci__defineByPos(dpiStmt *stmt, void **defineHandle, uint32_t pos, (var->isDynamic) ? NULL : var->buffer.actualLength16, (var->isDynamic) ? NULL : var->buffer.returnCode, (var->isDynamic) ? DPI_OCI_DYNAMIC_FETCH : DPI_OCI_DEFAULT); - return dpiError__check(error, status, stmt->conn, "define"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "define"); } @@ -1138,6 +1261,7 @@ int dpiOci__defineByPos2(dpiStmt *stmt, void **defineHandle, uint32_t pos, int status; DPI_OCI_LOAD_SYMBOL("OCIDefineByPos2", dpiOciSymbols.fnDefineByPos2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDefineByPos2)(stmt->handle, defineHandle, error->handle, pos, (var->isDynamic) ? NULL : var->buffer.data.asRaw, @@ -1147,7 +1271,7 @@ int dpiOci__defineByPos2(dpiStmt *stmt, void **defineHandle, uint32_t pos, (var->isDynamic) ? NULL : var->buffer.actualLength32, (var->isDynamic) ? NULL : var->buffer.returnCode, (var->isDynamic) ? DPI_OCI_DYNAMIC_FETCH : DPI_OCI_DEFAULT); - return dpiError__check(error, status, stmt->conn, "define"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "define"); } @@ -1160,9 +1284,10 @@ int dpiOci__defineDynamic(dpiVar *var, void *defineHandle, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIDefineDynamic", dpiOciSymbols.fnDefineDynamic) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDefineDynamic)(defineHandle, error->handle, var, (void*) dpiVar__defineCallback); - return dpiError__check(error, status, var->conn, "define dynamic"); + DPI_OCI_CHECK_AND_RETURN(error, status, var->conn, "define dynamic"); } @@ -1175,10 +1300,11 @@ int dpiOci__defineObject(dpiVar *var, void *defineHandle, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIDefineObject", dpiOciSymbols.fnDefineObject) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDefineObject)(defineHandle, error->handle, var->objectType->tdo, (void**) var->buffer.data.asRaw, 0, var->buffer.objectIndicator, 0); - return dpiError__check(error, status, var->conn, "define object"); + DPI_OCI_CHECK_AND_RETURN(error, status, var->conn, "define object"); } @@ -1192,9 +1318,10 @@ int dpiOci__describeAny(dpiConn *conn, void *obj, uint32_t objLength, int status; DPI_OCI_LOAD_SYMBOL("OCIDescribeAny", dpiOciSymbols.fnDescribeAny) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnDescribeAny)(conn->handle, error->handle, obj, objLength, objType, 0, DPI_OCI_PTYPE_TYPE, describeHandle); - return dpiError__check(error, status, conn, "describe type"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "describe type"); } @@ -1210,7 +1337,7 @@ int dpiOci__descriptorAlloc(void *envHandle, void **handle, DPI_OCI_LOAD_SYMBOL("OCIDescriptorAlloc", dpiOciSymbols.fnDescriptorAlloc) status = (*dpiOciSymbols.fnDescriptorAlloc)(envHandle, handle, handleType, 0, NULL); - return dpiError__check(error, status, NULL, action); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, action); } @@ -1314,7 +1441,7 @@ int dpiOci__errorGet(void *handle, uint32_t handleType, uint16_t charsetId, // Wrapper 
for OCI allocation of memory, only used when debugging memory // allocation. //----------------------------------------------------------------------------- -static void dpiOci__freeMem(void *unused, void *ptr) +static void dpiOci__freeMem(UNUSED void *unused, void *ptr) { char message[40]; @@ -1338,7 +1465,7 @@ int dpiOci__handleAlloc(void *envHandle, void **handle, uint32_t handleType, NULL); if (handleType == DPI_OCI_HTYPE_ERROR && status != DPI_OCI_SUCCESS) return dpiError__set(error, action, DPI_ERR_NO_MEMORY); - return dpiError__check(error, status, NULL, action); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, action); } @@ -1372,9 +1499,10 @@ int dpiOci__intervalGetDaySecond(void *envHandle, int32_t *day, int32_t *hour, DPI_OCI_LOAD_SYMBOL("OCIIntervalGetDaySecond", dpiOciSymbols.fnIntervalGetDaySecond) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnIntervalGetDaySecond)(envHandle, error->handle, day, hour, minute, second, fsecond, interval); - return dpiError__check(error, status, NULL, "get interval components"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get interval components"); } @@ -1389,9 +1517,10 @@ int dpiOci__intervalGetYearMonth(void *envHandle, int32_t *year, DPI_OCI_LOAD_SYMBOL("OCIIntervalGetYearMonth", dpiOciSymbols.fnIntervalGetYearMonth) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnIntervalGetYearMonth)(envHandle, error->handle, year, month, interval); - return dpiError__check(error, status, NULL, "get interval components"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get interval components"); } @@ -1407,9 +1536,10 @@ int dpiOci__intervalSetDaySecond(void *envHandle, int32_t day, int32_t hour, DPI_OCI_LOAD_SYMBOL("OCIIntervalSetDaySecond", dpiOciSymbols.fnIntervalSetDaySecond) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnIntervalSetDaySecond)(envHandle, error->handle, day, hour, minute, second, fsecond, interval); - return dpiError__check(error, status, NULL, "set interval components"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "set interval components"); } @@ -1424,9 +1554,10 @@ int dpiOci__intervalSetYearMonth(void *envHandle, int32_t year, int32_t month, DPI_OCI_LOAD_SYMBOL("OCIIntervalSetYearMonth", dpiOciSymbols.fnIntervalSetYearMonth) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnIntervalSetYearMonth)(envHandle, error->handle, year, month, interval); - return dpiError__check(error, status, NULL, "set interval components"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "set interval components"); } @@ -1713,13 +1844,17 @@ static int dpiOci__loadLibValidate(dpiError *error) // determine the OCI client version information if (dpiOci__loadSymbol("OCIClientVersion", (void**) &dpiOciSymbols.fnClientVersion, NULL) < 0) - return dpiError__set(error, "check Oracle Client version", - DPI_ERR_ORACLE_CLIENT_TOO_OLD, 0, 0, 11, 2); + return dpiError__set(error, "load symbol OCIClientVersion", + DPI_ERR_ORACLE_CLIENT_UNSUPPORTED); + memset(&dpiOciLibVersionInfo, 0, sizeof(dpiOciLibVersionInfo)); (*dpiOciSymbols.fnClientVersion)(&dpiOciLibVersionInfo.versionNum, &dpiOciLibVersionInfo.releaseNum, &dpiOciLibVersionInfo.updateNum, &dpiOciLibVersionInfo.portReleaseNum, &dpiOciLibVersionInfo.portUpdateNum); + if (dpiOciLibVersionInfo.versionNum == 0) + return dpiError__set(error, "get OCI client version", + DPI_ERR_ORACLE_CLIENT_UNSUPPORTED); dpiOciLibVersionInfo.fullVersionNum = (uint32_t) DPI_ORACLE_VERSION_TO_NUMBER(dpiOciLibVersionInfo.versionNum, dpiOciLibVersionInfo.releaseNum, @@ 
-1785,9 +1920,10 @@ int dpiOci__lobClose(dpiLob *lob, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobClose", dpiOciSymbols.fnLobClose) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobClose)(lob->conn->handle, error->handle, lob->locator); - return dpiError__check(error, status, lob->conn, "close LOB"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "close LOB"); } @@ -1802,13 +1938,14 @@ int dpiOci__lobCreateTemporary(dpiLob *lob, dpiError *error) DPI_OCI_LOAD_SYMBOL("OCILobCreateTemporary", dpiOciSymbols.fnLobCreateTemporary) + DPI_OCI_ENSURE_ERROR_HANDLE(error) if (lob->type->oracleTypeNum == DPI_ORACLE_TYPE_BLOB) lobType = DPI_OCI_TEMP_BLOB; else lobType = DPI_OCI_TEMP_CLOB; status = (*dpiOciSymbols.fnLobCreateTemporary)(lob->conn->handle, error->handle, lob->locator, DPI_OCI_DEFAULT, lob->type->charsetForm, lobType, 1, DPI_OCI_DURATION_SESSION); - return dpiError__check(error, status, lob->conn, "create temporary LOB"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "create temporary LOB"); } @@ -1821,9 +1958,10 @@ int dpiOci__lobFileExists(dpiLob *lob, int *exists, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobFileExists", dpiOciSymbols.fnLobFileExists) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobFileExists)(lob->conn->handle, error->handle, lob->locator, exists); - return dpiError__check(error, status, lob->conn, "get file exists"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "get file exists"); } @@ -1838,9 +1976,10 @@ int dpiOci__lobFileGetName(dpiLob *lob, char *dirAlias, int status; DPI_OCI_LOAD_SYMBOL("OCILobFileGetName", dpiOciSymbols.fnLobFileGetName) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobFileGetName)(lob->env->handle, error->handle, lob->locator, dirAlias, dirAliasLength, name, nameLength); - return dpiError__check(error, status, lob->conn, "get LOB file name"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "get LOB file name"); } @@ -1855,9 +1994,10 @@ int dpiOci__lobFileSetName(dpiLob *lob, const char *dirAlias, int status; DPI_OCI_LOAD_SYMBOL("OCILobFileSetName", dpiOciSymbols.fnLobFileSetName) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobFileSetName)(lob->env->handle, error->handle, &lob->locator, dirAlias, dirAliasLength, name, nameLength); - return dpiError__check(error, status, lob->conn, "set LOB file name"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "set LOB file name"); } @@ -1872,11 +2012,12 @@ int dpiOci__lobFreeTemporary(dpiConn *conn, void *lobLocator, int checkError, DPI_OCI_LOAD_SYMBOL("OCILobFreeTemporary", dpiOciSymbols.fnLobFreeTemporary) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobFreeTemporary)(conn->handle, error->handle, lobLocator); - if (checkError) - return dpiError__check(error, status, conn, "free temporary LOB"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "free temporary LOB"); } @@ -1889,9 +2030,10 @@ int dpiOci__lobGetChunkSize(dpiLob *lob, uint32_t *size, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobGetChunkSize", dpiOciSymbols.fnLobGetChunkSize) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobGetChunkSize)(lob->conn->handle, error->handle, lob->locator, size); - return dpiError__check(error, status, lob->conn, "get chunk size"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "get chunk size"); } @@ -1904,9 +2046,10 @@ int dpiOci__lobGetLength2(dpiLob *lob, uint64_t 
*size, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobGetLength2", dpiOciSymbols.fnLobGetLength2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobGetLength2)(lob->conn->handle, error->handle, lob->locator, size); - return dpiError__check(error, status, lob->conn, "get length"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "get length"); } @@ -1919,9 +2062,10 @@ int dpiOci__lobIsOpen(dpiLob *lob, int *isOpen, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobIsOpen", dpiOciSymbols.fnLobIsOpen) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobIsOpen)(lob->conn->handle, error->handle, lob->locator, isOpen); - return dpiError__check(error, status, lob->conn, "check is open"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "check is open"); } @@ -1936,11 +2080,12 @@ int dpiOci__lobIsTemporary(dpiLob *lob, int *isTemporary, int checkError, *isTemporary = 0; DPI_OCI_LOAD_SYMBOL("OCILobIsTemporary", dpiOciSymbols.fnLobIsTemporary) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobIsTemporary)(lob->env->handle, error->handle, lob->locator, isTemporary); - if (checkError) - return dpiError__check(error, status, lob->conn, "check is temporary"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "check is temporary"); } @@ -1954,9 +2099,10 @@ int dpiOci__lobLocatorAssign(dpiLob *lob, void **copiedHandle, dpiError *error) DPI_OCI_LOAD_SYMBOL("OCILobLocatorAssign", dpiOciSymbols.fnLobLocatorAssign) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobLocatorAssign)(lob->conn->handle, error->handle, lob->locator, copiedHandle); - return dpiError__check(error, status, lob->conn, "assign locator"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "assign locator"); } @@ -1970,11 +2116,12 @@ int dpiOci__lobOpen(dpiLob *lob, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobOpen", dpiOciSymbols.fnLobOpen) + DPI_OCI_ENSURE_ERROR_HANDLE(error) mode = (lob->type->oracleTypeNum == DPI_ORACLE_TYPE_BFILE) ? DPI_OCI_LOB_READONLY : DPI_OCI_LOB_READWRITE; status = (*dpiOciSymbols.fnLobOpen)(lob->conn->handle, error->handle, lob->locator, mode); - return dpiError__check(error, status, lob->conn, "close LOB"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "close LOB"); } @@ -1990,13 +2137,14 @@ int dpiOci__lobRead2(dpiLob *lob, uint64_t offset, uint64_t *amountInBytes, int status; DPI_OCI_LOAD_SYMBOL("OCILobRead2", dpiOciSymbols.fnLobRead2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) charsetId = (lob->type->charsetForm == DPI_SQLCS_NCHAR) ? 
lob->env->ncharsetId : lob->env->charsetId; status = (*dpiOciSymbols.fnLobRead2)(lob->conn->handle, error->handle, lob->locator, amountInBytes, amountInChars, offset, buffer, bufferLength, DPI_OCI_ONE_PIECE, NULL, NULL, charsetId, lob->type->charsetForm); - return dpiError__check(error, status, lob->conn, "read from LOB"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "read from LOB"); } @@ -2009,11 +2157,12 @@ int dpiOci__lobTrim2(dpiLob *lob, uint64_t newLength, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCILobTrim2", dpiOciSymbols.fnLobTrim2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnLobTrim2)(lob->conn->handle, error->handle, lob->locator, newLength); if (status == DPI_OCI_INVALID_HANDLE) return dpiOci__lobCreateTemporary(lob, error); - return dpiError__check(error, status, lob->conn, "trim LOB"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "trim LOB"); } @@ -2029,13 +2178,14 @@ int dpiOci__lobWrite2(dpiLob *lob, uint64_t offset, const char *value, int status; DPI_OCI_LOAD_SYMBOL("OCILobWrite2", dpiOciSymbols.fnLobWrite2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) charsetId = (lob->type->charsetForm == DPI_SQLCS_NCHAR) ? lob->env->ncharsetId : lob->env->charsetId; status = (*dpiOciSymbols.fnLobWrite2)(lob->conn->handle, error->handle, lob->locator, &lengthInBytes, &lengthInChars, offset, (void*) value, valueLength, DPI_OCI_ONE_PIECE, NULL, NULL, charsetId, lob->type->charsetForm); - return dpiError__check(error, status, lob->conn, "write to LOB"); + DPI_OCI_CHECK_AND_RETURN(error, status, lob->conn, "write to LOB"); } @@ -2050,11 +2200,12 @@ int dpiOci__memoryAlloc(dpiConn *conn, void **ptr, uint32_t size, *ptr = NULL; DPI_OCI_LOAD_SYMBOL("OCIMemoryAlloc", dpiOciSymbols.fnMemoryAlloc) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnMemoryAlloc)(conn->sessionHandle, error->handle, ptr, DPI_OCI_DURATION_SESSION, size, DPI_OCI_MEMORY_CLEARED); - if (checkError) - return dpiError__check(error, status, conn, "allocate memory"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "allocate memory"); } @@ -2065,6 +2216,7 @@ int dpiOci__memoryAlloc(dpiConn *conn, void **ptr, uint32_t size, int dpiOci__memoryFree(dpiConn *conn, void *ptr, dpiError *error) { DPI_OCI_LOAD_SYMBOL("OCIMemoryFree", dpiOciSymbols.fnMemoryFree) + DPI_OCI_ENSURE_ERROR_HANDLE(error) (*dpiOciSymbols.fnMemoryFree)(conn->sessionHandle, error->handle, ptr); return DPI_SUCCESS; } @@ -2083,10 +2235,11 @@ int dpiOci__nlsCharSetConvert(void *envHandle, uint16_t destCharsetId, DPI_OCI_LOAD_SYMBOL("OCINlsCharSetConvert", dpiOciSymbols.fnNlsCharSetConvert) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnNlsCharSetConvert)(envHandle, error->handle, destCharsetId, dest, destLength, sourceCharsetId, source, sourceLength, resultSize); - return dpiError__check(error, status, NULL, "convert text"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "convert text"); } @@ -2169,9 +2322,10 @@ int dpiOci__nlsNumericInfoGet(void *envHandle, int32_t *value, uint16_t item, DPI_OCI_LOAD_SYMBOL("OCINlsNumericInfoGet", dpiOciSymbols.fnNlsNumericInfoGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnNlsNumericInfoGet)(envHandle, error->handle, value, item); - return dpiError__check(error, status, NULL, "get NLS info"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get NLS info"); } @@ -2185,9 +2339,10 @@ int dpiOci__numberFromInt(const void *value, unsigned int valueLength, int status; 
DPI_OCI_LOAD_SYMBOL("OCINumberFromInt", dpiOciSymbols.fnNumberFromInt) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnNumberFromInt)(error->handle, value, valueLength, flags, number); - return dpiError__check(error, status, NULL, "number from integer"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "number from integer"); } @@ -2200,9 +2355,10 @@ int dpiOci__numberFromReal(const double value, void *number, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCINumberFromReal", dpiOciSymbols.fnNumberFromReal) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnNumberFromReal)(error->handle, &value, sizeof(double), number); - return dpiError__check(error, status, NULL, "number from real"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "number from real"); } @@ -2216,9 +2372,10 @@ int dpiOci__numberToInt(void *number, void *value, unsigned int valueLength, int status; DPI_OCI_LOAD_SYMBOL("OCINumberToInt", dpiOciSymbols.fnNumberToInt) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnNumberToInt)(error->handle, number, valueLength, flags, value); - return dpiError__check(error, status, NULL, "number to integer"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "number to integer"); } @@ -2231,9 +2388,10 @@ int dpiOci__numberToReal(double *value, void *number, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCINumberToReal", dpiOciSymbols.fnNumberToReal) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnNumberToReal)(error->handle, number, sizeof(double), value); - return dpiError__check(error, status, NULL, "number to real"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "number to real"); } @@ -2247,11 +2405,12 @@ int dpiOci__objectCopy(dpiObject *obj, void *sourceInstance, int status; DPI_OCI_LOAD_SYMBOL("OCIObjectCopy", dpiOciSymbols.fnObjectCopy) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnObjectCopy)(obj->env->handle, error->handle, obj->type->conn->handle, sourceInstance, sourceIndicator, obj->instance, obj->indicator, obj->type->tdo, DPI_OCI_DURATION_SESSION, DPI_OCI_DEFAULT); - return dpiError__check(error, status, obj->type->conn, "copy object"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "copy object"); } @@ -2259,15 +2418,18 @@ int dpiOci__objectCopy(dpiObject *obj, void *sourceInstance, // dpiOci__objectFree() [INTERNAL] // Wrapper for OCIObjectFree(). 
//----------------------------------------------------------------------------- -int dpiOci__objectFree(dpiObject *obj, int checkError, dpiError *error) +int dpiOci__objectFree(void *envHandle, void *data, int checkError, + dpiError *error) { int status; DPI_OCI_LOAD_SYMBOL("OCIObjectFree", dpiOciSymbols.fnObjectFree) - status = (*dpiOciSymbols.fnObjectFree)(obj->env->handle, error->handle, - obj->instance, DPI_OCI_DEFAULT); - if (checkError && dpiError__check(error, status, obj->type->conn, - "free instance") < 0) { + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnObjectFree)(envHandle, error->handle, data, + DPI_OCI_DEFAULT); + if (checkError && DPI_OCI_ERROR_OCCURRED(status)) { + dpiError__setFromOCI(error, status, NULL, "free instance"); + // during the attempt to free, PL/SQL records fail with error // "ORA-21602: operation does not support the specified typecode", but // a subsequent attempt will yield error "OCI-21500: internal error @@ -2277,13 +2439,6 @@ int dpiOci__objectFree(dpiObject *obj, int checkError, dpiError *error) return DPI_SUCCESS; return DPI_FAILURE; } - if (obj->freeIndicator) { - status = (*dpiOciSymbols.fnObjectFree)(obj->env->handle, error->handle, - obj->indicator, DPI_OCI_DEFAULT); - if (checkError && dpiError__check(error, status, obj->type->conn, - "free indicator") < 0) - return DPI_FAILURE; - } return DPI_SUCCESS; } @@ -2299,11 +2454,12 @@ int dpiOci__objectGetAttr(dpiObject *obj, dpiObjectAttr *attr, int status; DPI_OCI_LOAD_SYMBOL("OCIObjectGetAttr", dpiOciSymbols.fnObjectGetAttr) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnObjectGetAttr)(obj->env->handle, error->handle, obj->instance, obj->indicator, obj->type->tdo, &attr->name, &attr->nameLength, 1, 0, 0, scalarValueIndicator, valueIndicator, value, tdo); - return dpiError__check(error, status, obj->type->conn, "get attribute"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "get attribute"); } @@ -2316,9 +2472,10 @@ int dpiOci__objectGetInd(dpiObject *obj, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIObjectGetInd", dpiOciSymbols.fnObjectGetInd) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnObjectGetInd)(obj->env->handle, error->handle, obj->instance, &obj->indicator); - return dpiError__check(error, status, obj->type->conn, "get indicator"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "get indicator"); } @@ -2331,10 +2488,11 @@ int dpiOci__objectNew(dpiObject *obj, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIObjectNew", dpiOciSymbols.fnObjectNew) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnObjectNew)(obj->env->handle, error->handle, obj->type->conn->handle, obj->type->typeCode, obj->type->tdo, NULL, DPI_OCI_DURATION_SESSION, 1, &obj->instance); - return dpiError__check(error, status, obj->type->conn, "create object"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "create object"); } @@ -2348,10 +2506,11 @@ int dpiOci__objectPin(void *envHandle, void *objRef, void **obj, int status; DPI_OCI_LOAD_SYMBOL("OCIObjectPin", dpiOciSymbols.fnObjectPin) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnObjectPin)(envHandle, error->handle, objRef, NULL, DPI_OCI_PIN_ANY, DPI_OCI_DURATION_SESSION, DPI_OCI_LOCK_NONE, obj); - return dpiError__check(error, status, NULL, "pin reference"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "pin reference"); } @@ -2366,11 +2525,12 @@ int dpiOci__objectSetAttr(dpiObject *obj, dpiObjectAttr *attr, int status; 
DPI_OCI_LOAD_SYMBOL("OCIObjectSetAttr", dpiOciSymbols.fnObjectSetAttr) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnObjectSetAttr)(obj->env->handle, error->handle, obj->instance, obj->indicator, obj->type->tdo, &attr->name, &attr->nameLength, 1, NULL, 0, scalarValueIndicator, valueIndicator, value); - return dpiError__check(error, status, obj->type->conn, "set attribute"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "set attribute"); } @@ -2386,10 +2546,11 @@ int dpiOci__passwordChange(dpiConn *conn, const char *userName, int status; DPI_OCI_LOAD_SYMBOL("OCIPasswordChange", dpiOciSymbols.fnPasswordChange) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnPasswordChange)(conn->handle, error->handle, userName, userNameLength, oldPassword, oldPasswordLength, newPassword, newPasswordLength, mode); - return dpiError__check(error, status, conn, "change password"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "change password"); } @@ -2403,9 +2564,10 @@ int dpiOci__paramGet(const void *handle, uint32_t handleType, void **parameter, int status; DPI_OCI_LOAD_SYMBOL("OCIParamGet", dpiOciSymbols.fnParamGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnParamGet)(handle, handleType, error->handle, parameter, pos); - return dpiError__check(error, status, NULL, action); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, action); } @@ -2418,17 +2580,21 @@ int dpiOci__ping(dpiConn *conn, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIPing", dpiOciSymbols.fnPing) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnPing)(conn->handle, error->handle, DPI_OCI_DEFAULT); - status = dpiError__check(error, status, conn, "ping"); + if (DPI_OCI_ERROR_OCCURRED(status)) { + dpiError__setFromOCI(error, status, conn, "ping"); - // attempting to ping a database earlier than 10g will result in error - // ORA-1010: invalid OCI operation, but that implies a successful ping - // so ignore that error and treat it as a successful operation - if (status < 0 && error->buffer->code == 1010) - return DPI_SUCCESS; + // attempting to ping a database earlier than 10g will result in error + // ORA-1010: invalid OCI operation, but that implies a successful ping + // so ignore that error and treat it as a successful operation + if (error->buffer->code == 1010) + return DPI_SUCCESS; + return DPI_FAILURE; + } - return status; + return DPI_SUCCESS; } @@ -2442,9 +2608,10 @@ int dpiOci__rawAssignBytes(void *envHandle, const char *value, int status; DPI_OCI_LOAD_SYMBOL("OCIRawAssignBytes", dpiOciSymbols.fnRawAssignBytes) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnRawAssignBytes)(envHandle, error->handle, value, valueLength, handle); - return dpiError__check(error, status, NULL, "assign bytes to raw"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "assign bytes to raw"); } @@ -2472,9 +2639,10 @@ int dpiOci__rawResize(void *envHandle, void **handle, uint32_t newSize, int status; DPI_OCI_LOAD_SYMBOL("OCIRawResize", dpiOciSymbols.fnRawResize) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnRawResize)(envHandle, error->handle, newSize, handle); - return dpiError__check(error, status, NULL, "resize raw"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "resize raw"); } @@ -2497,7 +2665,7 @@ int dpiOci__rawSize(void *envHandle, void *handle, uint32_t *size) // Wrapper for OCI allocation of memory, only used when debugging memory // allocation. 
//----------------------------------------------------------------------------- -static void *dpiOci__reallocMem(void *unused, void *ptr, size_t newSize) +static void *dpiOci__reallocMem(UNUSED void *unused, void *ptr, size_t newSize) { char message[80]; void *newPtr; @@ -2520,12 +2688,13 @@ int dpiOci__rowidToChar(dpiRowid *rowid, char *buffer, uint16_t *bufferSize, int status; DPI_OCI_LOAD_SYMBOL("OCIRowidToChar", dpiOciSymbols.fnRowidToChar) + DPI_OCI_ENSURE_ERROR_HANDLE(error) origSize = *bufferSize; status = (*dpiOciSymbols.fnRowidToChar)(rowid->handle, buffer, bufferSize, error->handle); if (origSize == 0) return DPI_SUCCESS; - return dpiError__check(error, status, NULL, "get rowid as string"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get rowid as string"); } @@ -2539,9 +2708,10 @@ int dpiOci__serverAttach(dpiConn *conn, const char *connectString, int status; DPI_OCI_LOAD_SYMBOL("OCIServerAttach", dpiOciSymbols.fnServerAttach) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnServerAttach)(conn->serverHandle, error->handle, connectString, (int32_t) connectStringLength, DPI_OCI_DEFAULT); - return dpiError__check(error, status, conn, "server attach"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "server attach"); } @@ -2554,11 +2724,12 @@ int dpiOci__serverDetach(dpiConn *conn, int checkError, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCIServerDetach", dpiOciSymbols.fnServerDetach) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnServerDetach)(conn->serverHandle, error->handle, DPI_OCI_DEFAULT); - if (checkError) - return dpiError__check(error, status, conn, "detatch from server"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "detatch from server"); } @@ -2571,6 +2742,7 @@ int dpiOci__serverRelease(dpiConn *conn, char *buffer, uint32_t bufferSize, { int status; + DPI_OCI_ENSURE_ERROR_HANDLE(error) if (conn->env->versionInfo->versionNum < 18) { DPI_OCI_LOAD_SYMBOL("OCIServerRelease", dpiOciSymbols.fnServerRelease) status = (*dpiOciSymbols.fnServerRelease)(conn->handle, error->handle, @@ -2582,7 +2754,7 @@ int dpiOci__serverRelease(dpiConn *conn, char *buffer, uint32_t bufferSize, buffer, bufferSize, DPI_OCI_HTYPE_SVCCTX, version, DPI_OCI_DEFAULT); } - return dpiError__check(error, status, conn, "get server version"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get server version"); } @@ -2596,9 +2768,10 @@ int dpiOci__sessionBegin(dpiConn *conn, uint32_t credentialType, int status; DPI_OCI_LOAD_SYMBOL("OCISessionBegin", dpiOciSymbols.fnSessionBegin) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSessionBegin)(conn->handle, error->handle, conn->sessionHandle, credentialType, mode); - return dpiError__check(error, status, conn, "begin session"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "begin session"); } @@ -2611,11 +2784,12 @@ int dpiOci__sessionEnd(dpiConn *conn, int checkError, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCISessionEnd", dpiOciSymbols.fnSessionEnd) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSessionEnd)(conn->handle, error->handle, conn->sessionHandle, DPI_OCI_DEFAULT); - if (checkError) - return dpiError__check(error, status, conn, "end session"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "end session"); } @@ -2631,10 +2805,11 @@ int dpiOci__sessionGet(void *envHandle, void **handle, void *authInfo, int status; 
DPI_OCI_LOAD_SYMBOL("OCISessionGet", dpiOciSymbols.fnSessionGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSessionGet)(envHandle, error->handle, handle, authInfo, connectString, connectStringLength, tag, tagLength, outTag, outTagLength, found, mode); - return dpiError__check(error, status, NULL, "get session"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "get session"); } @@ -2652,12 +2827,13 @@ int dpiOci__sessionPoolCreate(dpiPool *pool, const char *connectString, DPI_OCI_LOAD_SYMBOL("OCISessionPoolCreate", dpiOciSymbols.fnSessionPoolCreate) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSessionPoolCreate)(pool->env->handle, error->handle, pool->handle, (char**) &pool->name, &pool->nameLength, connectString, connectStringLength, minSessions, maxSessions, sessionIncrement, userName, userNameLength, password, passwordLength, mode); - return dpiError__check(error, status, NULL, "create pool"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "create pool"); } @@ -2673,6 +2849,7 @@ int dpiOci__sessionPoolDestroy(dpiPool *pool, uint32_t mode, int checkError, DPI_OCI_LOAD_SYMBOL("OCISessionPoolDestroy", dpiOciSymbols.fnSessionPoolDestroy) + DPI_OCI_ENSURE_ERROR_HANDLE(error) // clear the pool handle immediately so that no further attempts are made // to use the pool while the pool is being closed; if the pool close fails, @@ -2681,10 +2858,9 @@ int dpiOci__sessionPoolDestroy(dpiPool *pool, uint32_t mode, int checkError, pool->handle = NULL; status = (*dpiOciSymbols.fnSessionPoolDestroy)(handle, error->handle, mode); - if (checkError && - dpiError__check(error, status, NULL, "destroy pool") < 0) { + if (checkError && DPI_OCI_ERROR_OCCURRED(status)) { pool->handle = handle; - return DPI_FAILURE; + return dpiError__setFromOCI(error, status, NULL, "destroy pool"); } dpiOci__handleFree(handle, DPI_OCI_HTYPE_SPOOL); return DPI_SUCCESS; @@ -2701,11 +2877,12 @@ int dpiOci__sessionRelease(dpiConn *conn, const char *tag, uint32_t tagLength, int status; DPI_OCI_LOAD_SYMBOL("OCISessionRelease", dpiOciSymbols.fnSessionRelease) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSessionRelease)(conn->handle, error->handle, tag, tagLength, mode); - if (checkError) - return dpiError__check(error, status, conn, "release session"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "release session"); } @@ -2720,9 +2897,51 @@ int dpiOci__shardingKeyColumnAdd(void *shardingKey, void *col, uint32_t colLen, DPI_OCI_LOAD_SYMBOL("OCIShardingKeyColumnAdd", dpiOciSymbols.fnShardingKeyColumnAdd) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnShardingKeyColumnAdd)(shardingKey, error->handle, col, colLen, colType, DPI_OCI_DEFAULT); - return dpiError__check(error, status, NULL, "add sharding column"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "add sharding column"); +} + + +//----------------------------------------------------------------------------- +// dpiOci__sodaBulkInsert() [INTERNAL] +// Wrapper for OCISodaBulkInsert(). 
+//----------------------------------------------------------------------------- +int dpiOci__sodaBulkInsert(dpiSodaColl *coll, void **documents, + uint32_t numDocuments, void *outputOptions, uint32_t mode, + dpiError *error) +{ + int status; + + DPI_OCI_LOAD_SYMBOL("OCISodaBulkInsert", dpiOciSymbols.fnSodaBulkInsert) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnSodaBulkInsert)(coll->db->conn->handle, + coll->handle, documents, numDocuments, outputOptions, + error->handle, mode); + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, + "insert multiple documents"); +} + + +//----------------------------------------------------------------------------- +// dpiOci__sodaBulkInsertAndGet() [INTERNAL] +// Wrapper for OCISodaBulkInsert(). +//----------------------------------------------------------------------------- +int dpiOci__sodaBulkInsertAndGet(dpiSodaColl *coll, void **documents, + uint32_t numDocuments, void *outputOptions, uint32_t mode, + dpiError *error) +{ + int status; + + DPI_OCI_LOAD_SYMBOL("OCISodaBulkInsertAndGet", + dpiOciSymbols.fnSodaBulkInsertAndGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnSodaBulkInsertAndGet)(coll->db->conn->handle, + coll->handle, documents, numDocuments, outputOptions, + error->handle, mode); + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, + "insert (and get) multiple documents"); } @@ -2738,10 +2957,12 @@ int dpiOci__sodaCollCreateWithMetadata(dpiSodaDb *db, const char *name, DPI_OCI_LOAD_SYMBOL("OCISodaCollCreateWithMetadata", dpiOciSymbols.fnSodaCollCreateWithMetadata) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaCollCreateWithMetadata)(db->conn->handle, name, nameLength, metadata, metadataLength, handle, error->handle, mode); - return dpiError__check(error, status, db->conn, "create SODA collection"); + DPI_OCI_CHECK_AND_RETURN(error, status, db->conn, + "create SODA collection"); } @@ -2755,9 +2976,10 @@ int dpiOci__sodaCollDrop(dpiSodaColl *coll, int *isDropped, uint32_t mode, int status; DPI_OCI_LOAD_SYMBOL("OCISodaCollDrop", dpiOciSymbols.fnSodaCollDrop) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaCollDrop)(coll->db->conn->handle, coll->handle, isDropped, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "drop SODA collection"); } @@ -2772,13 +2994,14 @@ int dpiOci__sodaCollGetNext(dpiConn *conn, void *cursorHandle, int status; DPI_OCI_LOAD_SYMBOL("OCISodaCollGetNext", dpiOciSymbols.fnSodaCollGetNext) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaCollGetNext)(conn->handle, cursorHandle, collectionHandle, error->handle, mode); if (status == DPI_OCI_NO_DATA) { *collectionHandle = NULL; return DPI_SUCCESS; } - return dpiError__check(error, status, conn, "get next collection"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get next collection"); } @@ -2793,9 +3016,10 @@ int dpiOci__sodaCollList(dpiSodaDb *db, const char *startingName, int status; DPI_OCI_LOAD_SYMBOL("OCISodaCollList", dpiOciSymbols.fnSodaCollList) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaCollList)(db->conn->handle, startingName, startingNameLength, handle, error->handle, mode); - return dpiError__check(error, status, db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, db->conn, "get SODA collection cursor"); } @@ -2810,9 +3034,10 @@ int dpiOci__sodaCollOpen(dpiSodaDb *db, const char *name, uint32_t nameLength, int status; 
DPI_OCI_LOAD_SYMBOL("OCISodaCollOpen", dpiOciSymbols.fnSodaCollOpen) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaCollOpen)(db->conn->handle, name, nameLength, handle, error->handle, mode); - return dpiError__check(error, status, db->conn, "open SODA collection"); + DPI_OCI_CHECK_AND_RETURN(error, status, db->conn, "open SODA collection"); } @@ -2827,9 +3052,11 @@ int dpiOci__sodaDataGuideGet(dpiSodaColl *coll, void **handle, uint32_t mode, DPI_OCI_LOAD_SYMBOL("OCISodaDataGuideGet", dpiOciSymbols.fnSodaDataGuideGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaDataGuideGet)(coll->db->conn->handle, coll->handle, DPI_OCI_DEFAULT, handle, error->handle, mode); - if (dpiError__check(error, status, coll->db->conn, "get data guide") < 0) { + if (DPI_OCI_ERROR_OCCURRED(status)) { + dpiError__setFromOCI(error, status, coll->db->conn, "get data guide"); if (error->buffer->code != 24801) return DPI_FAILURE; *handle = NULL; @@ -2848,9 +3075,10 @@ int dpiOci__sodaDocCount(dpiSodaColl *coll, void *options, uint32_t mode, int status; DPI_OCI_LOAD_SYMBOL("OCISodaDocCount", dpiOciSymbols.fnSodaDocCount) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaDocCount)(coll->db->conn->handle, coll->handle, options, count, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "get document count"); } @@ -2865,13 +3093,14 @@ int dpiOci__sodaDocGetNext(dpiSodaDocCursor *cursor, void **handle, int status; DPI_OCI_LOAD_SYMBOL("OCISodaDocGetNext", dpiOciSymbols.fnSodaDocGetNext) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaDocGetNext)(cursor->coll->db->conn->handle, cursor->handle, handle, error->handle, mode); if (status == DPI_OCI_NO_DATA) { *handle = NULL; return DPI_SUCCESS; } - return dpiError__check(error, status, cursor->coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, cursor->coll->db->conn, "get next document"); } @@ -2886,13 +3115,14 @@ int dpiOci__sodaFind(dpiSodaColl *coll, const void *options, uint32_t flags, int status; DPI_OCI_LOAD_SYMBOL("OCISodaFind", dpiOciSymbols.fnSodaFind) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaFind)(coll->db->conn->handle, coll->handle, options, flags, handle, error->handle, mode); if (status == DPI_OCI_NO_DATA) { *handle = NULL; return DPI_SUCCESS; } - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "find SODA documents"); } @@ -2907,13 +3137,15 @@ int dpiOci__sodaFindOne(dpiSodaColl *coll, const void *options, uint32_t flags, int status; DPI_OCI_LOAD_SYMBOL("OCISodaFindOne", dpiOciSymbols.fnSodaFindOne) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaFindOne)(coll->db->conn->handle, coll->handle, options, flags, handle, error->handle, mode); if (status == DPI_OCI_NO_DATA) { *handle = NULL; return DPI_SUCCESS; } - return dpiError__check(error, status, coll->db->conn, "get SODA document"); + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, + "get SODA document"); } @@ -2927,9 +3159,10 @@ int dpiOci__sodaIndexCreate(dpiSodaColl *coll, const char *indexSpec, int status; DPI_OCI_LOAD_SYMBOL("OCISodaIndexCreate", dpiOciSymbols.fnSodaIndexCreate) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaIndexCreate)(coll->db->conn->handle, coll->handle, indexSpec, indexSpecLength, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, "create index"); + 
DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "create index"); } @@ -2943,9 +3176,10 @@ int dpiOci__sodaIndexDrop(dpiSodaColl *coll, const char *name, int status; DPI_OCI_LOAD_SYMBOL("OCISodaIndexDrop", dpiOciSymbols.fnSodaIndexDrop) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaIndexDrop)(coll->db->conn->handle, name, nameLength, isDropped, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, "drop index"); + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "drop index"); } @@ -2959,9 +3193,10 @@ int dpiOci__sodaInsert(dpiSodaColl *coll, void *handle, uint32_t mode, int status; DPI_OCI_LOAD_SYMBOL("OCISodaInsert", dpiOciSymbols.fnSodaInsert) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaInsert)(coll->db->conn->handle, coll->handle, handle, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "insert SODA document"); } @@ -2977,9 +3212,10 @@ int dpiOci__sodaInsertAndGet(dpiSodaColl *coll, void **handle, uint32_t mode, DPI_OCI_LOAD_SYMBOL("OCISodaInsertAndGet", dpiOciSymbols.fnSodaInsertAndGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaInsertAndGet)(coll->db->conn->handle, coll->handle, handle, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "insert and get SODA document"); } @@ -2994,10 +3230,12 @@ int dpiOci__sodaOperKeysSet(const dpiSodaOperOptions *options, void *handle, int status; DPI_OCI_LOAD_SYMBOL("OCISodaOperKeysSet", dpiOciSymbols.fnSodaOperKeysSet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaOperKeysSet)(handle, options->keys, options->keyLengths, options->numKeys, error->handle, DPI_OCI_DEFAULT); - return dpiError__check(error, status, NULL, "set operation options keys"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, + "set operation options keys"); } @@ -3011,9 +3249,10 @@ int dpiOci__sodaRemove(dpiSodaColl *coll, void *options, uint32_t mode, int status; DPI_OCI_LOAD_SYMBOL("OCISodaRemove", dpiOciSymbols.fnSodaRemove) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaRemove)(coll->db->conn->handle, coll->handle, options, count, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "remove documents from SODA collection"); } @@ -3028,9 +3267,10 @@ int dpiOci__sodaReplOne(dpiSodaColl *coll, const void *options, void *handle, int status; DPI_OCI_LOAD_SYMBOL("OCISodaReplOne", dpiOciSymbols.fnSodaReplOne) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaReplOne)(coll->db->conn->handle, coll->handle, options, handle, isReplaced, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "replace SODA document"); } @@ -3046,9 +3286,10 @@ int dpiOci__sodaReplOneAndGet(dpiSodaColl *coll, const void *options, DPI_OCI_LOAD_SYMBOL("OCISodaReplOneAndGet", dpiOciSymbols.fnSodaReplOneAndGet) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSodaReplOneAndGet)(coll->db->conn->handle, coll->handle, options, handle, isReplaced, error->handle, mode); - return dpiError__check(error, status, coll->db->conn, + DPI_OCI_CHECK_AND_RETURN(error, status, coll->db->conn, "replace and get SODA document"); } @@ -3063,9 +3304,10 @@ int dpiOci__stmtExecute(dpiStmt *stmt, uint32_t numIters, uint32_t 
mode, int status; DPI_OCI_LOAD_SYMBOL("OCIStmtExecute", dpiOciSymbols.fnStmtExecute) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStmtExecute)(stmt->conn->handle, stmt->handle, error->handle, numIters, 0, 0, 0, mode); - return dpiError__check(error, status, stmt->conn, "execute"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "execute"); } @@ -3079,13 +3321,16 @@ int dpiOci__stmtFetch2(dpiStmt *stmt, uint32_t numRows, uint16_t fetchMode, int status; DPI_OCI_LOAD_SYMBOL("OCIStmtFetch2", dpiOciSymbols.fnStmtFetch2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStmtFetch2)(stmt->handle, error->handle, numRows, fetchMode, offset, DPI_OCI_DEFAULT); - if (status == DPI_OCI_NO_DATA || fetchMode == DPI_MODE_FETCH_LAST) + if (status == DPI_OCI_NO_DATA || fetchMode == DPI_MODE_FETCH_LAST) { stmt->hasRowsToFetch = 0; - else if (dpiError__check(error, status, stmt->conn, "fetch") < 0) - return DPI_FAILURE; - else stmt->hasRowsToFetch = 1; + } else if (DPI_OCI_ERROR_OCCURRED(status)) { + return dpiError__setFromOCI(error, status, stmt->conn, "fetch"); + } else { + stmt->hasRowsToFetch = 1; + } return DPI_SUCCESS; } @@ -3102,6 +3347,7 @@ int dpiOci__stmtGetBindInfo(dpiStmt *stmt, uint32_t size, uint32_t startLoc, int status; DPI_OCI_LOAD_SYMBOL("OCIStmtGetBindInfo", dpiOciSymbols.fnStmtGetBindInfo) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStmtGetBindInfo)(stmt->handle, error->handle, size, startLoc, numFound, names, nameLengths, indNames, indNameLengths, isDuplicate, bindHandles); @@ -3109,7 +3355,7 @@ int dpiOci__stmtGetBindInfo(dpiStmt *stmt, uint32_t size, uint32_t startLoc, *numFound = 0; return DPI_SUCCESS; } - return dpiError__check(error, status, stmt->conn, "get bind info"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "get bind info"); } @@ -3124,13 +3370,14 @@ int dpiOci__stmtGetNextResult(dpiStmt *stmt, void **handle, dpiError *error) DPI_OCI_LOAD_SYMBOL("OCIStmtGetNextResult", dpiOciSymbols.fnStmtGetNextResult) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStmtGetNextResult)(stmt->handle, error->handle, handle, &returnType, DPI_OCI_DEFAULT); if (status == DPI_OCI_NO_DATA) { *handle = NULL; return DPI_SUCCESS; } - return dpiError__check(error, status, stmt->conn, "get next result"); + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "get next result"); } @@ -3144,12 +3391,13 @@ int dpiOci__stmtPrepare2(dpiStmt *stmt, const char *sql, uint32_t sqlLength, int status; DPI_OCI_LOAD_SYMBOL("OCIStmtPrepare2", dpiOciSymbols.fnStmtPrepare2) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStmtPrepare2)(stmt->conn->handle, &stmt->handle, error->handle, sql, sqlLength, tag, tagLength, DPI_OCI_NTV_SYNTAX, DPI_OCI_DEFAULT); - if (dpiError__check(error, status, stmt->conn, "prepare SQL") < 0) { + if (DPI_OCI_ERROR_OCCURRED(status)) { stmt->handle = NULL; - return DPI_FAILURE; + return dpiError__setFromOCI(error, status, stmt->conn, "prepare SQL"); } return DPI_SUCCESS; @@ -3178,11 +3426,12 @@ int dpiOci__stmtRelease(dpiStmt *stmt, const char *tag, uint32_t tagLength, } DPI_OCI_LOAD_SYMBOL("OCIStmtRelease", dpiOciSymbols.fnStmtRelease) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStmtRelease)(stmt->handle, error->handle, tag, tagLength, mode); - if (checkError) - return dpiError__check(error, status, stmt->conn, "release statement"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, stmt->conn, "release statement"); } @@ 
-3197,9 +3446,10 @@ int dpiOci__stringAssignText(void *envHandle, const char *value, DPI_OCI_LOAD_SYMBOL("OCIStringAssignText", dpiOciSymbols.fnStringAssignText) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStringAssignText)(envHandle, error->handle, value, valueLength, handle); - return dpiError__check(error, status, NULL, "assign to string"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "assign to string"); } @@ -3227,9 +3477,10 @@ int dpiOci__stringResize(void *envHandle, void **handle, uint32_t newSize, int status; DPI_OCI_LOAD_SYMBOL("OCIStringResize", dpiOciSymbols.fnStringResize) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnStringResize)(envHandle, error->handle, newSize, handle); - return dpiError__check(error, status, NULL, "resize string"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "resize string"); } @@ -3251,15 +3502,17 @@ int dpiOci__stringSize(void *envHandle, void *handle, uint32_t *size) // dpiOci__subscriptionRegister() [INTERNAL] // Wrapper for OCISubscriptionRegister(). //----------------------------------------------------------------------------- -int dpiOci__subscriptionRegister(dpiConn *conn, void **handle, dpiError *error) +int dpiOci__subscriptionRegister(dpiConn *conn, void **handle, uint32_t mode, + dpiError *error) { int status; DPI_OCI_LOAD_SYMBOL("OCISubscriptionRegister", dpiOciSymbols.fnSubscriptionRegister) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnSubscriptionRegister)(conn->handle, handle, 1, - error->handle, DPI_OCI_DEFAULT); - return dpiError__check(error, status, conn, "register"); + error->handle, mode); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "register"); } @@ -3270,13 +3523,17 @@ int dpiOci__subscriptionRegister(dpiConn *conn, void **handle, dpiError *error) int dpiOci__subscriptionUnRegister(dpiConn *conn, dpiSubscr *subscr, dpiError *error) { + uint32_t mode; int status; DPI_OCI_LOAD_SYMBOL("OCISubscriptionUnRegister", dpiOciSymbols.fnSubscriptionUnRegister) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + mode = (subscr->clientInitiated) ? 
DPI_OCI_SECURE_NOTIFICATION : + DPI_OCI_DEFAULT; status = (*dpiOciSymbols.fnSubscriptionUnRegister)(conn->handle, - subscr->handle, error->handle, DPI_OCI_DEFAULT); - return dpiError__check(error, status, conn, "unregister"); + subscr->handle, error->handle, mode); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "unregister"); } @@ -3289,9 +3546,10 @@ int dpiOci__tableDelete(dpiObject *obj, int32_t index, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITableDelete", dpiOciSymbols.fnTableDelete) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTableDelete)(obj->env->handle, error->handle, index, obj->instance); - return dpiError__check(error, status, obj->type->conn, "delete element"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "delete element"); } @@ -3305,9 +3563,11 @@ int dpiOci__tableExists(dpiObject *obj, int32_t index, int *exists, int status; DPI_OCI_LOAD_SYMBOL("OCITableExists", dpiOciSymbols.fnTableExists) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTableExists)(obj->env->handle, error->handle, obj->instance, index, exists); - return dpiError__check(error, status, obj->type->conn, "get index exists"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, + "get index exists"); } @@ -3320,9 +3580,11 @@ int dpiOci__tableFirst(dpiObject *obj, int32_t *index, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITableFirst", dpiOciSymbols.fnTableFirst) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTableFirst)(obj->env->handle, error->handle, obj->instance, index); - return dpiError__check(error, status, obj->type->conn, "get first index"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, + "get first index"); } @@ -3335,9 +3597,10 @@ int dpiOci__tableLast(dpiObject *obj, int32_t *index, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITableLast", dpiOciSymbols.fnTableLast) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTableLast)(obj->env->handle, error->handle, obj->instance, index); - return dpiError__check(error, status, obj->type->conn, "get last index"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "get last index"); } @@ -3351,9 +3614,10 @@ int dpiOci__tableNext(dpiObject *obj, int32_t index, int32_t *nextIndex, int status; DPI_OCI_LOAD_SYMBOL("OCITableNext", dpiOciSymbols.fnTableNext) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTableNext)(obj->env->handle, error->handle, index, obj->instance, nextIndex, exists); - return dpiError__check(error, status, obj->type->conn, "get next index"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "get next index"); } @@ -3367,9 +3631,10 @@ int dpiOci__tablePrev(dpiObject *obj, int32_t index, int32_t *prevIndex, int status; DPI_OCI_LOAD_SYMBOL("OCITablePrev", dpiOciSymbols.fnTablePrev) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTablePrev)(obj->env->handle, error->handle, index, obj->instance, prevIndex, exists); - return dpiError__check(error, status, obj->type->conn, "get prev index"); + DPI_OCI_CHECK_AND_RETURN(error, status, obj->type->conn, "get prev index"); } @@ -3382,9 +3647,10 @@ int dpiOci__tableSize(dpiObject *obj, int32_t *size, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITableSize", dpiOciSymbols.fnTableSize) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTableSize)(obj->env->handle, error->handle, obj->instance, size); - return dpiError__check(error, status, obj->type->conn, "get size"); + DPI_OCI_CHECK_AND_RETURN(error, status, 
obj->type->conn, "get size"); } @@ -3431,7 +3697,7 @@ int dpiOci__threadKeyInit(void *envHandle, void *errorHandle, void **key, DPI_OCI_LOAD_SYMBOL("OCIThreadKeyInit", dpiOciSymbols.fnThreadKeyInit) status = (*dpiOciSymbols.fnThreadKeyInit)(envHandle, errorHandle, key, destroyFunc); - return dpiError__check(error, status, NULL, "initialize thread key"); + DPI_OCI_CHECK_AND_RETURN(error, status, NULL, "initialize thread key"); } @@ -3462,9 +3728,10 @@ int dpiOci__transCommit(dpiConn *conn, uint32_t flags, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITransCommit", dpiOciSymbols.fnTransCommit) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTransCommit)(conn->handle, error->handle, flags); - return dpiError__check(error, status, conn, "commit"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "commit"); } @@ -3477,10 +3744,11 @@ int dpiOci__transPrepare(dpiConn *conn, int *commitNeeded, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITransPrepare", dpiOciSymbols.fnTransPrepare) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTransPrepare)(conn->handle, error->handle, DPI_OCI_DEFAULT); *commitNeeded = (status == DPI_OCI_SUCCESS); - return dpiError__check(error, status, conn, "prepare transaction"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "prepare transaction"); } @@ -3493,11 +3761,12 @@ int dpiOci__transRollback(dpiConn *conn, int checkError, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITransRollback", dpiOciSymbols.fnTransRollback) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTransRollback)(conn->handle, error->handle, DPI_OCI_DEFAULT); - if (checkError) - return dpiError__check(error, status, conn, "rollback"); - return DPI_SUCCESS; + if (!checkError) + return DPI_SUCCESS; + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "rollback"); } @@ -3510,9 +3779,29 @@ int dpiOci__transStart(dpiConn *conn, dpiError *error) int status; DPI_OCI_LOAD_SYMBOL("OCITransStart", dpiOciSymbols.fnTransStart) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTransStart)(conn->handle, error->handle, 0, DPI_OCI_TRANS_NEW); - return dpiError__check(error, status, conn, "start transaction"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "start transaction"); +} + + +//----------------------------------------------------------------------------- +// dpiOci__typeByName() [INTERNAL] +// Wrapper for OCITypeByName(). 
+//----------------------------------------------------------------------------- +int dpiOci__typeByName(dpiConn *conn, const char *schema, + uint32_t schemaLength, const char *name, uint32_t nameLength, + void **tdo, dpiError *error) +{ + int status; + + DPI_OCI_LOAD_SYMBOL("OCITypeByName", dpiOciSymbols.fnTypeByName) + DPI_OCI_ENSURE_ERROR_HANDLE(error) + status = (*dpiOciSymbols.fnTypeByName)(conn->env->handle, error->handle, + conn->handle, schema, schemaLength, name, nameLength, NULL, 0, + DPI_OCI_DURATION_SESSION, DPI_OCI_TYPEGET_ALL, tdo); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get type by name"); } @@ -3526,9 +3815,9 @@ int dpiOci__typeByFullName(dpiConn *conn, const char *name, int status; DPI_OCI_LOAD_SYMBOL("OCITypeByFullName", dpiOciSymbols.fnTypeByFullName) + DPI_OCI_ENSURE_ERROR_HANDLE(error) status = (*dpiOciSymbols.fnTypeByFullName)(conn->env->handle, error->handle, conn->handle, name, nameLength, NULL, 0, DPI_OCI_DURATION_SESSION, DPI_OCI_TYPEGET_ALL, tdo); - return dpiError__check(error, status, conn, "get type by full name"); + DPI_OCI_CHECK_AND_RETURN(error, status, conn, "get type by full name"); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiOracleType.c b/vendor/github.com/godror/godror/odpi/src/dpiOracleType.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiOracleType.c rename to vendor/github.com/godror/godror/odpi/src/dpiOracleType.c index 7397d8f52535..ee307a7d9958 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiOracleType.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiOracleType.c @@ -298,12 +298,14 @@ static dpiOracleTypeNum dpiOracleType__convertFromOracle(uint16_t typeCode, if (charsetForm == DPI_SQLCS_NCHAR) return DPI_ORACLE_TYPE_NVARCHAR; return DPI_ORACLE_TYPE_VARCHAR; + case DPI_SQLT_INT: case DPI_SQLT_FLT: case DPI_SQLT_NUM: case DPI_SQLT_PDN: case DPI_SQLT_VNU: case DPI_SQLT_BFLOAT: case DPI_SQLT_BDOUBLE: + case DPI_OCI_TYPECODE_SMALLINT: return DPI_ORACLE_TYPE_NUMBER; case DPI_SQLT_DAT: case DPI_SQLT_ODT: @@ -315,8 +317,8 @@ static dpiOracleTypeNum dpiOracleType__convertFromOracle(uint16_t typeCode, if (charsetForm == DPI_SQLCS_NCHAR) return DPI_ORACLE_TYPE_NCHAR; return DPI_ORACLE_TYPE_CHAR; - case DPI_SQLT_INT: - case DPI_OCI_TYPECODE_SMALLINT: + case DPI_OCI_TYPECODE_BINARY_INTEGER: + case DPI_OCI_TYPECODE_PLS_INTEGER: return DPI_ORACLE_TYPE_NATIVE_INT; case DPI_SQLT_IBFLOAT: return DPI_ORACLE_TYPE_NATIVE_FLOAT; @@ -344,6 +346,7 @@ static dpiOracleTypeNum dpiOracleType__convertFromOracle(uint16_t typeCode, case DPI_SQLT_BFILE: return DPI_ORACLE_TYPE_BFILE; case DPI_SQLT_RDD: + case DPI_OCI_TYPECODE_ROWID: return DPI_ORACLE_TYPE_ROWID; case DPI_SQLT_RSET: return DPI_ORACLE_TYPE_STMT; @@ -352,8 +355,10 @@ static dpiOracleTypeNum dpiOracleType__convertFromOracle(uint16_t typeCode, case DPI_SQLT_INTERVAL_YM: return DPI_ORACLE_TYPE_INTERVAL_YM; case DPI_SQLT_LNG: + case DPI_OCI_TYPECODE_LONG: return DPI_ORACLE_TYPE_LONG_VARCHAR; case DPI_SQLT_LBI: + case DPI_OCI_TYPECODE_LONG_RAW: return DPI_ORACLE_TYPE_LONG_RAW; } return (dpiOracleTypeNum) 0; @@ -497,4 +502,3 @@ int dpiOracleType__populateTypeInfo(dpiConn *conn, void *handle, return DPI_SUCCESS; } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiPool.c b/vendor/github.com/godror/godror/odpi/src/dpiPool.c similarity index 96% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiPool.c rename to vendor/github.com/godror/godror/odpi/src/dpiPool.c index 9d237767163e..58f21e5efc9f 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiPool.c +++ 
b/vendor/github.com/godror/godror/odpi/src/dpiPool.c @@ -30,6 +30,7 @@ int dpiPool__acquireConnection(dpiPool *pool, const char *userName, if (dpiGen__allocate(DPI_HTYPE_CONN, pool->env, (void**) &tempConn, error) < 0) return DPI_FAILURE; + error->env = pool->env; // create the connection if (dpiConn__create(tempConn, pool->env->context, userName, userNameLength, @@ -52,7 +53,7 @@ int dpiPool__acquireConnection(dpiPool *pool, const char *userName, static int dpiPool__checkConnected(dpiPool *pool, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(pool, DPI_HTYPE_POOL, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(pool, DPI_HTYPE_POOL, fnName, error) < 0) return DPI_FAILURE; if (!pool->handle) return dpiError__set(error, "check pool", DPI_ERR_NOT_CONNECTED); @@ -158,6 +159,17 @@ static int dpiPool__create(dpiPool *pool, const char *userName, return DPI_FAILURE; } + // set the maximum number of sessions per shard (valid in 18.3 and higher) + if (pool->env->versionInfo->versionNum > 18 || + (pool->env->versionInfo->versionNum == 18 && + pool->env->versionInfo->releaseNum >= 3)) { + if (dpiOci__attrSet(pool->handle, DPI_OCI_HTYPE_SPOOL, (void*) + &createParams->maxSessionsPerShard, 0, + DPI_OCI_ATTR_SPOOL_MAX_PER_SHARD, + "set max sessions per shard", error) < 0) + return DPI_FAILURE; + } + // set reamining attributes directly pool->homogeneous = createParams->homogeneous; pool->externalAuth = createParams->externalAuth; @@ -359,7 +371,7 @@ int dpiPool_create(const dpiContext *context, const char *userName, dpiError error; // validate parameters - if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, 0, + if (dpiGen__startPublicFn(context, DPI_HTYPE_CONTEXT, __func__, &error) < 0) return dpiGen__endPublicFn(context, DPI_FAILURE, &error); DPI_CHECK_PTR_AND_LENGTH(context, userName) @@ -387,7 +399,7 @@ int dpiPool_create(const dpiContext *context, const char *userName, return dpiGen__endPublicFn(context, DPI_FAILURE, &error); // initialize environment - if (dpiEnv__init(tempPool->env, context, commonParams, &error) < 0) { + if (dpiEnv__init(tempPool->env, context, commonParams, NULL, &error) < 0) { dpiPool__free(tempPool, &error); return dpiGen__endPublicFn(context, DPI_FAILURE, &error); } @@ -403,8 +415,7 @@ int dpiPool_create(const dpiContext *context, const char *userName, createParams->outPoolName = tempPool->name; createParams->outPoolNameLength = tempPool->nameLength; *pool = tempPool; - dpiHandlePool__release(tempPool->env->errorHandles, error.handle, &error); - error.handle = NULL; + dpiHandlePool__release(tempPool->env->errorHandles, &error.handle); return dpiGen__endPublicFn(context, DPI_SUCCESS, &error); } @@ -573,4 +584,3 @@ int dpiPool_setWaitTimeout(dpiPool *pool, uint32_t value) return dpiPool__setAttributeUint(pool, DPI_OCI_ATTR_SPOOL_WAIT_TIMEOUT, value, __func__); } - diff --git a/vendor/github.com/godror/godror/odpi/src/dpiQueue.c b/vendor/github.com/godror/godror/odpi/src/dpiQueue.c new file mode 100644 index 000000000000..9c2f39a9a820 --- /dev/null +++ b/vendor/github.com/godror/godror/odpi/src/dpiQueue.c @@ -0,0 +1,560 @@ +//----------------------------------------------------------------------------- +// Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved. 
+// This program is free software: you can modify it and/or redistribute it +// under the terms of: +// +// (i) the Universal Permissive License v 1.0 or at your option, any +// later version (http://oss.oracle.com/licenses/upl); and/or +// +// (ii) the Apache License v 2.0. (http://www.apache.org/licenses/LICENSE-2.0) +//----------------------------------------------------------------------------- + +//----------------------------------------------------------------------------- +// dpiQueue.c +// Implementation of AQ queues. +//----------------------------------------------------------------------------- + +#include "dpiImpl.h" + +// forward declarations of internal functions only used in this file +static int dpiQueue__allocateBuffer(dpiQueue *queue, uint32_t numElements, + dpiError *error); +static int dpiQueue__deq(dpiQueue *queue, uint32_t *numProps, + dpiMsgProps **props, dpiError *error); +static void dpiQueue__freeBuffer(dpiQueue *queue, dpiError *error); +static int dpiQueue__getPayloadTDO(dpiQueue *queue, void **tdo, + dpiError *error); + + +//----------------------------------------------------------------------------- +// dpiQueue__allocate() [INTERNAL] +// Allocate and initialize a queue. +//----------------------------------------------------------------------------- +int dpiQueue__allocate(dpiConn *conn, const char *name, uint32_t nameLength, + dpiObjectType *payloadType, dpiQueue **queue, dpiError *error) +{ + dpiQueue *tempQueue; + char *buffer; + + // allocate handle; store reference to the connection that created it + if (dpiGen__allocate(DPI_HTYPE_QUEUE, conn->env, (void**) &tempQueue, + error) < 0) + return DPI_FAILURE; + dpiGen__setRefCount(conn, error, 1); + tempQueue->conn = conn; + + // store payload type, which is either an object type or NULL (meaning that + // RAW payloads are being enqueued and dequeued) + if (payloadType) { + dpiGen__setRefCount(payloadType, error, 1); + tempQueue->payloadType = payloadType; + } + + // allocate space for the name of the queue; OCI requires a NULL-terminated + // string so allocate enough space to store the NULL terminator; UTF-16 + // encoded strings are not currently supported + if (dpiUtils__allocateMemory(1, nameLength + 1, 0, "queue name", + (void**) &buffer, error) < 0) { + dpiQueue__free(tempQueue, error); + return DPI_FAILURE; + } + memcpy(buffer, name, nameLength); + buffer[nameLength] = '\0'; + tempQueue->name = buffer; + + *queue = tempQueue; + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__allocateBuffer() [INTERNAL] +// Ensure there is enough space in the buffer for the specified number of +// elements. 
+//----------------------------------------------------------------------------- +static int dpiQueue__allocateBuffer(dpiQueue *queue, uint32_t numElements, + dpiError *error) +{ + dpiQueue__freeBuffer(queue, error); + queue->buffer.numElements = numElements; + if (dpiUtils__allocateMemory(numElements, sizeof(dpiMsgProps*), 1, + "allocate msg props array", (void**) &queue->buffer.props, + error) < 0) + return DPI_FAILURE; + if (dpiUtils__allocateMemory(numElements, sizeof(void*), 1, + "allocate OCI handles array", (void**) &queue->buffer.handles, + error) < 0) + return DPI_FAILURE; + if (dpiUtils__allocateMemory(numElements, sizeof(void*), 1, + "allocate OCI instances array", (void**) &queue->buffer.instances, + error) < 0) + return DPI_FAILURE; + if (dpiUtils__allocateMemory(numElements, sizeof(void*), 1, + "allocate OCI indicators array", + (void**) &queue->buffer.indicators, error) < 0) + return DPI_FAILURE; + if (!queue->payloadType) { + if (dpiUtils__allocateMemory(numElements, sizeof(int16_t), 1, + "allocate OCI raw indicators array", + (void**) &queue->buffer.rawIndicators, error) < 0) + return DPI_FAILURE; + } + if (dpiUtils__allocateMemory(numElements, sizeof(void*), 1, + "allocate message ids array", (void**) &queue->buffer.msgIds, + error) < 0) + return DPI_FAILURE; + + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__check() [INTERNAL] +// Determine if the queue is available to use. +//----------------------------------------------------------------------------- +static int dpiQueue__check(dpiQueue *queue, const char *fnName, + dpiError *error) +{ + if (dpiGen__startPublicFn(queue, DPI_HTYPE_QUEUE, fnName, error) < 0) + return DPI_FAILURE; + if (!queue->conn->handle || queue->conn->closing) + return dpiError__set(error, "check connection", DPI_ERR_NOT_CONNECTED); + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__createDeqOptions() [INTERNAL] +// Create the dequeue options object that will be used for performing +// dequeues against the queue. +//----------------------------------------------------------------------------- +static int dpiQueue__createDeqOptions(dpiQueue *queue, dpiError *error) +{ + dpiDeqOptions *tempOptions; + + if (dpiGen__allocate(DPI_HTYPE_DEQ_OPTIONS, queue->env, + (void**) &tempOptions, error) < 0) + return DPI_FAILURE; + if (dpiDeqOptions__create(tempOptions, queue->conn, error) < 0) { + dpiDeqOptions__free(tempOptions, error); + return DPI_FAILURE; + } + + queue->deqOptions = tempOptions; + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__createEnqOptions() [INTERNAL] +// Create the enqueue options object that will be used for performing +// enqueues against the queue.
+//----------------------------------------------------------------------------- +static int dpiQueue__createEnqOptions(dpiQueue *queue, dpiError *error) +{ + dpiEnqOptions *tempOptions; + + if (dpiGen__allocate(DPI_HTYPE_ENQ_OPTIONS, queue->env, + (void**) &tempOptions, error) < 0) + return DPI_FAILURE; + if (dpiEnqOptions__create(tempOptions, queue->conn, error) < 0) { + dpiEnqOptions__free(tempOptions, error); + return DPI_FAILURE; + } + + queue->enqOptions = tempOptions; + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__deq() [INTERNAL] +// Perform a dequeue of up to the specified number of properties. +//----------------------------------------------------------------------------- +static int dpiQueue__deq(dpiQueue *queue, uint32_t *numProps, + dpiMsgProps **props, dpiError *error) +{ + dpiMsgProps *prop; + void *payloadTDO; + uint32_t i; + + // create dequeue options, if necessary + if (!queue->deqOptions && dpiQueue__createDeqOptions(queue, error) < 0) + return DPI_FAILURE; + + // allocate buffer, if necessary + if (queue->buffer.numElements < *numProps && + dpiQueue__allocateBuffer(queue, *numProps, error) < 0) + return DPI_FAILURE; + + // populate buffer + for (i = 0; i < *numProps; i++) { + prop = queue->buffer.props[i]; + + // create new message properties, if applicable + if (!prop) { + if (dpiMsgProps__allocate(queue->conn, &prop, error) < 0) + return DPI_FAILURE; + queue->buffer.props[i] = prop; + } + + // create payload object, if applicable + if (queue->payloadType && !prop->payloadObj && + dpiObject__allocate(queue->payloadType, NULL, NULL, NULL, + &prop->payloadObj, error) < 0) + return DPI_FAILURE; + + // set OCI arrays + queue->buffer.handles[i] = prop->handle; + if (queue->payloadType) { + queue->buffer.instances[i] = prop->payloadObj->instance; + queue->buffer.indicators[i] = prop->payloadObj->indicator; + } else { + queue->buffer.instances[i] = prop->payloadRaw; + queue->buffer.indicators[i] = &queue->buffer.rawIndicators[i]; + } + queue->buffer.msgIds[i] = prop->msgIdRaw; + + } + + // perform dequeue + if (dpiQueue__getPayloadTDO(queue, &payloadTDO, error) < 0) + return DPI_FAILURE; + if (dpiOci__aqDeqArray(queue->conn, queue->name, queue->deqOptions->handle, + numProps, queue->buffer.handles, payloadTDO, + queue->buffer.instances, queue->buffer.indicators, + queue->buffer.msgIds, error) < 0) { + if (error->buffer->code != 25228) + return DPI_FAILURE; + error->buffer->offset = (uint16_t) *numProps; + } + + // transfer message properties to destination array + for (i = 0; i < *numProps; i++) { + props[i] = queue->buffer.props[i]; + queue->buffer.props[i] = NULL; + if (!queue->payloadType) + props[i]->payloadRaw = queue->buffer.instances[i]; + props[i]->msgIdRaw = queue->buffer.msgIds[i]; + } + + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__enq() [INTERNAL] +// Perform an enqueue of the specified properties. +//----------------------------------------------------------------------------- +static int dpiQueue__enq(dpiQueue *queue, uint32_t numProps, + dpiMsgProps **props, dpiError *error) +{ + void *payloadTDO; + uint32_t i; + + // if no messages are being enqueued, nothing to do! 
+ if (numProps == 0) + return DPI_SUCCESS; + + // create enqueue options, if necessary + if (!queue->enqOptions && dpiQueue__createEnqOptions(queue, error) < 0) + return DPI_FAILURE; + + // allocate buffer, if necessary + if (queue->buffer.numElements < numProps && + dpiQueue__allocateBuffer(queue, numProps, error) < 0) + return DPI_FAILURE; + + // populate buffer + for (i = 0; i < numProps; i++) { + + // perform checks + if (!props[i]->payloadObj && !props[i]->payloadRaw) + return dpiError__set(error, "check payload", + DPI_ERR_QUEUE_NO_PAYLOAD); + if ((queue->payloadType && !props[i]->payloadObj) || + (!queue->payloadType && props[i]->payloadObj)) + return dpiError__set(error, "check payload", + DPI_ERR_QUEUE_WRONG_PAYLOAD_TYPE); + if (queue->payloadType && props[i]->payloadObj && + queue->payloadType->tdo != props[i]->payloadObj->type->tdo) + return dpiError__set(error, "check payload", + DPI_ERR_WRONG_TYPE, + props[i]->payloadObj->type->schemaLength, + props[i]->payloadObj->type->schema, + props[i]->payloadObj->type->nameLength, + props[i]->payloadObj->type->name, + queue->payloadType->schemaLength, + queue->payloadType->schema, + queue->payloadType->nameLength, + queue->payloadType->name); + + // set OCI arrays + queue->buffer.handles[i] = props[i]->handle; + if (queue->payloadType) { + queue->buffer.instances[i] = props[i]->payloadObj->instance; + queue->buffer.indicators[i] = props[i]->payloadObj->indicator; + } else { + queue->buffer.instances[i] = props[i]->payloadRaw; + queue->buffer.indicators[i] = &queue->buffer.rawIndicators[i]; + } + queue->buffer.msgIds[i] = props[i]->msgIdRaw; + + } + + // perform enqueue + if (dpiQueue__getPayloadTDO(queue, &payloadTDO, error) < 0) + return DPI_FAILURE; + if (numProps == 1) { + if (dpiOci__aqEnq(queue->conn, queue->name, queue->enqOptions->handle, + queue->buffer.handles[0], payloadTDO, queue->buffer.instances, + queue->buffer.indicators, queue->buffer.msgIds, error) < 0) + return DPI_FAILURE; + } else { + if (dpiOci__aqEnqArray(queue->conn, queue->name, + queue->enqOptions->handle, &numProps, queue->buffer.handles, + payloadTDO, queue->buffer.instances, queue->buffer.indicators, + queue->buffer.msgIds, error) < 0) { + error->buffer->offset = (uint16_t) numProps; + return DPI_FAILURE; + } + } + + // transfer message ids back to message properties + for (i = 0; i < numProps; i++) + props[i]->msgIdRaw = queue->buffer.msgIds[i]; + + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue__free() [INTERNAL] +// Free the memory for a queue. +//----------------------------------------------------------------------------- +void dpiQueue__free(dpiQueue *queue, dpiError *error) +{ + if (queue->conn) { + dpiGen__setRefCount(queue->conn, error, -1); + queue->conn = NULL; + } + if (queue->payloadType) { + dpiGen__setRefCount(queue->payloadType, error, -1); + queue->payloadType = NULL; + } + if (queue->name) { + dpiUtils__freeMemory((void*) queue->name); + queue->name = NULL; + } + if (queue->deqOptions) { + dpiGen__setRefCount(queue->deqOptions, error, -1); + queue->deqOptions = NULL; + } + if (queue->enqOptions) { + dpiGen__setRefCount(queue->enqOptions, error, -1); + queue->enqOptions = NULL; + } + dpiQueue__freeBuffer(queue, error); + dpiUtils__freeMemory(queue); +} + + +//----------------------------------------------------------------------------- +// dpiQueue__freeBuffer() [INTERNAL] +// Free the memory areas in the queue buffer. 
+//----------------------------------------------------------------------------- +static void dpiQueue__freeBuffer(dpiQueue *queue, dpiError *error) +{ + dpiQueueBuffer *buffer = &queue->buffer; + uint32_t i; + + if (buffer->props) { + for (i = 0; i < buffer->numElements; i++) { + if (buffer->props[i]) { + dpiGen__setRefCount(buffer->props[i], error, -1); + buffer->props[i] = NULL; + } + } + dpiUtils__freeMemory(buffer->props); + buffer->props = NULL; + } + if (buffer->handles) { + dpiUtils__freeMemory(buffer->handles); + buffer->handles = NULL; + } + if (buffer->instances) { + dpiUtils__freeMemory(buffer->instances); + buffer->instances = NULL; + } + if (buffer->indicators) { + dpiUtils__freeMemory(buffer->indicators); + buffer->indicators = NULL; + } + if (buffer->rawIndicators) { + dpiUtils__freeMemory(buffer->rawIndicators); + buffer->rawIndicators = NULL; + } + if (buffer->msgIds) { + dpiUtils__freeMemory(buffer->msgIds); + buffer->msgIds = NULL; + } +} + + +//----------------------------------------------------------------------------- +// dpiQueue__getPayloadTDO() [INTERNAL] +// Acquire the TDO to use for the payload. This will either be the TDO of the +// object type (if one was specified when the queue was created) or it will be +// the RAW TDO cached on the connection. +//----------------------------------------------------------------------------- +static int dpiQueue__getPayloadTDO(dpiQueue *queue, void **tdo, + dpiError *error) +{ + if (queue->payloadType) { + *tdo = queue->payloadType->tdo; + } else { + if (dpiConn__getRawTDO(queue->conn, error) < 0) + return DPI_FAILURE; + *tdo = queue->conn->rawTDO; + } + return DPI_SUCCESS; +} + + +//----------------------------------------------------------------------------- +// dpiQueue_addRef() [PUBLIC] +// Add a reference to the queue. +//----------------------------------------------------------------------------- +int dpiQueue_addRef(dpiQueue *queue) +{ + return dpiGen__addRef(queue, DPI_HTYPE_QUEUE, __func__); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_deqMany() [PUBLIC] +// Dequeue multiple messages from the queue. +//----------------------------------------------------------------------------- +int dpiQueue_deqMany(dpiQueue *queue, uint32_t *numProps, dpiMsgProps **props) +{ + dpiError error; + int status; + + if (dpiQueue__check(queue, __func__, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(queue, numProps) + DPI_CHECK_PTR_NOT_NULL(queue, props) + status = dpiQueue__deq(queue, numProps, props, &error); + return dpiGen__endPublicFn(queue, status, &error); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_deqOne() [PUBLIC] +// Dequeue a single message from the queue. +//----------------------------------------------------------------------------- +int dpiQueue_deqOne(dpiQueue *queue, dpiMsgProps **props) +{ + uint32_t numProps = 1; + dpiError error; + + if (dpiQueue__check(queue, __func__, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(queue, props) + if (dpiQueue__deq(queue, &numProps, props, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + if (numProps == 0) + *props = NULL; + return dpiGen__endPublicFn(queue, DPI_SUCCESS, &error); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_enqMany() [PUBLIC] +// Enqueue multiple message to the queue. 
+//----------------------------------------------------------------------------- +int dpiQueue_enqMany(dpiQueue *queue, uint32_t numProps, dpiMsgProps **props) +{ + dpiError error; + uint32_t i; + int status; + + // validate parameters + if (dpiQueue__check(queue, __func__, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(queue, props) + for (i = 0; i < numProps; i++) { + if (dpiGen__checkHandle(props[i], DPI_HTYPE_MSG_PROPS, + "check message properties", &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + } + status = dpiQueue__enq(queue, numProps, props, &error); + return dpiGen__endPublicFn(queue, status, &error); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_enqOne() [PUBLIC] +// Enqueue a single message to the queue. +//----------------------------------------------------------------------------- +int dpiQueue_enqOne(dpiQueue *queue, dpiMsgProps *props) +{ + dpiError error; + int status; + + if (dpiQueue__check(queue, __func__, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + if (dpiGen__checkHandle(props, DPI_HTYPE_MSG_PROPS, + "check message properties", &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + status = dpiQueue__enq(queue, 1, &props, &error); + return dpiGen__endPublicFn(queue, status, &error); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_getDeqOptions() [PUBLIC] +// Return the dequeue options associated with the queue. If no dequeue +// options are currently associated with the queue, create them first. +//----------------------------------------------------------------------------- +int dpiQueue_getDeqOptions(dpiQueue *queue, dpiDeqOptions **options) +{ + dpiError error; + + if (dpiGen__startPublicFn(queue, DPI_HTYPE_QUEUE, __func__, &error) < 0) + return DPI_FAILURE; + DPI_CHECK_PTR_NOT_NULL(queue, options) + if (!queue->deqOptions && dpiQueue__createDeqOptions(queue, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + *options = queue->deqOptions; + return dpiGen__endPublicFn(queue, DPI_SUCCESS, &error); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_getEnqOptions() [PUBLIC] +// Return the enqueue options associated with the queue. If no enqueue +// options are currently associated with the queue, create them first. +//----------------------------------------------------------------------------- +int dpiQueue_getEnqOptions(dpiQueue *queue, dpiEnqOptions **options) +{ + dpiError error; + + if (dpiGen__startPublicFn(queue, DPI_HTYPE_QUEUE, __func__, &error) < 0) + return DPI_FAILURE; + DPI_CHECK_PTR_NOT_NULL(queue, options) + if (!queue->enqOptions && dpiQueue__createEnqOptions(queue, &error) < 0) + return dpiGen__endPublicFn(queue, DPI_FAILURE, &error); + *options = queue->enqOptions; + return dpiGen__endPublicFn(queue, DPI_SUCCESS, &error); +} + + +//----------------------------------------------------------------------------- +// dpiQueue_release() [PUBLIC] +// Release a reference to the queue. 
+//----------------------------------------------------------------------------- +int dpiQueue_release(dpiQueue *queue) +{ + return dpiGen__release(queue, DPI_HTYPE_QUEUE, __func__); +} diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiRowid.c b/vendor/github.com/godror/godror/odpi/src/dpiRowid.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiRowid.c rename to vendor/github.com/godror/godror/odpi/src/dpiRowid.c index f3de644deba9..9bda49e7cd1f 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiRowid.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiRowid.c @@ -78,8 +78,7 @@ int dpiRowid_getStringValue(dpiRowid *rowid, const char **value, dpiError error; uint16_t i; - if (dpiGen__startPublicFn(rowid, DPI_HTYPE_ROWID, __func__, 1, - &error) < 0) + if (dpiGen__startPublicFn(rowid, DPI_HTYPE_ROWID, __func__, &error) < 0) return dpiGen__endPublicFn(rowid, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(rowid, value) DPI_CHECK_PTR_NOT_NULL(rowid, valueLength) @@ -133,4 +132,3 @@ int dpiRowid_release(dpiRowid *rowid) { return dpiGen__release(rowid, DPI_HTYPE_ROWID, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaColl.c b/vendor/github.com/godror/godror/odpi/src/dpiSodaColl.c similarity index 84% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaColl.c rename to vendor/github.com/godror/godror/odpi/src/dpiSodaColl.c index a874e4b3f7eb..b0a0ded7c34f 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaColl.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiSodaColl.c @@ -57,7 +57,7 @@ int dpiSodaColl__allocate(dpiSodaDb *db, void *handle, dpiSodaColl **coll, static int dpiSodaColl__check(dpiSodaColl *coll, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(coll, DPI_HTYPE_SODA_COLL, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(coll, DPI_HTYPE_SODA_COLL, fnName, error) < 0) return DPI_FAILURE; if (!coll->db->conn->handle || coll->db->conn->closing) return dpiError__set(error, "check connection", DPI_ERR_NOT_CONNECTED); @@ -255,6 +255,85 @@ static int dpiSodaColl__getDocCount(dpiSodaColl *coll, } +//----------------------------------------------------------------------------- +// dpiSodaColl__insertMany() [INTERNAL] +// Insert multiple documents into the collection and return handles to the +// newly created documents, if desired. 
+//----------------------------------------------------------------------------- +static int dpiSodaColl__insertMany(dpiSodaColl *coll, uint32_t numDocs, + void **docHandles, uint32_t flags, dpiSodaDoc **insertedDocs, + dpiError *error) +{ + void *optionsHandle; + uint32_t i, j, mode; + uint64_t docCount; + int status; + + // create OCI output options handle + if (dpiOci__handleAlloc(coll->env->handle, &optionsHandle, + DPI_OCI_HTYPE_SODA_OUTPUT_OPTIONS, + "allocate SODA output options handle", error) < 0) + return DPI_FAILURE; + + // determine mode to pass + mode = DPI_OCI_DEFAULT; + if (flags & DPI_SODA_FLAGS_ATOMIC_COMMIT) + mode |= DPI_OCI_SODA_ATOMIC_COMMIT; + + // perform actual bulk insert + if (insertedDocs) { + status = dpiOci__sodaBulkInsertAndGet(coll, docHandles, numDocs, + optionsHandle, mode, error); + } else { + status = dpiOci__sodaBulkInsert(coll, docHandles, numDocs, + optionsHandle, mode, error); + } + + // on failure, determine the number of documents that were successfully + // inserted and store that information in the error buffer + if (status < 0) { + dpiOci__attrGet(optionsHandle, DPI_OCI_HTYPE_SODA_OUTPUT_OPTIONS, + (void*) &docCount, 0, DPI_OCI_ATTR_SODA_DOC_COUNT, + NULL, error); + error->buffer->offset = (uint16_t) docCount; + } + dpiOci__handleFree(optionsHandle, DPI_OCI_HTYPE_SODA_OUTPUT_OPTIONS); + + // on failure, if using the "AndGet" variant, any document handles that + // were created need to be freed + if (insertedDocs && status < 0) { + for (i = 0; i < numDocs; i++) { + if (docHandles[i]) { + dpiOci__handleFree(docHandles[i], DPI_OCI_HTYPE_SODA_DOCUMENT); + docHandles[i] = NULL; + } + } + } + if (status < 0) + return DPI_FAILURE; + + // return document handles, if desired + if (insertedDocs) { + for (i = 0; i < numDocs; i++) { + if (dpiSodaDoc__allocate(coll->db, docHandles[i], &insertedDocs[i], + error) < 0) { + for (j = 0; j < i; j++) { + dpiSodaDoc__free(insertedDocs[j], error); + insertedDocs[j] = NULL; + } + for (j = i; j < numDocs; j++) { + dpiOci__handleFree(docHandles[i], + DPI_OCI_HTYPE_SODA_DOCUMENT); + } + return DPI_FAILURE; + } + } + } + + return DPI_SUCCESS; +} + + //----------------------------------------------------------------------------- // dpiSodaColl__remove() [INTERNAL] // Internal method for removing documents from a collection. @@ -584,6 +663,53 @@ int dpiSodaColl_getName(dpiSodaColl *coll, const char **value, } +//----------------------------------------------------------------------------- +// dpiSodaColl_insertMany() [PUBLIC] +// Insert multiple documents into the collection and return handles to the +// newly created documents, if desired. 
+//----------------------------------------------------------------------------- +int dpiSodaColl_insertMany(dpiSodaColl *coll, uint32_t numDocs, + dpiSodaDoc **docs, uint32_t flags, dpiSodaDoc **insertedDocs) +{ + void **docHandles; + dpiError error; + uint32_t i; + int status; + + // validate parameters + if (dpiSodaColl__check(coll, __func__, &error) < 0) + return dpiGen__endPublicFn(coll, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(coll, docs) + if (numDocs == 0) { + dpiError__set(&error, "check num documents", DPI_ERR_ARRAY_SIZE_ZERO); + return dpiGen__endPublicFn(coll, DPI_FAILURE, &error); + } + for (i = 0; i < numDocs; i++) { + if (dpiGen__checkHandle(docs[i], DPI_HTYPE_SODA_DOC, "check document", + &error) < 0) + return dpiGen__endPublicFn(coll, DPI_FAILURE, &error); + } + + // bulk insert is only supported with Oracle Client 18.5+ + if (dpiUtils__checkClientVersion(coll->env->versionInfo, 18, 5, + &error) < 0) + return dpiGen__endPublicFn(coll, DPI_FAILURE, &error); + + // create and populate array to hold document handles + if (dpiUtils__allocateMemory(numDocs, sizeof(void*), 1, + "allocate document handles", (void**) &docHandles, &error) < 0) + return dpiGen__endPublicFn(coll, DPI_FAILURE, &error); + for (i = 0; i < numDocs; i++) + docHandles[i] = docs[i]->handle; + + // perform bulk insert + status = dpiSodaColl__insertMany(coll, numDocs, docHandles, flags, + insertedDocs, &error); + dpiUtils__freeMemory(docHandles); + return dpiGen__endPublicFn(coll, status, &error); +} + + //----------------------------------------------------------------------------- // dpiSodaColl_insertOne() [PUBLIC] // Insert a document into the collection and return a handle to the newly @@ -684,4 +810,3 @@ int dpiSodaColl_replaceOne(dpiSodaColl *coll, replacedDoc, &error); return dpiGen__endPublicFn(coll, status, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaCollCursor.c b/vendor/github.com/godror/godror/odpi/src/dpiSodaCollCursor.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaCollCursor.c rename to vendor/github.com/godror/godror/odpi/src/dpiSodaCollCursor.c index 2356b7b9cf94..f4e91a427cb0 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaCollCursor.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiSodaCollCursor.c @@ -43,7 +43,7 @@ int dpiSodaCollCursor__allocate(dpiSodaDb *db, void *handle, static int dpiSodaCollCursor__check(dpiSodaCollCursor *cursor, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(cursor, DPI_HTYPE_SODA_COLL_CURSOR, fnName, 1, + if (dpiGen__startPublicFn(cursor, DPI_HTYPE_SODA_COLL_CURSOR, fnName, error) < 0) return DPI_FAILURE; if (!cursor->handle) @@ -142,4 +142,3 @@ int dpiSodaCollCursor_release(dpiSodaCollCursor *cursor) { return dpiGen__release(cursor, DPI_HTYPE_SODA_COLL_CURSOR, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDb.c b/vendor/github.com/godror/godror/odpi/src/dpiSodaDb.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDb.c rename to vendor/github.com/godror/godror/odpi/src/dpiSodaDb.c index f6c5cacaf30a..0e1605a35075 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDb.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiSodaDb.c @@ -23,7 +23,7 @@ static int dpiSodaDb__checkConnected(dpiSodaDb *db, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(db, DPI_HTYPE_SODA_DB, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(db, DPI_HTYPE_SODA_DB, fnName, error) < 0) return DPI_FAILURE; if (!db->conn->handle || 
db->conn->closing) return dpiError__set(error, "check connection", DPI_ERR_NOT_CONNECTED); @@ -201,7 +201,7 @@ int dpiSodaDb_createCollection(dpiSodaDb *db, const char *name, //----------------------------------------------------------------------------- int dpiSodaDb_createDocument(dpiSodaDb *db, const char *key, uint32_t keyLength, const char *content, uint32_t contentLength, - const char *mediaType, uint32_t mediaTypeLength, uint32_t flags, + const char *mediaType, uint32_t mediaTypeLength, UNUSED uint32_t flags, dpiSodaDoc **doc) { int detectEncoding; @@ -429,4 +429,3 @@ int dpiSodaDb_release(dpiSodaDb *db) { return dpiGen__release(db, DPI_HTYPE_SODA_DB, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDoc.c b/vendor/github.com/godror/godror/odpi/src/dpiSodaDoc.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDoc.c rename to vendor/github.com/godror/godror/odpi/src/dpiSodaDoc.c index 1b144ee7a6a9..b009e33a44a2 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDoc.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiSodaDoc.c @@ -43,7 +43,7 @@ int dpiSodaDoc__allocate(dpiSodaDb *db, void *handle, dpiSodaDoc **doc, static int dpiSodaDoc__check(dpiSodaDoc *doc, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(doc, DPI_HTYPE_SODA_DOC, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(doc, DPI_HTYPE_SODA_DOC, fnName, error) < 0) return DPI_FAILURE; if (!doc->db->conn->handle || doc->db->conn->closing) return dpiError__set(error, "check connection", DPI_ERR_NOT_CONNECTED); @@ -229,4 +229,3 @@ int dpiSodaDoc_release(dpiSodaDoc *doc) { return dpiGen__release(doc, DPI_HTYPE_SODA_DOC, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDocCursor.c b/vendor/github.com/godror/godror/odpi/src/dpiSodaDocCursor.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDocCursor.c rename to vendor/github.com/godror/godror/odpi/src/dpiSodaDocCursor.c index a7d7999b43b3..9bfd2bdbeea2 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSodaDocCursor.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiSodaDocCursor.c @@ -43,7 +43,7 @@ int dpiSodaDocCursor__allocate(dpiSodaColl *coll, void *handle, static int dpiSodaDocCursor__check(dpiSodaDocCursor *cursor, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(cursor, DPI_HTYPE_SODA_DOC_CURSOR, fnName, 1, + if (dpiGen__startPublicFn(cursor, DPI_HTYPE_SODA_DOC_CURSOR, fnName, error) < 0) return DPI_FAILURE; if (!cursor->handle) @@ -142,4 +142,3 @@ int dpiSodaDocCursor_release(dpiSodaDocCursor *cursor) { return dpiGen__release(cursor, DPI_HTYPE_SODA_DOC_CURSOR, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiStmt.c b/vendor/github.com/godror/godror/odpi/src/dpiStmt.c similarity index 95% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiStmt.c rename to vendor/github.com/godror/godror/odpi/src/dpiStmt.c index 6cfd01fa1879..cd3520b93b0a 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiStmt.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiStmt.c @@ -212,9 +212,9 @@ static int dpiStmt__bind(dpiStmt *stmt, dpiVar *var, int addReference, //----------------------------------------------------------------------------- static int dpiStmt__check(dpiStmt *stmt, const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(stmt, DPI_HTYPE_STMT, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(stmt, DPI_HTYPE_STMT, fnName, error) < 0) return DPI_FAILURE; - if (!stmt->handle) + if (!stmt->handle || 
(stmt->parentStmt && !stmt->parentStmt->handle)) return dpiError__set(error, "check closed", DPI_ERR_STMT_CLOSED); if (dpiConn__checkConnected(stmt->conn, error) < 0) return DPI_FAILURE; @@ -321,14 +321,19 @@ int dpiStmt__close(dpiStmt *stmt, const char *tag, uint32_t tagLength, dpiStmt__clearBatchErrors(stmt); dpiStmt__clearBindVars(stmt, error); dpiStmt__clearQueryVars(stmt, error); + if (stmt->lastRowid) + dpiGen__setRefCount(stmt->lastRowid, error, -1); if (stmt->handle) { - if (!stmt->conn->deadSession && stmt->conn->handle) { + if (stmt->parentStmt) { + dpiGen__setRefCount(stmt->parentStmt, error, -1); + stmt->parentStmt = NULL; + } else if (!stmt->conn->deadSession && stmt->conn->handle) { if (stmt->isOwned) dpiOci__handleFree(stmt->handle, DPI_OCI_HTYPE_STMT); else status = dpiOci__stmtRelease(stmt, tag, tagLength, propagateErrors, error); } - if (!stmt->conn->closing) + if (!stmt->conn->closing && !stmt->parentStmt) dpiHandleList__removeHandle(stmt->conn->openStmts, stmt->openSlotNum); stmt->handle = NULL; @@ -479,6 +484,7 @@ static int dpiStmt__define(dpiStmt *stmt, uint32_t pos, dpiVar *var, { void *defineHandle = NULL; dpiQueryInfo *queryInfo; + int tempBool; // no need to perform define if variable is unchanged if (stmt->queryVars[pos - 1] == var) @@ -513,6 +519,15 @@ static int dpiStmt__define(dpiStmt *stmt, uint32_t pos, dpiVar *var, return DPI_FAILURE; } + // specify that the LOB length should be prefetched + if (var->nativeTypeNum == DPI_NATIVE_TYPE_LOB) { + tempBool = 1; + if (dpiOci__attrSet(defineHandle, DPI_OCI_HTYPE_DEFINE, + (void*) &tempBool, 0, DPI_OCI_ATTR_LOBPREFETCH_LENGTH, + "set lob prefetch length", error) < 0) + return DPI_FAILURE; + } + // define objects, if applicable if (var->buffer.objectIndicator && dpiOci__defineObject(var, defineHandle, error) < 0) @@ -677,6 +692,10 @@ static int dpiStmt__fetch(dpiStmt *stmt, dpiError *error) void dpiStmt__free(dpiStmt *stmt, dpiError *error) { dpiStmt__close(stmt, NULL, 0, 0, error); + if (stmt->parentStmt) { + dpiGen__setRefCount(stmt->parentStmt, error, -1); + stmt->parentStmt = NULL; + } if (stmt->conn) { dpiGen__setRefCount(stmt->conn, error, -1); stmt->conn = NULL; @@ -753,7 +772,7 @@ static int dpiStmt__getBatchErrors(dpiStmt *stmt, dpiError *error) // get error message localError.buffer = &stmt->batchErrors[i]; localError.handle = batchErrorHandle; - dpiError__check(&localError, DPI_OCI_ERROR, stmt->conn, + dpiError__setFromOCI(&localError, DPI_OCI_ERROR, stmt->conn, "get batch error"); if (error->buffer->errorNum) { overallStatus = DPI_FAILURE; @@ -773,6 +792,42 @@ static int dpiStmt__getBatchErrors(dpiStmt *stmt, dpiError *error) } +//----------------------------------------------------------------------------- +// dpiStmt__getRowCount() [INTERNAL] +// Return the number of rows affected by the last DML executed (for insert, +// update, delete and merge) or the number of rows fetched (for queries). In +// all other cases, 0 is returned. 
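The dpiStmt__getRowCount helper introduced above centralizes how the affected/fetched row count is read (DPI_OCI_ATTR_ROW_COUNT on pre-12 clients, DPI_OCI_ATTR_UB8_ROW_COUNT otherwise). From Go that count surfaces through database/sql; a minimal sketch, with a hypothetical table and bind value:

package example

import (
	"context"
	"database/sql"
)

// rowsAffected shows where the driver-side row count ends up: sql.Result.
// The UPDATE statement and bind value are illustrative only.
func rowsAffected(ctx context.Context, db *sql.DB) (int64, error) {
	res, err := db.ExecContext(ctx,
		"UPDATE employees SET salary = salary * 1.1 WHERE dept_id = :1", 42)
	if err != nil {
		return 0, err
	}
	return res.RowsAffected()
}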
+//----------------------------------------------------------------------------- +static int dpiStmt__getRowCount(dpiStmt *stmt, uint64_t *count, + dpiError *error) +{ + uint32_t rowCount32; + + if (stmt->statementType == DPI_STMT_TYPE_SELECT) + *count = stmt->rowCount; + else if (stmt->statementType != DPI_STMT_TYPE_INSERT && + stmt->statementType != DPI_STMT_TYPE_UPDATE && + stmt->statementType != DPI_STMT_TYPE_DELETE && + stmt->statementType != DPI_STMT_TYPE_MERGE && + stmt->statementType != DPI_STMT_TYPE_CALL && + stmt->statementType != DPI_STMT_TYPE_BEGIN && + stmt->statementType != DPI_STMT_TYPE_DECLARE) { + *count = 0; + } else if (stmt->env->versionInfo->versionNum < 12) { + if (dpiOci__attrGet(stmt->handle, DPI_OCI_HTYPE_STMT, &rowCount32, 0, + DPI_OCI_ATTR_ROW_COUNT, "get row count", error) < 0) + return DPI_FAILURE; + *count = rowCount32; + } else { + if (dpiOci__attrGet(stmt->handle, DPI_OCI_HTYPE_STMT, count, 0, + DPI_OCI_ATTR_UB8_ROW_COUNT, "get row count", error) < 0) + return DPI_FAILURE; + } + + return DPI_SUCCESS; +} + + //----------------------------------------------------------------------------- // dpiStmt__getQueryInfo() [INTERNAL] // Get query information for the position in question. @@ -1484,6 +1539,8 @@ int dpiStmt_getImplicitResult(dpiStmt *stmt, dpiStmt **implicitResult) if (dpiStmt__allocate(stmt->conn, 0, &tempStmt, &error) < 0) return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); tempStmt->handle = handle; + dpiGen__setRefCount(stmt, &error, 1); + tempStmt->parentStmt = stmt; if (dpiStmt__createQueryVars(tempStmt, &error) < 0) { dpiStmt__free(tempStmt, &error); return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); @@ -1522,6 +1579,46 @@ int dpiStmt_getInfo(dpiStmt *stmt, dpiStmtInfo *info) } +//----------------------------------------------------------------------------- +// dpiStmt_getLastRowid() [PUBLIC] +// Returns the rowid of the last row that was affected by a DML statement. If +// no rows were affected by the last statement executed or the last statement +// executed was not a DML statement, NULL is returned. +//----------------------------------------------------------------------------- +int dpiStmt_getLastRowid(dpiStmt *stmt, dpiRowid **rowid) +{ + uint64_t rowCount; + dpiError error; + + if (dpiStmt__check(stmt, __func__, &error) < 0) + return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); + DPI_CHECK_PTR_NOT_NULL(stmt, rowid) + *rowid = NULL; + if (stmt->statementType == DPI_STMT_TYPE_INSERT || + stmt->statementType == DPI_STMT_TYPE_UPDATE || + stmt->statementType == DPI_STMT_TYPE_DELETE || + stmt->statementType == DPI_STMT_TYPE_MERGE) { + if (dpiStmt__getRowCount(stmt, &rowCount, &error) < 0) + return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); + if (rowCount > 0) { + if (stmt->lastRowid) { + dpiGen__setRefCount(stmt->lastRowid, &error, -1); + stmt->lastRowid = NULL; + } + if (dpiRowid__allocate(stmt->conn, &stmt->lastRowid, &error) < 0) + return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); + if (dpiOci__attrGet(stmt->handle, DPI_OCI_HTYPE_STMT, + stmt->lastRowid->handle, 0, DPI_OCI_ATTR_ROWID, + "get last rowid", &error) < 0) + return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); + *rowid = stmt->lastRowid; + } + } + + return dpiGen__endPublicFn(stmt, DPI_SUCCESS, &error); +} + + //----------------------------------------------------------------------------- // dpiStmt_getNumQueryColumns() [PUBLIC] // Returns the number of query columns associated with a statement. 
If the @@ -1612,33 +1709,14 @@ int dpiStmt_getQueryValue(dpiStmt *stmt, uint32_t pos, //----------------------------------------------------------------------------- int dpiStmt_getRowCount(dpiStmt *stmt, uint64_t *count) { - uint32_t rowCount32; dpiError error; + int status; if (dpiStmt__check(stmt, __func__, &error) < 0) return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(stmt, count) - if (stmt->statementType == DPI_STMT_TYPE_SELECT) - *count = stmt->rowCount; - else if (stmt->statementType != DPI_STMT_TYPE_INSERT && - stmt->statementType != DPI_STMT_TYPE_UPDATE && - stmt->statementType != DPI_STMT_TYPE_DELETE && - stmt->statementType != DPI_STMT_TYPE_MERGE && - stmt->statementType != DPI_STMT_TYPE_CALL && - stmt->statementType != DPI_STMT_TYPE_BEGIN && - stmt->statementType != DPI_STMT_TYPE_DECLARE) { - *count = 0; - } else if (stmt->env->versionInfo->versionNum < 12) { - if (dpiOci__attrGet(stmt->handle, DPI_OCI_HTYPE_STMT, &rowCount32, 0, - DPI_OCI_ATTR_ROW_COUNT, "get row count", &error) < 0) - return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); - *count = rowCount32; - } else { - if (dpiOci__attrGet(stmt->handle, DPI_OCI_HTYPE_STMT, count, 0, - DPI_OCI_ATTR_UB8_ROW_COUNT, "get row count", &error) < 0) - return dpiGen__endPublicFn(stmt, DPI_FAILURE, &error); - } - return dpiGen__endPublicFn(stmt, DPI_SUCCESS, &error); + status = dpiStmt__getRowCount(stmt, count, &error); + return dpiGen__endPublicFn(stmt, status, &error); } @@ -1818,4 +1896,3 @@ int dpiStmt_setFetchArraySize(dpiStmt *stmt, uint32_t arraySize) stmt->fetchArraySize = arraySize; return dpiGen__endPublicFn(stmt, DPI_SUCCESS, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSubscr.c b/vendor/github.com/godror/godror/odpi/src/dpiSubscr.c similarity index 93% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiSubscr.c rename to vendor/github.com/godror/godror/odpi/src/dpiSubscr.c index 55c00168e0d9..cf1ed60b2d8c 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiSubscr.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiSubscr.c @@ -40,9 +40,19 @@ static void dpiSubscr__callback(dpiSubscr *subscr, UNUSED void *handle, dpiError error; // ensure that the subscription handle is still valid - if (dpiGen__startPublicFn(subscr, DPI_HTYPE_SUBSCR, __func__, 1, - &error) < 0) + if (dpiGen__startPublicFn(subscr, DPI_HTYPE_SUBSCR, __func__, + &error) < 0) { dpiGen__endPublicFn(subscr, DPI_FAILURE, &error); + return; + } + + // if the subscription is no longer registered, nothing further to do + dpiMutex__acquire(subscr->mutex); + if (!subscr->registered) { + dpiMutex__release(subscr->mutex); + dpiGen__endPublicFn(subscr, DPI_SUCCESS, &error); + return; + } // populate message memset(&message, 0, sizeof(message)); @@ -52,11 +62,13 @@ static void dpiSubscr__callback(dpiSubscr *subscr, UNUSED void *handle, } message.registered = subscr->registered; - // invoke user callback + // invoke user callback; temporarily increase reference count to ensure + // that the subscription is not freed during the callback + dpiGen__setRefCount(subscr, &error, 1); (*subscr->callback)(subscr->callbackContext, &message); - - // clean up message dpiSubscr__freeMessage(&message); + dpiMutex__release(subscr->mutex); + dpiGen__setRefCount(subscr, &error, -1); dpiGen__endPublicFn(subscr, DPI_SUCCESS, &error); } @@ -68,7 +80,7 @@ static void dpiSubscr__callback(dpiSubscr *subscr, UNUSED void *handle, static int dpiSubscr__check(dpiSubscr *subscr, const char *fnName, dpiError *error) { - if 
(dpiGen__startPublicFn(subscr, DPI_HTYPE_SUBSCR, fnName, 1, error) < 0) + if (dpiGen__startPublicFn(subscr, DPI_HTYPE_SUBSCR, fnName, error) < 0) return DPI_FAILURE; if (!subscr->handle) return dpiError__set(error, "check closed", DPI_ERR_SUBSCR_CLOSED); @@ -84,7 +96,7 @@ static int dpiSubscr__check(dpiSubscr *subscr, const char *fnName, int dpiSubscr__create(dpiSubscr *subscr, dpiConn *conn, dpiSubscrCreateParams *params, dpiError *error) { - uint32_t qosFlags; + uint32_t qosFlags, mode; int32_t int32Val; int rowids; @@ -95,6 +107,8 @@ int dpiSubscr__create(dpiSubscr *subscr, dpiConn *conn, subscr->callbackContext = params->callbackContext; subscr->subscrNamespace = params->subscrNamespace; subscr->qos = params->qos; + subscr->clientInitiated = params->clientInitiated; + dpiMutex__initialize(subscr->mutex); // create the subscription handle if (dpiOci__handleAlloc(conn->env->handle, &subscr->handle, @@ -222,11 +236,27 @@ int dpiSubscr__create(dpiSubscr *subscr, dpiConn *conn, } - // register the subscription - if (dpiOci__subscriptionRegister(conn, &subscr->handle, error) < 0) + // register the subscription; client initiated subscriptions are only valid + // with 19.4 client and database + mode = DPI_OCI_DEFAULT; + if (params->clientInitiated) { + if (dpiUtils__checkClientVersion(conn->env->versionInfo, 19, 4, + error) < 0) + return DPI_FAILURE; + if (dpiUtils__checkDatabaseVersion(conn, 19, 4, error) < 0) + return DPI_FAILURE; + mode = DPI_OCI_SECURE_NOTIFICATION; + } + if (dpiOci__subscriptionRegister(conn, &subscr->handle, mode, error) < 0) return DPI_FAILURE; subscr->registered = 1; + // acquire the registration id + if (dpiOci__attrGet(subscr->handle, DPI_OCI_HTYPE_SUBSCRIPTION, + &params->outRegId, NULL, DPI_OCI_ATTR_SUBSCR_CQ_REGID, + "get registration id", error) < 0) + return DPI_FAILURE; + return DPI_SUCCESS; } @@ -237,6 +267,7 @@ int dpiSubscr__create(dpiSubscr *subscr, dpiConn *conn, //----------------------------------------------------------------------------- void dpiSubscr__free(dpiSubscr *subscr, dpiError *error) { + dpiMutex__acquire(subscr->mutex); if (subscr->handle) { if (subscr->registered) dpiOci__subscriptionUnRegister(subscr->conn, subscr, error); @@ -247,6 +278,8 @@ void dpiSubscr__free(dpiSubscr *subscr, dpiError *error) dpiGen__setRefCount(subscr->conn, error, -1); subscr->conn = NULL; } + dpiMutex__release(subscr->mutex); + dpiMutex__destroy(subscr->mutex); dpiUtils__freeMemory(subscr); } @@ -615,9 +648,12 @@ static int dpiSubscr__populateQueryChangeMessage(dpiSubscr *subscr, static int dpiSubscr__prepareStmt(dpiSubscr *subscr, dpiStmt *stmt, const char *sql, uint32_t sqlLength, dpiError *error) { - // prepare statement for execution + // prepare statement for execution; only SELECT statements are supported if (dpiStmt__prepare(stmt, sql, sqlLength, NULL, 0, error) < 0) return DPI_FAILURE; + if (stmt->statementType != DPI_STMT_TYPE_SELECT) + return dpiError__set(error, "subscr prepare statement", + DPI_ERR_NOT_SUPPORTED); // fetch array size is set to 1 in order to avoid over allocation since // the query is not really going to be used for fetching rows, just for @@ -675,4 +711,3 @@ int dpiSubscr_release(dpiSubscr *subscr) { return dpiGen__release(subscr, DPI_HTYPE_SUBSCR, __func__); } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiUtils.c b/vendor/github.com/godror/godror/odpi/src/dpiUtils.c similarity index 99% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiUtils.c rename to vendor/github.com/godror/godror/odpi/src/dpiUtils.c index 
1aad9118400f..0ab09e544879 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiUtils.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiUtils.c @@ -251,7 +251,7 @@ int dpiUtils__parseNumberString(const char *value, uint32_t valueLength, return dpiError__set(error, "no digits in exponent", DPI_ERR_INVALID_NUMBER); exponentDigits[numExponentDigits] = '\0'; - exponent = (int16_t) strtol(exponentDigits, NULL, 0); + exponent = (int16_t) strtol(exponentDigits, NULL, 10); if (exponentIsNegative) exponent = -exponent; *decimalPointIndex += exponent; @@ -399,4 +399,3 @@ int dpiUtils__setAttributesFromCommonCreateParams(void *handle, return DPI_SUCCESS; } - diff --git a/vendor/gopkg.in/goracle.v2/odpi/src/dpiVar.c b/vendor/github.com/godror/godror/odpi/src/dpiVar.c similarity index 97% rename from vendor/gopkg.in/goracle.v2/odpi/src/dpiVar.c rename to vendor/github.com/godror/godror/odpi/src/dpiVar.c index de9db59bc482..d8cde787c641 100644 --- a/vendor/gopkg.in/goracle.v2/odpi/src/dpiVar.c +++ b/vendor/github.com/godror/godror/odpi/src/dpiVar.c @@ -226,10 +226,9 @@ static void dpiVar__assignCallbackBuffer(dpiVar *var, dpiVarBuffer *buffer, // Verifies that the array size has not been exceeded. //----------------------------------------------------------------------------- static int dpiVar__checkArraySize(dpiVar *var, uint32_t pos, - const char *fnName, int needErrorHandle, dpiError *error) + const char *fnName, dpiError *error) { - if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, fnName, needErrorHandle, - error) < 0) + if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, fnName, error) < 0) return DPI_FAILURE; if (pos >= var->buffer.maxArraySize) return dpiError__set(error, "check array size", @@ -629,7 +628,6 @@ int dpiVar__getValue(dpiVar *var, dpiVarBuffer *buffer, uint32_t pos, return DPI_SUCCESS; } - // check for a NULL value; for objects the indicator is elsewhere data = &buffer->externalData[pos]; if (!buffer->objectIndicator) @@ -638,8 +636,17 @@ int dpiVar__getValue(dpiVar *var, dpiVarBuffer *buffer, uint32_t pos, data->isNull = (*((int16_t*) buffer->objectIndicator[pos]) == DPI_OCI_IND_NULL); else data->isNull = 1; - if (data->isNull) + if (data->isNull) { + if (inFetch && var->objectType && var->objectType->isCollection) { + if (dpiOci__objectFree(var->env->handle, + buffer->data.asObject[pos], 1, error) < 0) + return DPI_FAILURE; + if (dpiOci__objectFree(var->env->handle, + buffer->objectIndicator[pos], 1, error) < 0) + return DPI_FAILURE; + } return DPI_SUCCESS; + } // check return code for variable length data if (buffer->returnCode) { @@ -781,8 +788,8 @@ int dpiVar__getValue(dpiVar *var, dpiVarBuffer *buffer, uint32_t pos, // does nothing useful except satisfy OCI requirements. 
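The dpiUtils__parseNumberString hunk above pins strtol to an explicit base of 10 so exponent digits such as "010" are never reinterpreted as octal. The same consideration applies when parsing numbers in Go; a small, purely illustrative sketch:

package example

import "strconv"

// parseExponent always parses in base 10, mirroring the strtol fix above.
// With base 0, strconv.ParseInt("010", 0, 16) would return 8 (octal);
// with base 10 it returns 10.
func parseExponent(digits string) (int64, error) {
	return strconv.ParseInt(digits, 10, 16)
}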
//----------------------------------------------------------------------------- int32_t dpiVar__inBindCallback(dpiVar *var, UNUSED void *bindp, - UNUSED uint32_t iter, uint32_t index, void **bufpp, uint32_t *alenp, - uint8_t *piecep, void **indpp) + UNUSED uint32_t iter, UNUSED uint32_t index, void **bufpp, + uint32_t *alenp, uint8_t *piecep, void **indpp) { dpiDynamicBytes *dynBytes; @@ -1208,7 +1215,8 @@ static int dpiVar__setFromBytes(dpiVar *var, uint32_t pos, const char *value, dynBytes = &var->buffer.dynamicBytes[pos]; if (dpiVar__allocateDynamicBytes(dynBytes, valueLength, error) < 0) return DPI_FAILURE; - memcpy(dynBytes->chunks->ptr, value, valueLength); + if (valueLength > 0) + memcpy(dynBytes->chunks->ptr, value, valueLength); dynBytes->numChunks = 1; dynBytes->chunks->length = valueLength; bytes->ptr = dynBytes->chunks->ptr; @@ -1461,6 +1469,10 @@ int dpiVar__setValue(dpiVar *var, dpiVarBuffer *buffer, uint32_t pos, case DPI_ORACLE_TYPE_NUMBER: return dpiDataBuffer__toOracleNumberFromDouble( &data->value, error, &buffer->data.asNumber[pos]); + case DPI_ORACLE_TYPE_DATE: + return dpiDataBuffer__toOracleDateFromDouble( + &data->value, var->env, error, + &buffer->data.asDate[pos]); case DPI_ORACLE_TYPE_TIMESTAMP: case DPI_ORACLE_TYPE_TIMESTAMP_TZ: case DPI_ORACLE_TYPE_TIMESTAMP_LTZ: @@ -1520,6 +1532,7 @@ static int dpiVar__validateTypes(const dpiOracleType *oracleType, dpiNativeTypeNum nativeTypeNum, dpiError *error) { switch (oracleType->oracleTypeNum) { + case DPI_ORACLE_TYPE_DATE: case DPI_ORACLE_TYPE_TIMESTAMP: case DPI_ORACLE_TYPE_TIMESTAMP_TZ: case DPI_ORACLE_TYPE_TIMESTAMP_LTZ: @@ -1564,7 +1577,7 @@ int dpiVar_copyData(dpiVar *var, uint32_t pos, dpiVar *sourceVar, dpiError error; int status; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); if (dpiGen__checkHandle(sourceVar, DPI_HTYPE_VAR, "check source var", &error) < 0) @@ -1594,7 +1607,7 @@ int dpiVar_getNumElementsInArray(dpiVar *var, uint32_t *numElements) { dpiError error; - if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, __func__, 0, &error) < 0) + if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(var, numElements) if (var->dynBindBuffers) @@ -1618,7 +1631,7 @@ int dpiVar_getReturnedData(dpiVar *var, uint32_t pos, uint32_t *numElements, { dpiError error; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(var, numElements) DPI_CHECK_PTR_NOT_NULL(var, data) @@ -1642,7 +1655,7 @@ int dpiVar_getSizeInBytes(dpiVar *var, uint32_t *sizeInBytes) { dpiError error; - if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, __func__, 0, &error) < 0) + if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); DPI_CHECK_PTR_NOT_NULL(var, sizeInBytes) *sizeInBytes = var->sizeInBytes; @@ -1673,9 +1686,9 @@ int dpiVar_setFromBytes(dpiVar *var, uint32_t pos, const char *value, dpiError error; int status; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); - DPI_CHECK_PTR_NOT_NULL(var, value) + DPI_CHECK_PTR_AND_LENGTH(var, value) if (var->nativeTypeNum != DPI_NATIVE_TYPE_BYTES && 
var->nativeTypeNum != DPI_NATIVE_TYPE_LOB) { dpiError__set(&error, "native type", DPI_ERR_NOT_SUPPORTED); @@ -1702,7 +1715,7 @@ int dpiVar_setFromLob(dpiVar *var, uint32_t pos, dpiLob *lob) dpiError error; int status; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); if (var->nativeTypeNum != DPI_NATIVE_TYPE_LOB) { dpiError__set(&error, "native type", DPI_ERR_NOT_SUPPORTED); @@ -1724,7 +1737,7 @@ int dpiVar_setFromObject(dpiVar *var, uint32_t pos, dpiObject *obj) dpiError error; int status; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); if (var->nativeTypeNum != DPI_NATIVE_TYPE_OBJECT) { dpiError__set(&error, "native type", DPI_ERR_NOT_SUPPORTED); @@ -1746,7 +1759,7 @@ int dpiVar_setFromRowid(dpiVar *var, uint32_t pos, dpiRowid *rowid) dpiError error; int status; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); if (var->nativeTypeNum != DPI_NATIVE_TYPE_ROWID) { dpiError__set(&error, "native type", DPI_ERR_NOT_SUPPORTED); @@ -1768,7 +1781,7 @@ int dpiVar_setFromStmt(dpiVar *var, uint32_t pos, dpiStmt *stmt) dpiError error; int status; - if (dpiVar__checkArraySize(var, pos, __func__, 1, &error) < 0) + if (dpiVar__checkArraySize(var, pos, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); if (var->nativeTypeNum != DPI_NATIVE_TYPE_STMT) { dpiError__set(&error, "native type", DPI_ERR_NOT_SUPPORTED); @@ -1788,7 +1801,7 @@ int dpiVar_setNumElementsInArray(dpiVar *var, uint32_t numElements) { dpiError error; - if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, __func__, 0, &error) < 0) + if (dpiGen__startPublicFn(var, DPI_HTYPE_VAR, __func__, &error) < 0) return dpiGen__endPublicFn(var, DPI_FAILURE, &error); if (numElements > var->buffer.maxArraySize) { dpiError__set(&error, "check num elements", @@ -1798,4 +1811,3 @@ int dpiVar_setNumElementsInArray(dpiVar *var, uint32_t numElements) var->buffer.actualArraySize = numElements; return dpiGen__endPublicFn(var, DPI_SUCCESS, &error); } - diff --git a/vendor/gopkg.in/goracle.v2/orahlp.go b/vendor/github.com/godror/godror/orahlp.go similarity index 55% rename from vendor/gopkg.in/goracle.v2/orahlp.go rename to vendor/github.com/godror/godror/orahlp.go index 6775a4678daf..b9b5068f51f0 100644 --- a/vendor/gopkg.in/goracle.v2/orahlp.go +++ b/vendor/github.com/godror/godror/orahlp.go @@ -1,19 +1,9 @@ // Copyright 2017 Tamás Gulácsi // // -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
+// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror import ( "bufio" @@ -23,11 +13,211 @@ import ( "database/sql/driver" "fmt" "io" + "math" + "strconv" "sync" + "time" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" ) +// Number as string +type Number string + +var ( + // Int64 for converting to-from int64. + Int64 = intType{} + // Float64 for converting to-from float64. + Float64 = floatType{} + // Num for converting to-from Number (string) + Num = numType{} +) + +type intType struct{} + +func (intType) String() string { return "Int64" } +func (intType) ConvertValue(v interface{}) (driver.Value, error) { + if Log != nil { + Log("ConvertValue", "Int64", "value", v) + } + switch x := v.(type) { + case int8: + return int64(x), nil + case int16: + return int64(x), nil + case int32: + return int64(x), nil + case int64: + return x, nil + case uint16: + return int64(x), nil + case uint32: + return int64(x), nil + case uint64: + return int64(x), nil + case float32: + if _, f := math.Modf(float64(x)); f != 0 { + return int64(x), errors.Errorf("non-zero fractional part: %f", f) + } + return int64(x), nil + case float64: + if _, f := math.Modf(x); f != 0 { + return int64(x), errors.Errorf("non-zero fractional part: %f", f) + } + return int64(x), nil + case string: + if x == "" { + return 0, nil + } + return strconv.ParseInt(x, 10, 64) + case Number: + if x == "" { + return 0, nil + } + return strconv.ParseInt(string(x), 10, 64) + default: + return nil, errors.Errorf("unknown type %T", v) + } +} + +type floatType struct{} + +func (floatType) String() string { return "Float64" } +func (floatType) ConvertValue(v interface{}) (driver.Value, error) { + if Log != nil { + Log("ConvertValue", "Float64", "value", v) + } + switch x := v.(type) { + case int8: + return float64(x), nil + case int16: + return float64(x), nil + case int32: + return float64(x), nil + case uint16: + return float64(x), nil + case uint32: + return float64(x), nil + case int64: + return float64(x), nil + case uint64: + return float64(x), nil + case float32: + return float64(x), nil + case float64: + return x, nil + case string: + if x == "" { + return 0, nil + } + return strconv.ParseFloat(x, 64) + case Number: + if x == "" { + return 0, nil + } + return strconv.ParseFloat(string(x), 64) + default: + return nil, errors.Errorf("unknown type %T", v) + } +} + +type numType struct{} + +func (numType) String() string { return "Num" } +func (numType) ConvertValue(v interface{}) (driver.Value, error) { + if Log != nil { + Log("ConvertValue", "Num", "value", v) + } + switch x := v.(type) { + case string: + if x == "" { + return 0, nil + } + return x, nil + case Number: + if x == "" { + return 0, nil + } + return string(x), nil + case int8, int16, int32, int64, uint16, uint32, uint64: + return fmt.Sprintf("%d", x), nil + case float32, float64: + return fmt.Sprintf("%f", x), nil + default: + return nil, errors.Errorf("unknown type %T", v) + } +} +func (n Number) String() string { return string(n) } + +// Value returns the Number as driver.Value +func (n Number) Value() (driver.Value, error) { + return string(n), nil +} + +// Scan into the Number from a driver.Value. 
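The Number string type and the Int64/Float64/Num converters defined above let callers move Oracle NUMBER values around without losing precision. A short sketch exercising only the behavior shown in this hunk:

package example

import (
	"database/sql/driver"
	"errors"

	"github.com/godror/godror"
)

// convertValues uses the converters defined above. Int64 parses strings and
// Number values in base 10 and rejects floats with a non-zero fractional
// part; Num renders numeric input back as a string.
func convertValues() (driver.Value, error) {
	if _, err := godror.Int64.ConvertValue(godror.Number("42")); err != nil {
		return nil, err
	}
	if _, err := godror.Int64.ConvertValue(3.5); err == nil {
		return nil, errors.New("expected a non-zero fractional part error")
	}
	return godror.Num.ConvertValue(1234.5) // "1234.500000"
}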
+func (n *Number) Scan(v interface{}) error { + if v == nil { + *n = "" + return nil + } + switch x := v.(type) { + case string: + *n = Number(x) + case Number: + *n = x + case int8, int16, int32, int64, uint16, uint32, uint64: + *n = Number(fmt.Sprintf("%d", x)) + case float32, float64: + *n = Number(fmt.Sprintf("%f", x)) + default: + return errors.Errorf("unknown type %T", v) + } + return nil +} + +// MarshalText marshals a Number to text. +func (n Number) MarshalText() ([]byte, error) { return []byte(n), nil } + +// UnmarshalText parses text into a Number. +func (n *Number) UnmarshalText(p []byte) error { + var dotNum int + for i, c := range p { + if !(c == '-' && i == 0 || '0' <= c && c <= '9') { + if c == '.' { + dotNum++ + if dotNum == 1 { + continue + } + } + return errors.Errorf("unknown char %c in %q", c, p) + } + } + *n = Number(p) + return nil +} + +// MarshalJSON marshals a Number into a JSON string. +func (n Number) MarshalJSON() ([]byte, error) { + b, err := n.MarshalText() + b2 := make([]byte, 1, 1+len(b)+1) + b2[0] = '"' + b2 = append(b2, b...) + b2 = append(b2, '"') + return b2, err +} + +// UnmarshalJSON parses a JSON string into the Number. +func (n *Number) UnmarshalJSON(p []byte) error { + *n = Number("") + if len(p) == 0 { + return nil + } + if len(p) > 2 && p[0] == '"' && p[len(p)-1] == '"' { + p = p[1 : len(p)-1] + } + return n.UnmarshalText(p) +} + // QueryColumn is the described column. type QueryColumn struct { Name string @@ -52,10 +242,11 @@ type Querier interface { // This can help using unknown-at-compile-time, a.k.a. // dynamic queries. func DescribeQuery(ctx context.Context, db Execer, qry string) ([]QueryColumn, error) { - c, err := getConn(db) + c, err := getConn(ctx, db) if err != nil { return nil, err } + defer c.close(false) stmt, err := c.PrepareContext(ctx, qry) if err != nil { @@ -213,7 +404,10 @@ func MapToSlice(qry string, metParam func(string) interface{}) (string, []interf func EnableDbmsOutput(ctx context.Context, conn Execer) error { qry := "BEGIN DBMS_OUTPUT.enable(1000000); END;" _, err := conn.ExecContext(ctx, qry) - return errors.Wrap(err, qry) + if err != nil { + return errors.Errorf("%s: %w", qry, err) + } + return nil } // ReadDbmsOutput copies the DBMS_OUTPUT buffer into the given io.Writer. @@ -224,7 +418,7 @@ func ReadDbmsOutput(ctx context.Context, w io.Writer, conn preparer) error { const qry = `BEGIN DBMS_OUTPUT.get_lines(:1, :2); END;` stmt, err := conn.PrepareContext(ctx, qry) if err != nil { - return errors.Wrap(err, qry) + return errors.Errorf("%s: %w", qry, err) } lines := make([]string, maxNumLines) @@ -237,7 +431,7 @@ func ReadDbmsOutput(ctx context.Context, w io.Writer, conn preparer) error { numLines = int64(len(lines)) if _, err = stmt.ExecContext(ctx, params...); err != nil { _ = bw.Flush() - return errors.Wrap(err, qry) + return errors.Errorf("%s: %w", qry, err) } for i := 0; i < int(numLines); i++ { _, _ = bw.WriteString(lines[i]) @@ -253,8 +447,8 @@ func ReadDbmsOutput(ctx context.Context, w io.Writer, conn preparer) error { } // ClientVersion returns the VersionInfo from the DB. -func ClientVersion(ex Execer) (VersionInfo, error) { - c, err := getConn(ex) +func ClientVersion(ctx context.Context, ex Execer) (VersionInfo, error) { + c, err := getConn(ctx, ex) if err != nil { return VersionInfo{}, err } @@ -262,8 +456,8 @@ func ClientVersion(ex Execer) (VersionInfo, error) { } // ServerVersion returns the VersionInfo of the client. 
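The Number type introduced here is a thin string wrapper with Scan/Value plus text and JSON marshaling, so NUMBER columns can be fetched at full precision and converted to a native type only where the caller actually needs one (the Int64, Float64 and Num converters above perform the matching driver-side conversions). A minimal usage sketch, assuming the driver registers under the name "godror" and that a table t with a NUMBER column n exists; the DSN is only a placeholder:

package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"log"
	"strconv"

	"github.com/godror/godror"
)

func main() {
	db, err := sql.Open("godror", "user/password@localhost/XEPDB1")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var n godror.Number
	// Scan keeps the full NUMBER precision as its string form.
	if err := db.QueryRowContext(context.Background(),
		"SELECT n FROM t WHERE ROWNUM = 1").Scan(&n); err != nil {
		log.Fatal(err)
	}
	f, _ := strconv.ParseFloat(string(n), 64) // convert only where a float64 is acceptable
	b, _ := json.Marshal(n)                   // MarshalJSON quotes the value, e.g. "123.45"
	fmt.Println(n, f, string(b))
}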
-func ServerVersion(ex Execer) (VersionInfo, error) { - c, err := getConn(ex) +func ServerVersion(ctx context.Context, ex Execer) (VersionInfo, error) { + c, err := getConn(ctx, ex) if err != nil { return VersionInfo{}, err } @@ -273,32 +467,37 @@ func ServerVersion(ex Execer) (VersionInfo, error) { // Conn is the interface for a connection, to be returned by DriverConn. type Conn interface { driver.Conn + driver.ConnBeginTx + driver.ConnPrepareContext driver.Pinger + Break() error - BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) - PrepareContext(ctx context.Context, query string) (driver.Stmt, error) Commit() error Rollback() error + ClientVersion() (VersionInfo, error) ServerVersion() (VersionInfo, error) GetObjectType(name string) (ObjectType, error) NewSubscription(string, func(Event)) (*Subscription, error) Startup(StartupMode) error Shutdown(ShutdownMode) error + NewData(baseType interface{}, SliceLen, BufSize int) ([]*Data, error) + + Timezone() *time.Location } -// DriverConn returns the *goracle.conn of the database/sql.Conn -func DriverConn(ex Execer) (Conn, error) { - return getConn(ex) +// DriverConn returns the *godror.conn of the database/sql.Conn +func DriverConn(ctx context.Context, ex Execer) (Conn, error) { + return getConn(ctx, ex) } var getConnMu sync.Mutex -func getConn(ex Execer) (*conn, error) { +func getConn(ctx context.Context, ex Execer) (*conn, error) { getConnMu.Lock() defer getConnMu.Unlock() var c interface{} - if _, err := ex.ExecContext(context.Background(), getConnection, sql.Out{Dest: &c}); err != nil { - return nil, errors.Wrap(err, "getConnection") + if _, err := ex.ExecContext(ctx, getConnection, sql.Out{Dest: &c}); err != nil { + return nil, errors.Errorf("getConnection: %w", err) } return c.(*conn), nil } @@ -307,3 +506,11 @@ func getConn(ex Execer) (*conn, error) { func WrapRows(ctx context.Context, q Querier, rset driver.Rows) (*sql.Rows, error) { return q.QueryContext(ctx, wrapResultset, rset) } + +func Timezone(ctx context.Context, ex Execer) (*time.Location, error) { + c, err := getConn(ctx, ex) + if err != nil { + return nil, err + } + return c.Timezone(), nil +} diff --git a/vendor/github.com/godror/godror/queue.go b/vendor/github.com/godror/godror/queue.go new file mode 100644 index 000000000000..70b4d018377c --- /dev/null +++ b/vendor/github.com/godror/godror/queue.go @@ -0,0 +1,639 @@ +// Copyright 2019 Tamás Gulácsi +// +// +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 + +package godror + +/* +#include +#include "dpiImpl.h" +*/ +import "C" +import ( + "context" + "sync" + "time" + "unsafe" + + errors "golang.org/x/xerrors" +) + +const MsgIDLength = 16 + +var zeroMsgID [MsgIDLength]byte + +// DefaultEnqOptions is the default set for NewQueue. +var DefaultEnqOptions = EnqOptions{ + Visibility: VisibleImmediate, + DeliveryMode: DeliverPersistent, +} + +// DefaultDeqOptions is the default set for NewQueue. +var DefaultDeqOptions = DeqOptions{ + Mode: DeqRemove, + DeliveryMode: DeliverPersistent, + Navigation: NavFirst, + Visibility: VisibleImmediate, + Wait: 30, +} + +// Queue represents an Oracle Advanced Queue. +type Queue struct { + PayloadObjectType ObjectType + props []*C.dpiMsgProps + name string + conn *conn + dpiQueue *C.dpiQueue + + mu sync.Mutex +} + +// NewQueue creates a new Queue. +// +// WARNING: the connection given to it must not be closed before the Queue is closed! +// So use an sql.Conn for it. 
+func NewQueue(ctx context.Context, execer Execer, name string, payloadObjectTypeName string) (*Queue, error) { + cx, err := DriverConn(ctx, execer) + if err != nil { + return nil, err + } + Q := Queue{conn: cx.(*conn), name: name} + + var payloadType *C.dpiObjectType + if payloadObjectTypeName != "" { + if Q.PayloadObjectType, err = Q.conn.GetObjectType(payloadObjectTypeName); err != nil { + return nil, err + } else { + payloadType = Q.PayloadObjectType.dpiObjectType + } + } + value := C.CString(name) + if C.dpiConn_newQueue(Q.conn.dpiConn, value, C.uint(len(name)), payloadType, &Q.dpiQueue) == C.DPI_FAILURE { + err = errors.Errorf("newQueue %q: %w", name, Q.conn.drv.getError()) + } + C.free(unsafe.Pointer(value)) + if err != nil { + cx.Close() + return nil, err + } + if err = Q.SetEnqOptions(DefaultEnqOptions); err != nil { + cx.Close() + Q.Close() + return nil, err + } + if err = Q.SetDeqOptions(DefaultDeqOptions); err != nil { + cx.Close() + Q.Close() + return nil, err + } + return &Q, nil +} + +// Close the queue. +func (Q *Queue) Close() error { + c, q := Q.conn, Q.dpiQueue + Q.conn, Q.dpiQueue = nil, nil + if q == nil { + return nil + } + if C.dpiQueue_release(q) == C.DPI_FAILURE { + return errors.Errorf("release: %w", c.getError()) + } + return nil +} + +// Name of the queue. +func (Q *Queue) Name() string { return Q.name } + +// EnqOptions returns the queue's enqueue options in effect. +func (Q *Queue) EnqOptions() (EnqOptions, error) { + var E EnqOptions + var opts *C.dpiEnqOptions + if C.dpiQueue_getEnqOptions(Q.dpiQueue, &opts) == C.DPI_FAILURE { + return E, errors.Errorf("getEnqOptions: %w", Q.conn.drv.getError()) + } + err := E.fromOra(Q.conn.drv, opts) + return E, err +} + +// DeqOptions returns the queue's dequeue options in effect. +func (Q *Queue) DeqOptions() (DeqOptions, error) { + var D DeqOptions + var opts *C.dpiDeqOptions + if C.dpiQueue_getDeqOptions(Q.dpiQueue, &opts) == C.DPI_FAILURE { + return D, errors.Errorf("getDeqOptions: %w", Q.conn.drv.getError()) + } + err := D.fromOra(Q.conn.drv, opts) + return D, err +} + +// Dequeues messages into the given slice. +// Returns the number of messages filled in the given slice. +func (Q *Queue) Dequeue(messages []Message) (int, error) { + Q.mu.Lock() + defer Q.mu.Unlock() + var props []*C.dpiMsgProps + if cap(Q.props) >= len(messages) { + props = Q.props[:len(messages)] + } else { + props = make([]*C.dpiMsgProps, len(messages)) + } + Q.props = props + + var ok C.int + num := C.uint(len(props)) + if num == 1 { + ok = C.dpiQueue_deqOne(Q.dpiQueue, &props[0]) + } else { + ok = C.dpiQueue_deqMany(Q.dpiQueue, &num, &props[0]) + } + if ok == C.DPI_FAILURE { + err := Q.conn.getError() + if code := err.(interface{ Code() int }).Code(); code == 3156 { + return 0, context.DeadlineExceeded + } + return 0, errors.Errorf("dequeue: %w", err) + } + var firstErr error + for i, p := range props[:int(num)] { + if err := messages[i].fromOra(Q.conn, p, &Q.PayloadObjectType); err != nil { + if firstErr == nil { + firstErr = err + } + } + C.dpiMsgProps_release(p) + } + return int(num), firstErr +} + +// Enqueue all the messages given. +// +// WARNING: calling this function in parallel on different connections acquired from the same pool may fail due to Oracle bug 29928074. Ensure that this function is not run in parallel, use standalone connections or connections from different pools, or make multiple calls to Queue.enqOne() instead. The function Queue.Dequeue() call is not affected. 
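Together with the Enqueue method that follows, the functions above make up the driver's Oracle Advanced Queuing API. A hedged usage sketch that respects the warnings in the comments (keep a dedicated *sql.Conn alive for the queue's lifetime, and serialize Enqueue calls); the queue name MY_QUEUE and the raw, non-object payload are assumptions, and the usual context, database/sql, fmt and github.com/godror/godror imports are implied:

func sendAndReceive(ctx context.Context, db *sql.DB) error {
	// Hold one connection for as long as the queue lives (see the NewQueue warning).
	cx, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer cx.Close()

	q, err := godror.NewQueue(ctx, cx, "MY_QUEUE", "") // "" means a RAW payload, no object type
	if err != nil {
		return err
	}
	defer q.Close()

	// Enqueue a single raw message; do not call Enqueue concurrently (Oracle bug 29928074).
	if err := q.Enqueue([]godror.Message{{Raw: []byte("hello")}}); err != nil {
		return err
	}

	// Dequeue into a reusable slice; n reports how many slots were actually filled.
	msgs := make([]godror.Message, 4)
	n, err := q.Dequeue(msgs)
	if err != nil {
		return err
	}
	for _, m := range msgs[:n] {
		fmt.Printf("%s %q\n", m.Enqueued, m.Raw)
	}
	return nil
}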
+func (Q *Queue) Enqueue(messages []Message) error { + Q.mu.Lock() + defer Q.mu.Unlock() + var props []*C.dpiMsgProps + if cap(Q.props) >= len(messages) { + props = Q.props[:len(messages)] + } else { + props = make([]*C.dpiMsgProps, len(messages)) + } + Q.props = props + defer func() { + for _, p := range props { + if p != nil { + C.dpiMsgProps_release(p) + } + } + }() + for i, m := range messages { + if C.dpiConn_newMsgProps(Q.conn.dpiConn, &props[i]) == C.DPI_FAILURE { + return errors.Errorf("newMsgProps: %w", Q.conn.getError()) + } + if err := m.toOra(Q.conn.drv, props[i]); err != nil { + return err + } + } + + var ok C.int + if len(messages) == 1 { + ok = C.dpiQueue_enqOne(Q.dpiQueue, props[0]) + } else { + ok = C.dpiQueue_enqMany(Q.dpiQueue, C.uint(len(props)), &props[0]) + } + if ok == C.DPI_FAILURE { + return errors.Errorf("enqueue %#v: %w", messages, Q.conn.getError()) + } + + return nil +} + +// Message is a message - either received or being sent. +type Message struct { + Correlation, ExceptionQ string + Enqueued time.Time + MsgID, OriginalMsgID [16]byte + Raw []byte + Delay, Expiration int32 + Priority, NumAttempts int32 + Object *Object + DeliveryMode DeliveryMode + State MessageState +} + +func (M *Message) toOra(d *drv, props *C.dpiMsgProps) error { + var firstErr error + OK := func(ok C.int, name string) { + if ok == C.DPI_SUCCESS { + return + } + if firstErr == nil { + firstErr = errors.Errorf("%s: %w", name, d.getError()) + } + } + if M.Correlation != "" { + value := C.CString(M.Correlation) + OK(C.dpiMsgProps_setCorrelation(props, value, C.uint(len(M.Correlation))), "setCorrelation") + C.free(unsafe.Pointer(value)) + } + + if M.Delay != 0 { + OK(C.dpiMsgProps_setDelay(props, C.int(M.Delay)), "setDelay") + } + + if M.ExceptionQ != "" { + value := C.CString(M.ExceptionQ) + OK(C.dpiMsgProps_setExceptionQ(props, value, C.uint(len(M.ExceptionQ))), "setExceptionQ") + C.free(unsafe.Pointer(value)) + } + + if M.Expiration != 0 { + OK(C.dpiMsgProps_setExpiration(props, C.int(M.Expiration)), "setExpiration") + } + + if M.OriginalMsgID != zeroMsgID { + OK(C.dpiMsgProps_setOriginalMsgId(props, (*C.char)(unsafe.Pointer(&M.OriginalMsgID[0])), MsgIDLength), "setMsgOriginalId") + } + + OK(C.dpiMsgProps_setPriority(props, C.int(M.Priority)), "setPriority") + + if M.Object == nil { + OK(C.dpiMsgProps_setPayloadBytes(props, (*C.char)(unsafe.Pointer(&M.Raw[0])), C.uint(len(M.Raw))), "setPayloadBytes") + } else { + OK(C.dpiMsgProps_setPayloadObject(props, M.Object.dpiObject), "setPayloadObject") + } + + return firstErr +} + +func (M *Message) fromOra(c *conn, props *C.dpiMsgProps, objType *ObjectType) error { + var firstErr error + OK := func(ok C.int, name string) bool { + if ok == C.DPI_SUCCESS { + return true + } + if firstErr == nil { + firstErr = errors.Errorf("%s: %w", name, c.getError()) + } + return false + } + M.NumAttempts = 0 + var cint C.int + if OK(C.dpiMsgProps_getNumAttempts(props, &cint), "getNumAttempts") { + M.NumAttempts = int32(cint) + } + var value *C.char + var length C.uint + M.Correlation = "" + if OK(C.dpiMsgProps_getCorrelation(props, &value, &length), "getCorrelation") { + M.Correlation = C.GoStringN(value, C.int(length)) + } + + M.Delay = 0 + if OK(C.dpiMsgProps_getDelay(props, &cint), "getDelay") { + M.Delay = int32(cint) + } + + M.DeliveryMode = DeliverPersistent + var mode C.dpiMessageDeliveryMode + if OK(C.dpiMsgProps_getDeliveryMode(props, &mode), "getDeliveryMode") { + M.DeliveryMode = DeliveryMode(mode) + } + + M.ExceptionQ = "" + if 
OK(C.dpiMsgProps_getExceptionQ(props, &value, &length), "getExceptionQ") { + M.ExceptionQ = C.GoStringN(value, C.int(length)) + } + + var ts C.dpiTimestamp + M.Enqueued = time.Time{} + if OK(C.dpiMsgProps_getEnqTime(props, &ts), "getEnqTime") { + tz := c.timeZone + if ts.tzHourOffset != 0 || ts.tzMinuteOffset != 0 { + tz = timeZoneFor(ts.tzHourOffset, ts.tzMinuteOffset) + } + if tz == nil { + tz = time.Local + } + M.Enqueued = time.Date( + int(ts.year), time.Month(ts.month), int(ts.day), + int(ts.hour), int(ts.minute), int(ts.second), int(ts.fsecond), + tz, + ) + } + + M.Expiration = 0 + if OK(C.dpiMsgProps_getExpiration(props, &cint), "getExpiration") { + M.Expiration = int32(cint) + } + + M.MsgID = zeroMsgID + if OK(C.dpiMsgProps_getMsgId(props, &value, &length), "getMsgId") { + n := C.int(length) + if n > MsgIDLength { + n = MsgIDLength + } + copy(M.MsgID[:], C.GoBytes(unsafe.Pointer(value), n)) + } + + M.OriginalMsgID = zeroMsgID + if OK(C.dpiMsgProps_getOriginalMsgId(props, &value, &length), "getMsgOriginalId") { + n := C.int(length) + if n > MsgIDLength { + n = MsgIDLength + } + copy(M.OriginalMsgID[:], C.GoBytes(unsafe.Pointer(value), n)) + } + + M.Priority = 0 + if OK(C.dpiMsgProps_getPriority(props, &cint), "getPriority") { + M.Priority = int32(cint) + } + + M.State = 0 + var state C.dpiMessageState + if OK(C.dpiMsgProps_getState(props, &state), "getState") { + M.State = MessageState(state) + } + + M.Raw = nil + M.Object = nil + var obj *C.dpiObject + if OK(C.dpiMsgProps_getPayload(props, &obj, &value, &length), "getPayload") { + if obj == nil { + M.Raw = C.GoBytes(unsafe.Pointer(value), C.int(length)) + } else { + if C.dpiObject_addRef(obj) == C.DPI_FAILURE { + return objType.getError() + } + M.Object = &Object{dpiObject: obj, ObjectType: *objType} + } + } + return nil +} + +// EnqOptions are the options used to enqueue a message. 
+type EnqOptions struct { + Transformation string + Visibility Visibility + DeliveryMode DeliveryMode +} + +func (E *EnqOptions) fromOra(d *drv, opts *C.dpiEnqOptions) error { + var firstErr error + OK := func(ok C.int, msg string) bool { + if ok == C.DPI_SUCCESS { + return true + } + if firstErr == nil { + firstErr = errors.Errorf("%s: %w", msg, d.getError()) + } + return false + } + + E.DeliveryMode = DeliverPersistent + + var value *C.char + var length C.uint + if OK(C.dpiEnqOptions_getTransformation(opts, &value, &length), "getTransformation") { + E.Transformation = C.GoStringN(value, C.int(length)) + } + + var vis C.dpiVisibility + if OK(C.dpiEnqOptions_getVisibility(opts, &vis), "getVisibility") { + E.Visibility = Visibility(vis) + } + + return firstErr +} + +func (E EnqOptions) toOra(d *drv, opts *C.dpiEnqOptions) error { + var firstErr error + OK := func(ok C.int, msg string) bool { + if ok == C.DPI_SUCCESS { + return true + } + if firstErr == nil { + firstErr = errors.Errorf("%s: %w", msg, d.getError()) + } + return false + } + + OK(C.dpiEnqOptions_setDeliveryMode(opts, C.dpiMessageDeliveryMode(E.DeliveryMode)), "setDeliveryMode") + cs := C.CString(E.Transformation) + OK(C.dpiEnqOptions_setTransformation(opts, cs, C.uint(len(E.Transformation))), "setTransformation") + C.free(unsafe.Pointer(cs)) + OK(C.dpiEnqOptions_setVisibility(opts, C.uint(E.Visibility)), "setVisibility") + return firstErr +} + +// SetEnqOptions sets all the enqueue options +func (Q *Queue) SetEnqOptions(E EnqOptions) error { + var opts *C.dpiEnqOptions + if C.dpiQueue_getEnqOptions(Q.dpiQueue, &opts) == C.DPI_FAILURE { + return errors.Errorf("getEnqOptions: %w", Q.conn.drv.getError()) + } + return E.toOra(Q.conn.drv, opts) +} + +// DeqOptions are the options used to dequeue a message. 
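EnqOptions and the DeqOptions declared next mirror the ODPI-C enqueue/dequeue option structs and are normally adjusted through Queue.SetEnqOptions and Queue.SetDeqOptions further below. A small illustrative sketch that shortens the default 30-second dequeue wait; q is assumed to be a *godror.Queue created as in the earlier sketch:

func dequeueWithShortWait(q *godror.Queue) ([]godror.Message, error) {
	opts, err := q.DeqOptions()
	if err != nil {
		return nil, err
	}
	opts.Wait = 10                   // seconds to block waiting for the next message
	opts.Navigation = godror.NavNext // retrieve the next matching message
	if err := q.SetDeqOptions(opts); err != nil {
		return nil, err
	}
	msgs := make([]godror.Message, 1)
	n, err := q.Dequeue(msgs)
	return msgs[:n], err
}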
+type DeqOptions struct { + Condition, Consumer, Correlation string + MsgID, Transformation string + Mode DeqMode + DeliveryMode DeliveryMode + Navigation DeqNavigation + Visibility Visibility + Wait uint32 +} + +func (D *DeqOptions) fromOra(d *drv, opts *C.dpiDeqOptions) error { + var firstErr error + OK := func(ok C.int, msg string) bool { + if ok == C.DPI_SUCCESS { + return true + } + if firstErr == nil { + firstErr = errors.Errorf("%s: %w", msg, d.getError()) + } + return false + } + + var value *C.char + var length C.uint + D.Transformation = "" + if OK(C.dpiDeqOptions_getTransformation(opts, &value, &length), "getTransformation") { + D.Transformation = C.GoStringN(value, C.int(length)) + } + D.Condition = "" + if OK(C.dpiDeqOptions_getCondition(opts, &value, &length), "getCondifion") { + D.Condition = C.GoStringN(value, C.int(length)) + } + D.Consumer = "" + if OK(C.dpiDeqOptions_getConsumerName(opts, &value, &length), "getConsumer") { + D.Consumer = C.GoStringN(value, C.int(length)) + } + D.Correlation = "" + if OK(C.dpiDeqOptions_getCorrelation(opts, &value, &length), "getCorrelation") { + D.Correlation = C.GoStringN(value, C.int(length)) + } + D.DeliveryMode = DeliverPersistent + var mode C.dpiDeqMode + if OK(C.dpiDeqOptions_getMode(opts, &mode), "getMode") { + D.Mode = DeqMode(mode) + } + D.MsgID = "" + if OK(C.dpiDeqOptions_getMsgId(opts, &value, &length), "getMsgId") { + D.MsgID = C.GoStringN(value, C.int(length)) + } + var nav C.dpiDeqNavigation + if OK(C.dpiDeqOptions_getNavigation(opts, &nav), "getNavigation") { + D.Navigation = DeqNavigation(nav) + } + var vis C.dpiVisibility + if OK(C.dpiDeqOptions_getVisibility(opts, &vis), "getVisibility") { + D.Visibility = Visibility(vis) + } + D.Wait = 0 + var u32 C.uint + if OK(C.dpiDeqOptions_getWait(opts, &u32), "getWait") { + D.Wait = uint32(u32) + } + return firstErr +} + +func (D DeqOptions) toOra(d *drv, opts *C.dpiDeqOptions) error { + var firstErr error + OK := func(ok C.int, msg string) bool { + if ok == C.DPI_SUCCESS { + return true + } + if firstErr == nil { + firstErr = errors.Errorf("%s: %w", msg, d.getError()) + } + return false + } + + cs := C.CString(D.Transformation) + OK(C.dpiDeqOptions_setTransformation(opts, cs, C.uint(len(D.Transformation))), "setTransformation") + C.free(unsafe.Pointer(cs)) + + cs = C.CString(D.Condition) + OK(C.dpiDeqOptions_setCondition(opts, cs, C.uint(len(D.Condition))), "setCondifion") + C.free(unsafe.Pointer(cs)) + + cs = C.CString(D.Consumer) + OK(C.dpiDeqOptions_setConsumerName(opts, cs, C.uint(len(D.Consumer))), "setConsumer") + C.free(unsafe.Pointer(cs)) + + cs = C.CString(D.Correlation) + OK(C.dpiDeqOptions_setCorrelation(opts, cs, C.uint(len(D.Correlation))), "setCorrelation") + C.free(unsafe.Pointer(cs)) + + OK(C.dpiDeqOptions_setDeliveryMode(opts, C.dpiMessageDeliveryMode(D.DeliveryMode)), "setDeliveryMode") + OK(C.dpiDeqOptions_setMode(opts, C.dpiDeqMode(D.Mode)), "setMode") + + cs = C.CString(D.MsgID) + OK(C.dpiDeqOptions_setMsgId(opts, cs, C.uint(len(D.MsgID))), "setMsgId") + C.free(unsafe.Pointer(cs)) + + OK(C.dpiDeqOptions_setNavigation(opts, C.dpiDeqNavigation(D.Navigation)), "setNavigation") + + OK(C.dpiDeqOptions_setVisibility(opts, C.dpiVisibility(D.Visibility)), "setVisibility") + + OK(C.dpiDeqOptions_setWait(opts, C.uint(D.Wait)), "setWait") + + return firstErr +} + +// SetDeqOptions sets all the dequeue options +func (Q *Queue) SetDeqOptions(D DeqOptions) error { + var opts *C.dpiDeqOptions + if C.dpiQueue_getDeqOptions(Q.dpiQueue, &opts) == C.DPI_FAILURE { + return 
errors.Errorf("getDeqOptions: %w", Q.conn.drv.getError()) + } + return D.toOra(Q.conn.drv, opts) +} + +// SetDeqCorrelation is a convenience function setting the Correlation DeqOption +func (Q *Queue) SetDeqCorrelation(correlation string) error { + var opts *C.dpiDeqOptions + if C.dpiQueue_getDeqOptions(Q.dpiQueue, &opts) == C.DPI_FAILURE { + return errors.Errorf("getDeqOptions: %w", Q.conn.drv.getError()) + } + cs := C.CString(correlation) + ok := C.dpiDeqOptions_setCorrelation(opts, cs, C.uint(len(correlation))) == C.DPI_FAILURE + C.free(unsafe.Pointer(cs)) + if !ok { + return errors.Errorf("setCorrelation: %w", Q.conn.drv.getError()) + } + return nil +} + +const ( + NoWait = uint32(0) + WaitForever = uint32(1<<31 - 1) +) + +// MessageState constants representing message's state. +type MessageState uint32 + +const ( + // MsgStateReady says that "The message is ready to be processed". + MsgStateReady = MessageState(C.DPI_MSG_STATE_READY) + // MsgStateWaiting says that "The message is waiting for the delay time to expire". + MsgStateWaiting = MessageState(C.DPI_MSG_STATE_WAITING) + // MsgStateProcessed says that "The message has already been processed and is retained". + MsgStateProcessed = MessageState(C.DPI_MSG_STATE_PROCESSED) + // MsgStateExpired says that "The message has been moved to the exception queue". + MsgStateExpired = MessageState(C.DPI_MSG_STATE_EXPIRED) +) + +// DeliveryMode constants for delivery modes. +type DeliveryMode uint32 + +const ( + // DeliverPersistent is to Dequeue only persistent messages from the queue. This is the default mode. + DeliverPersistent = DeliveryMode(C.DPI_MODE_MSG_PERSISTENT) + // DeliverBuffered is to Dequeue only buffered messages from the queue. + DeliverBuffered = DeliveryMode(C.DPI_MODE_MSG_BUFFERED) + // DeliverPersistentOrBuffered is to Dequeue both persistent and buffered messages from the queue. + DeliverPersistentOrBuffered = DeliveryMode(C.DPI_MODE_MSG_PERSISTENT_OR_BUFFERED) +) + +// Visibility constants represents visibility. +type Visibility uint32 + +const ( + // VisibleImmediate means that "The message is not part of the current transaction but constitutes a transaction of its own". + VisibleImmediate = Visibility(C.DPI_VISIBILITY_IMMEDIATE) + // VisibleOnCommit means that "The message is part of the current transaction. This is the default value". + VisibleOnCommit = Visibility(C.DPI_VISIBILITY_ON_COMMIT) +) + +// DeqMode constants for dequeue modes. +type DeqMode uint32 + +const ( + // DeqRemove reads the message and updates or deletes it. This is the default mode. Note that the message may be retained in the queue table based on retention properties. + DeqRemove = DeqMode(C.DPI_MODE_DEQ_REMOVE) + // DeqBrows reads the message without acquiring a lock on the message (equivalent to a SELECT statement). + DeqBrowse = DeqMode(C.DPI_MODE_DEQ_BROWSE) + // DeqLocked reads the message and obtain a write lock on the message (equivalent to a SELECT FOR UPDATE statement). + DeqLocked = DeqMode(C.DPI_MODE_DEQ_LOCKED) + // DeqPeek confirms receipt of the message but does not deliver the actual message content. + DeqPeek = DeqMode(C.DPI_MODE_DEQ_REMOVE_NO_DATA) +) + +// DeqNavigation constants for navigation. +type DeqNavigation uint32 + +const ( + // NavFirst retrieves the first available message that matches the search criteria. This resets the position to the beginning of the queue. 
+ NavFirst = DeqNavigation(C.DPI_DEQ_NAV_FIRST_MSG) + // NavNext skips the remainder of the current transaction group (if any) and retrieves the first message of the next transaction group. This option can only be used if message grouping is enabled for the queue. + NavNextTran = DeqNavigation(C.DPI_DEQ_NAV_NEXT_TRANSACTION) + // NavNext Retrieves the next available message that matches the search criteria. This is the default method. + NavNext = DeqNavigation(C.DPI_DEQ_NAV_NEXT_MSG) +) diff --git a/vendor/gopkg.in/goracle.v2/rows.go b/vendor/github.com/godror/godror/rows.go similarity index 92% rename from vendor/gopkg.in/goracle.v2/rows.go rename to vendor/github.com/godror/godror/rows.go index 6b54d821f347..e870338d7733 100644 --- a/vendor/gopkg.in/goracle.v2/rows.go +++ b/vendor/github.com/godror/godror/rows.go @@ -1,19 +1,9 @@ // Copyright 2017 Tamás Gulácsi // // -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror /* #include "dpiImpl.h" @@ -30,7 +20,7 @@ import ( "time" "unsafe" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" ) var _ = driver.Rows((*rows)(nil)) @@ -72,17 +62,16 @@ func (r *rows) Close() error { if r == nil { return nil } - r.columns = nil - r.data = nil - for _, v := range r.vars { - C.dpiVar_release(v) + vars, st := r.vars, r.statement + r.columns, r.vars, r.data, r.statement, r.nextRs = nil, nil, nil, nil, nil + for _, v := range vars[:cap(vars)] { + if v != nil { + C.dpiVar_release(v) + } } - r.vars = nil - if r.statement == nil { + if st == nil { return nil } - st := r.statement - r.statement = nil st.Lock() defer st.Unlock() @@ -91,7 +80,7 @@ func (r *rows) Close() error { } var err error if C.dpiStmt_release(st.dpiStmt) == C.DPI_FAILURE { - err = errors.Wrap(r.getError(), "rows/dpiStmt_release") + err = errors.Errorf("rows/dpiStmt_release: %w", r.getError()) } return err } @@ -266,6 +255,8 @@ func (r *rows) ColumnTypeScanType(index int) reflect.Type { // size as the Columns() are wide. // // Next should return io.EOF when there are no more rows. +// +// As with all Objects, you MUST call Close on the returned Object instances when they're not needed anymore! 
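The note added to Next is easy to overlook: object-typed columns are returned as live ODPI-C handles, so the caller must release them. A hedged sketch of draining such a result set; it assumes Object carries the Close method the comment refers to, plus the usual context, database/sql and github.com/godror/godror imports:

func drainObjectColumn(ctx context.Context, db *sql.DB, qry string) error {
	rows, err := db.QueryContext(ctx, qry)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var v interface{}
		if err := rows.Scan(&v); err != nil {
			return err
		}
		if o, ok := v.(*godror.Object); ok {
			// ... read attributes from o here ...
			o.Close() // release the underlying dpiObject reference
		}
	}
	return rows.Err()
}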
func (r *rows) Next(dest []driver.Value) error { if r.err != nil { return r.err @@ -280,7 +271,7 @@ func (r *rows) Next(dest []driver.Value) error { if r.fetched == 0 { var moreRows C.int if C.dpiStmt_fetchRows(r.dpiStmt, C.uint32_t(r.statement.FetchRowCount()), &r.bufferRowIndex, &r.fetched, &moreRows) == C.DPI_FAILURE { - return errors.Wrap(r.getError(), "Next") + return errors.Errorf("Next: %w", r.getError()) } if Log != nil { Log("msg", "fetched", "bri", r.bufferRowIndex, "fetched", r.fetched, "moreRows", moreRows, "len(data)", len(r.data), "cols", len(r.columns)) @@ -296,7 +287,7 @@ func (r *rows) Next(dest []driver.Value) error { var n C.uint32_t var data *C.dpiData if C.dpiVar_getReturnedData(r.vars[i], 0, &n, &data) == C.DPI_FAILURE { - return errors.Wrapf(r.getError(), "getReturnedData[%d]", i) + return errors.Errorf("getReturnedData[%d]: %w", i, r.getError()) } r.data[i] = (*[maxArraySize]C.dpiData)(unsafe.Pointer(data))[:n:n] //fmt.Printf("data %d=%+v\n%+v\n", n, data, r.data[i][0]) @@ -313,7 +304,7 @@ func (r *rows) Next(dest []driver.Value) error { typ := col.OracleType d := &r.data[i][r.bufferRowIndex] isNull := d.isNull == 1 - if Log != nil { + if false && Log != nil { Log("msg", "Next", "i", i, "row", r.bufferRowIndex, "typ", typ, "null", isNull) //, "data", fmt.Sprintf("%+v", d), "typ", typ) } @@ -352,7 +343,12 @@ func (r *rows) Next(dest []driver.Value) error { dest[i] = printFloat(float64(C.dpiData_getDouble(d))) default: b := C.dpiData_getBytes(d) - dest[i] = Number(C.GoStringN(b.ptr, C.int(b.length))) + s := C.GoStringN(b.ptr, C.int(b.length)) + if r.NumberAsString() { + dest[i] = s + } else { + dest[i] = Number(s) + } if Log != nil { Log("msg", "b", "i", i, "ptr", b.ptr, "length", b.length, "typ", col.NativeType, "int64", C.dpiData_getInt64(d), "dest", dest[i]) } @@ -436,7 +432,7 @@ func (r *rows) Next(dest []driver.Value) error { C.DPI_NATIVE_TYPE_LOB: isClob := typ == C.DPI_ORACLE_TYPE_CLOB || typ == C.DPI_ORACLE_TYPE_NCLOB if isNull { - if isClob && r.ClobAsString() { + if isClob && (r.ClobAsString() || !r.LobAsReader()) { dest[i] = "" } else { dest[i] = nil @@ -444,9 +440,11 @@ func (r *rows) Next(dest []driver.Value) error { continue } rdr := &dpiLobReader{dpiLob: C.dpiData_getLOB(d), conn: r.conn, IsClob: isClob} - if isClob && r.ClobAsString() { + if isClob && (r.ClobAsString() || !r.LobAsReader()) { sb := stringBuilders.Get() - if _, err := io.Copy(sb, rdr); err != nil { + _, err := io.Copy(sb, rdr) + C.dpiLob_close(rdr.dpiLob) + if err != nil { stringBuilders.Put(sb) return err } @@ -466,7 +464,7 @@ func (r *rows) Next(dest []driver.Value) error { } var colCount C.uint32_t if C.dpiStmt_getNumQueryColumns(st.dpiStmt, &colCount) == C.DPI_FAILURE { - return errors.Wrap(r.getError(), "getNumQueryColumns") + return errors.Errorf("getNumQueryColumns: %w", r.getError()) } st.Lock() r2, err := st.openRows(int(colCount)) @@ -488,7 +486,7 @@ func (r *rows) Next(dest []driver.Value) error { dest[i] = nil continue } - o, err := wrapObject(r.drv, col.ObjectType, C.dpiData_getObject(d)) + o, err := wrapObject(r.conn, col.ObjectType, C.dpiData_getObject(d)) if err != nil { return err } @@ -503,6 +501,10 @@ func (r *rows) Next(dest []driver.Value) error { r.bufferRowIndex++ r.fetched-- + if Log != nil { + Log("msg", "scanned", "row", r.bufferRowIndex, "dest", dest) + } + return nil } @@ -562,7 +564,7 @@ func (r *rows) getImplicitResult() { r.origSt = st } if C.dpiStmt_getImplicitResult(st.dpiStmt, &r.nextRs) == C.DPI_FAILURE { - r.nextRsErr = errors.Wrap(r.getError(), 
"getImplicitResult") + r.nextRsErr = errors.Errorf("getImplicitResult: %w", r.getError()) } } func (r *rows) HasNextResultSet() bool { @@ -586,14 +588,14 @@ func (r *rows) NextResultSet() error { return r.nextRsErr } if r.nextRs == nil { - return errors.Wrap(io.EOF, "getImplicitResult") + return errors.Errorf("getImplicitResult: %w", io.EOF) } } st := &statement{conn: r.conn, dpiStmt: r.nextRs} var n C.uint32_t if C.dpiStmt_getNumQueryColumns(st.dpiStmt, &n) == C.DPI_FAILURE { - return errors.Wrapf(io.EOF, "getNumQueryColumns: %v", r.getError()) + return errors.Errorf("getNumQueryColumns: %w: %w", r.getError(), io.EOF) } // keep the originam statement for the succeeding NextResultSet calls. nr, err := st.openRows(int(n)) diff --git a/vendor/github.com/godror/godror/sid/sid.go b/vendor/github.com/godror/godror/sid/sid.go new file mode 100644 index 000000000000..b1fb0953142b --- /dev/null +++ b/vendor/github.com/godror/godror/sid/sid.go @@ -0,0 +1,531 @@ +// Copyright 2019 Tamás Gulácsi +// +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LIENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR ONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sid + +import ( + "fmt" + "io" + "strconv" + "strings" + "unicode" + + errors "golang.org/x/xerrors" +) + +// Statement can Parse and Print Oracle connection descriptor (DESRIPTION=(ADDRESS=...)) format. +// It can be used to parse or build a SID. 
+// +// See https://docs.oracle.com/cd/B28359_01/network.111/b28317/tnsnames.htm#NETRF271 +type Statement struct { + Name, Value string + Statements []Statement +} + +func (cs Statement) String() string { + var buf strings.Builder + cs.Print(&buf, "\n", " ") + return buf.String() +} +func (cs Statement) Print(w io.Writer, prefix, indent string) { + fmt.Fprintf(w, "%s(%s=%s", prefix, cs.Name, cs.Value) + if cs.Value == "" { + for _, s := range cs.Statements { + s.Print(w, prefix+indent, indent) + } + } + io.WriteString(w, ")") +} + +func ParseConnDescription(s string) (Statement, error) { + var cs Statement + _, err := cs.Parse(s) + return cs, err +} +func (cs *Statement) Parse(s string) (string, error) { + ltrim := func(s string) string { return strings.TrimLeftFunc(s, unicode.IsSpace) } + s = ltrim(s) + if s == "" || s[0] != '(' { + return s, nil + } + i := strings.IndexByte(s[1:], '=') + 1 + if i <= 0 || strings.Contains(s[1:i], ")") { + return s, errors.Errorf("no = after ( in %q", s) + } + cs.Name = s[1:i] + s = ltrim(s[i+1:]) + + if s == "" { + return s, nil + } + if s[0] != '(' { + if i = strings.IndexByte(s, ')'); i < 0 || strings.Contains(s[1:i], "(") { + return s, errors.Errorf("no ) after = in %q", s) + } + cs.Value = s[:i] + s = ltrim(s[i+1:]) + return s, nil + } + + for s != "" && s[0] == '(' { + var sub Statement + var err error + if s, err = sub.Parse(s); err != nil { + return s, err + } + if sub.Name == "" { + break + } + cs.Statements = append(cs.Statements, sub) + } + s = ltrim(s) + if s != "" && s[0] == ')' { + s = ltrim(s[1:]) + } + return s, nil +} + +type DescriptionList struct { + Options ListOptions + Descriptions []Description + TypeOfService string +} + +func (cd DescriptionList) Print(w io.Writer, prefix, indent string) { + io.WriteString(w, prefix+"(DESCRIPTION_LIST=") + cd.Options.Print(w, prefix, indent) + for _, d := range cd.Descriptions { + d.Print(w, prefix, indent) + } + if cd.TypeOfService != "" { + fmt.Fprintf(w, "%s(TYPE_OF_SERVICE=%s)", prefix, cd.TypeOfService) + } + io.WriteString(w, ")") +} +func (cd *DescriptionList) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "DESCRIPTION_LIST" { + ss = ss[0].Statements + } + cd.TypeOfService = "" + if err := cd.Options.Parse(ss); err != nil { + return err + } + cd.Descriptions = cd.Descriptions[:0] + for _, s := range ss { + switch s.Name { + case "DESCRIPTION": + var d Description + if err := d.Parse(s.Statements); err != nil { + return err + } + cd.Descriptions = append(cd.Descriptions, d) + case "TYPE_OF_SERVICE": + cd.TypeOfService = s.Value + } + } + return cd.Options.Parse(ss) +} + +type Description struct { + TCPKeepAlive bool + SDU int + Bufs BufSizes + Options ListOptions + Addresses []Address + AddressList AddressList + ConnectData ConnectData + TypeOfService string + Security Security +} + +func (d Description) Print(w io.Writer, prefix, indent string) { + if d.IsZero() { + return + } + io.WriteString(w, prefix+"(DESCRIPTION=") + if d.TCPKeepAlive { + io.WriteString(w, prefix+"(ENABLE=broken)") + } + if d.SDU != 0 { + fmt.Fprintf(w, prefix+"(SDU=%d)", d.SDU) + } + d.Bufs.Print(w, prefix, indent) + d.Options.Print(w, prefix, indent) + for _, a := range d.Addresses { + a.Print(w, prefix, indent) + } + d.AddressList.Print(w, prefix, indent) + d.ConnectData.Print(w, prefix, indent) + if d.TypeOfService != "" { + fmt.Fprintf(w, "%s(TYPE_OF_SERVICE=%s)", prefix, d.TypeOfService) + } + d.Security.Print(w, prefix, indent) + io.WriteString(w, ")") +} +func (d Description) IsZero() bool { + 
return !d.TCPKeepAlive && d.SDU == 0 && d.Bufs.IsZero() && d.Options.IsZero() && len(d.Addresses) == 0 && d.AddressList.IsZero() && d.ConnectData.IsZero() && d.TypeOfService == "" && d.Security.IsZero() +} +func (d *Description) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "DESCRIPTION" { + ss = ss[0].Statements + } + d.TCPKeepAlive, d.SDU = false, 0 + for _, s := range ss { + switch s.Name { + case "ADDRESS": + var a Address + if err := a.Parse(s.Statements); err != nil { + return err + } + if !a.IsZero() { + d.Addresses = append(d.Addresses, a) + } + case "ADDRESS_LIST": + if err := d.AddressList.Parse(s.Statements); err != nil { + return err + } + case "CONNECT_DATA": + if err := d.ConnectData.Parse(s.Statements); err != nil { + return err + } + case "ENABLE": + d.TCPKeepAlive = d.TCPKeepAlive || s.Value == "broken" + case "SDU": + var err error + if d.SDU, err = strconv.Atoi(s.Value); err != nil { + return err + } + case "SECURITY": + if err := d.Security.Parse(s.Statements); err != nil { + return err + } + } + } + if err := d.Bufs.Parse(ss); err != nil { + return err + } + if err := d.Options.Parse(ss); err != nil { + return err + } + return nil +} + +type Address struct { + Protocol, Host string + Port int + BufSizes +} + +func (a Address) Print(w io.Writer, prefix, indent string) { + if a.IsZero() { + return + } + io.WriteString(w, prefix+"(ADDRESS=") + if a.Protocol != "" { + fmt.Fprintf(w, "%s(PROTOCOL=%s)", prefix, a.Protocol) + } + if a.Host != "" { + fmt.Fprintf(w, "%s(HOST=%s)", prefix, a.Host) + } + if a.Port != 0 { + fmt.Fprintf(w, "%s(PORT=%d)", prefix, a.Port) + } + a.BufSizes.Print(w, prefix, indent) + io.WriteString(w, ")") +} +func (a Address) IsZero() bool { + return a.Protocol == "" && a.Host == "" && a.Port == 0 && a.BufSizes.IsZero() +} +func (a *Address) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "ADDRESS" { + ss = ss[0].Statements + } + for _, s := range ss { + switch s.Name { + case "PROTOCOL": + a.Protocol = s.Value + case "HOST": + a.Host = s.Value + case "PORT": + i, err := strconv.Atoi(s.Value) + if err != nil { + return err + } + a.Port = i + } + } + return a.BufSizes.Parse(ss) +} + +type BufSizes struct { + RecvBufSize, SendBufSize int +} + +func (bs BufSizes) Print(w io.Writer, prefix, indent string) { + if bs.RecvBufSize > 0 { + fmt.Fprintf(w, "%s(RECV_BUF_SIZE=%d)", prefix, bs.RecvBufSize) + } + if bs.SendBufSize > 0 { + fmt.Fprintf(w, "%s(SEND_BUF_SIZE=%d)", prefix, bs.SendBufSize) + } +} +func (bs BufSizes) IsZero() bool { return bs.RecvBufSize > 0 && bs.SendBufSize > 0 } +func (bs *BufSizes) Parse(ss []Statement) error { + for _, s := range ss { + switch s.Name { + case "RECV_BUF_SIZE", "SEND_BUF_SIZE": + i, err := strconv.Atoi(s.Value) + if err != nil { + return err + } + if s.Name == "RECV_BUF_SIZE" { + bs.RecvBufSize = i + } else { + bs.SendBufSize = i + } + } + } + return nil +} + +type ListOptions struct { + Failover, LoadBalance, SourceRoute bool +} + +func (lo ListOptions) Print(w io.Writer, prefix, indent string) { + if lo.Failover { + io.WriteString(w, prefix+"(FAILOVER=on)") + } + if lo.LoadBalance { + io.WriteString(w, prefix+"(LOAD_BALANE=on)") + } + if lo.SourceRoute { + io.WriteString(w, prefix+"(SOURE_ROUTE=on)") + } +} +func (lo ListOptions) IsZero() bool { return !lo.Failover && !lo.LoadBalance && !lo.SourceRoute } +func s2b(s string) bool { return s == "on" || s == "yes" || s == "true" } +func (lo *ListOptions) Parse(ss []Statement) error { + *lo = ListOptions{} + for _, s := range ss { + 
switch s.Name { + case "FAILOVER": + lo.Failover = s2b(s.Value) + case "LOAD_BALANE": + lo.LoadBalance = s2b(s.Value) + case "SourceRoute": + lo.SourceRoute = s2b(s.Value) + } + } + return nil +} + +type AddressList struct { + Options ListOptions + Addresses []Address +} + +func (al AddressList) Print(w io.Writer, prefix, indent string) { + if al.IsZero() { + return + } + io.WriteString(w, prefix+"(ADDRESS_LIST=") + al.Options.Print(w, prefix, indent) + for _, a := range al.Addresses { + a.Print(w, prefix, indent) + } + io.WriteString(w, ")") +} +func (al AddressList) IsZero() bool { return al.Options.IsZero() && len(al.Addresses) == 0 } +func (al *AddressList) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "ADDRESS_LIST" { + ss = ss[0].Statements + } + if err := al.Options.Parse(ss); err != nil { + return err + } + al.Addresses = al.Addresses[:0] + for _, s := range ss { + switch s.Name { + case "ADDRESS": + var a Address + if err := a.Parse(s.Statements); err != nil { + return err + } + if !a.IsZero() { + al.Addresses = append(al.Addresses, a) + } + } + } + return nil +} + +type ConnectData struct { + FailoverMode FailoverMode + ServiceName, SID string + GlobalName, InstanceName, RDBDatabase string + Hs bool + Server ServiceHandler +} + +func (cd ConnectData) Print(w io.Writer, prefix, indent string) { + if cd.IsZero() { + return + } + io.WriteString(w, prefix+"(CONNECT_DATA=") + cd.FailoverMode.Print(w, prefix, indent) + if cd.GlobalName != "" { + fmt.Fprintf(w, "%s(GLOBAL_NAME=%s)", prefix, cd.GlobalName) + } + if cd.InstanceName != "" { + fmt.Fprintf(w, "%s(INSTANCE_NAME=%s)", prefix, cd.InstanceName) + } + if cd.RDBDatabase != "" { + fmt.Fprintf(w, "%s(RDB_DATABASE=%s)", prefix, cd.RDBDatabase) + } + if cd.ServiceName != "" { + fmt.Fprintf(w, "%s(SERVICE_NAME=%s)", prefix, cd.ServiceName) + } + if cd.SID != "" { + fmt.Fprintf(w, "%s(SID=%s)", prefix, cd.SID) + } + if cd.Hs { + io.WriteString(w, prefix+"(HS=ok)") + } + if cd.Server != "" { + fmt.Fprintf(w, "%s(SERVER=%s)", prefix, cd.Server) + } + io.WriteString(w, ")") +} +func (cd ConnectData) IsZero() bool { + return cd.FailoverMode.IsZero() && cd.GlobalName == "" && cd.InstanceName == "" && cd.RDBDatabase == "" && cd.ServiceName == "" && cd.SID == "" && !cd.Hs && cd.Server == "" +} +func (cd *ConnectData) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "CONNECT_DATA" { + ss = ss[0].Statements + } + cd.Hs = false + for _, s := range ss { + switch s.Name { + case "FAILOVER_MODE": + if err := cd.FailoverMode.Parse(s.Statements); err != nil { + return err + } + case "GLOBAL_NAME": + cd.GlobalName = s.Value + case "INSTANCE_NAME": + cd.InstanceName = s.Value + case "RDB_DATABASE": + cd.RDBDatabase = s.Value + case "SERVICE_NAME": + cd.ServiceName = s.Value + case "SID": + cd.SID = s.Value + case "HS": + cd.Hs = s.Value == "ok" + case "SERVER": + cd.Server = ServiceHandler(s.Value) + } + } + return nil +} + +type FailoverMode struct { + Backup, Type, Method string + Retry, Delay int +} + +func (fo FailoverMode) Print(w io.Writer, prefix, indent string) { + if fo.IsZero() { + return + } + io.WriteString(w, prefix+"(FAILOVER_MODE=") + if fo.Backup != "" { + fmt.Fprintf(w, "%s(BACKUP=%s)", prefix, fo.Backup) + } + if fo.Type != "" { + fmt.Fprintf(w, "%s(TYPE=%s)", prefix, fo.Type) + } + if fo.Method != "" { + fmt.Fprintf(w, "%s(METHOD=%s)", prefix, fo.Method) + } + if fo.Retry != 0 { + fmt.Fprintf(w, "%s(RETRY=%d)", prefix, fo.Retry) + } + if fo.Delay != 0 { + fmt.Fprintf(w, "%s(DELAY=%d)", prefix, 
fo.Delay) + } + io.WriteString(w, ")") +} +func (fo FailoverMode) IsZero() bool { + return fo.Backup == "" && fo.Type == "" && fo.Method == "" && fo.Retry == 0 && fo.Delay == 0 +} +func (fo *FailoverMode) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "FAILOVER_MODE" { + ss = ss[0].Statements + } + for _, s := range ss { + switch s.Name { + case "BACKUP": + fo.Backup = s.Value + case "TYPE": + fo.Type = s.Value + case "METHOD": + fo.Method = s.Value + case "RETRY", "DELAY": + i, err := strconv.Atoi(s.Value) + if err != nil { + return err + } + if s.Name == "RETRY" { + fo.Retry = i + } else { + fo.Delay = i + } + } + } + return nil +} + +type ServiceHandler string + +const ( + Dedicated = ServiceHandler("dedicated") + Shared = ServiceHandler("shared") + Pooled = ServiceHandler("pooled") +) + +type Security struct { + SSLServerCertDN string +} + +func (sec Security) Print(w io.Writer, prefix, indent string) { + if sec.SSLServerCertDN != "" { + fmt.Fprintf(w, "%s(SECURITY=(SSL_SERVER_CERT_DN=%s))", prefix, sec.SSLServerCertDN) + } +} +func (sec Security) IsZero() bool { return sec.SSLServerCertDN == "" } +func (sec *Security) Parse(ss []Statement) error { + if len(ss) == 1 && ss[0].Name == "SECURITY" { + ss = ss[0].Statements + } + sec.SSLServerCertDN = "" + for _, s := range ss { + if s.Name == "SSL_SERVER_CERT_DN" { + sec.SSLServerCertDN = s.Value + } + } + return nil +} diff --git a/vendor/gopkg.in/goracle.v2/stmt.go b/vendor/github.com/godror/godror/stmt.go similarity index 87% rename from vendor/gopkg.in/goracle.v2/stmt.go rename to vendor/github.com/godror/godror/stmt.go index 6e8e6215365c..3a2f6d291a17 100644 --- a/vendor/gopkg.in/goracle.v2/stmt.go +++ b/vendor/github.com/godror/godror/stmt.go @@ -1,25 +1,24 @@ // Copyright 2017 Tamás Gulácsi // // -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
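The new sid package above is a small recursive-descent parser and pretty-printer for Oracle's (DESCRIPTION=(ADDRESS=...)) connect-descriptor syntax. A brief round-trip sketch; the descriptor text, host and service names are only examples:

package main

import (
	"fmt"
	"log"

	"github.com/godror/godror/sid"
)

func main() {
	const desc = "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=db.example.com)(PORT=1521))" +
		"(CONNECT_DATA=(SERVICE_NAME=orclpdb1)))"
	st, err := sid.ParseConnDescription(desc)
	if err != nil {
		log.Fatal(err)
	}
	// Statement.String pretty-prints the parsed tree, one node per indented line.
	fmt.Println(st)
}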
+// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror /* #include #include "dpiImpl.h" const int sizeof_dpiData = sizeof(void); + +void godror_setFromString(dpiVar *dv, uint32_t pos, const _GoString_ value) { + uint32_t length; + length = _GoStringLen(value); + if( length == 0 ) { + return; + } + dpiVar_setFromBytes(dv, pos, _GoStringPtr(value), length); +} */ import "C" import ( @@ -30,11 +29,12 @@ import ( "io" "reflect" "strconv" + "strings" "sync" "time" "unsafe" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" ) type stmtOptions struct { @@ -45,6 +45,7 @@ type stmtOptions struct { plSQLArrays bool lobAsReader bool magicTypeConversion bool + numberAsString bool } func (o stmtOptions) ExecMode() C.dpiExecMode { @@ -55,8 +56,10 @@ func (o stmtOptions) ExecMode() C.dpiExecMode { } func (o stmtOptions) ArraySize() int { - if o.arraySize <= 0 || o.arraySize > 32<<10 { + if o.arraySize <= 0 { return DefaultArraySize + } else if o.arraySize > 1<<16 { + return 1 << 16 } return o.arraySize } @@ -72,6 +75,7 @@ func (o stmtOptions) ClobAsString() bool { return !o.lobAsReader } func (o stmtOptions) LobAsReader() bool { return o.lobAsReader } func (o stmtOptions) MagicTypeConversion() bool { return o.magicTypeConversion } +func (o stmtOptions) NumberAsString() bool { return o.numberAsString } // Option holds statement options. type Option func(*stmtOptions) @@ -135,6 +139,11 @@ func MagicTypeConversion() Option { return func(o *stmtOptions) { o.magicTypeConversion = true } } +// NumberAsString returns an option to return numbers as string, not Number. +func NumberAsString() Option { + return func(o *stmtOptions) { o.numberAsString = true } +} + // CallTimeout sets the round-trip timeout (OCI_ATTR_CALL_TIMEOUT). // // See https://docs.oracle.com/en/database/oracle/oracle-database/18/lnoci/handle-and-descriptor-attributes.html#GUID-D8EE68EB-7E38-4068-B06E-DF5686379E5E @@ -178,36 +187,16 @@ func (st *statement) Close() error { st.Lock() defer st.Unlock() - return st.close() + return st.close(false) } -func (st *statement) close() error { +func (st *statement) close(keepDpiStmt bool) error { if st == nil { return nil } - dpiStmt := st.dpiStmt - c := st.conn - st.cleanup() - var si C.dpiStmtInfo - if dpiStmt != nil && - C.dpiStmt_getInfo(dpiStmt, &si) != C.DPI_FAILURE && // this is just to check the validity of dpiStmt, to avoid SIGSEGV - C.dpiStmt_release(dpiStmt) != C.DPI_FAILURE { - return nil - } - if c == nil { - return driver.ErrBadConn - } - return errors.Wrap(c.getError(), "statement/dpiStmt_release") -} - -func (st *statement) cleanup() error { - if st == nil { - return nil - } - - for _, v := range st.vars { - C.dpiVar_release(v) - } + c, dpiStmt, vars := st.conn, st.dpiStmt, st.vars + st.isSlice = nil + st.query = "" st.data = nil st.vars = nil st.varInfos = nil @@ -215,13 +204,29 @@ func (st *statement) cleanup() error { st.dests = nil st.columns = nil st.dpiStmt = nil - c := st.conn st.conn = nil + for _, v := range vars[:cap(vars)] { + if v != nil { + C.dpiVar_release(v) + } + } + + if !keepDpiStmt { + var si C.dpiStmtInfo + if dpiStmt != nil && + C.dpiStmt_getInfo(dpiStmt, &si) != C.DPI_FAILURE && // this is just to check the validity of dpiStmt, to avoid SIGSEGV + C.dpiStmt_release(dpiStmt) != C.DPI_FAILURE { + return nil + } + } if c == nil { return driver.ErrBadConn } - return errors.Wrap(c.getError(), "statement/dpiStmt_release") + if err := c.getError(); err != nil { + return errors.Errorf("statement/dpiStmt_release: %w", err) + } + return nil } 
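NumberAsString joins the existing per-statement options; it makes numeric columns scan as plain strings rather than godror.Number (see the matching branch in rows.Next above). A hedged sketch, assuming that, as with the driver's other statement options, the Option value is passed as an extra query argument, and that a products table with a numeric price column exists; the usual context, database/sql and github.com/godror/godror imports are implied:

func priceStrings(ctx context.Context, db *sql.DB) ([]string, error) {
	rows, err := db.QueryContext(ctx,
		"SELECT price FROM products",
		godror.NumberAsString(), // numeric columns arrive as string instead of godror.Number
	)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var prices []string
	for rows.Next() {
		var s string
		if err := rows.Scan(&s); err != nil {
			return nil, err
		}
		prices = append(prices, s)
	}
	return prices, rows.Err()
}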
// Exec executes a query that doesn't return rows, such @@ -253,6 +258,8 @@ func (st *statement) Query(args []driver.Value) (driver.Rows, error) { // ExecContext executes a query that doesn't return rows, such as an INSERT or UPDATE. // // ExecContext must honor the context timeout and return when it is canceled. +// +// Cancelation/timeout is honored, execution is broken, but you may have to disable out-of-bound execution - see https://github.com/oracle/odpi/issues/116 for details. func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) (res driver.Result, err error) { if err = ctx.Err(); err != nil { return nil, err @@ -262,9 +269,10 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) closeIfBadConn := func(err error) error { if err != nil && err == driver.ErrBadConn { if Log != nil { - Log("error", driver.ErrBadConn) + Log("error", err) } - st.close() + st.close(false) + st.conn.close(true) } return err } @@ -277,8 +285,8 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) } st.isReturning = false - st.conn.RLock() - defer st.conn.RUnlock() + st.conn.mu.RLock() + defer st.conn.mu.RUnlock() // bind variables if err = st.bindVars(args, Log); err != nil { @@ -296,6 +304,7 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) // execute go func() { defer close(done) + var err error Loop: for i := 0; i < 3; i++ { if err = ctx.Err(); err != nil { @@ -328,15 +337,13 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) if err == nil { var info C.dpiStmtInfo if C.dpiStmt_getInfo(st.dpiStmt, &info) == C.DPI_FAILURE { - err = errors.Wrap(st.getError(), "getInfo") + err = errors.Errorf("getInfo: %w", st.getError()) } st.isReturning = info.isReturning != 0 - return + break } - cdr, ok := errors.Cause(err).(interface { - Code() int - }) - if !ok { + var cdr interface{ Code() int } + if !errors.As(err, &cdr) { break } switch code := cdr.Code(); code { @@ -349,7 +356,11 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) } break } - done <- maybeBadConn(errors.Wrapf(err, "dpiStmt_execute(mode=%d arrLen=%d)", mode, st.arrLen)) + if err == nil { + done <- nil + return + } + done <- maybeBadConn(errors.Errorf("dpiStmt_execute(mode=%d arrLen=%d): %w", mode, st.arrLen, err), nil) }() select { @@ -369,8 +380,13 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) Log("msg", "BREAK statement") } _ = st.Break() - st.cleanup() - return nil, driver.ErrBadConn + // For some reasons this SIGSEGVs if not not keepDpiStmt (try to close it), + st.close(true) + // so we hope that the following conn.Close closes the dpiStmt, too. + if err := st.conn.Close(); err != nil { + return nil, err + } + return nil, ctx.Err() } } @@ -386,7 +402,7 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) data := &st.data[i][0] if C.dpiVar_getReturnedData(st.vars[i], 0, &n, &data) == C.DPI_FAILURE { err = st.getError() - return nil, errors.Wrapf(closeIfBadConn(err), "%d.getReturnedData", i) + return nil, errors.Errorf("%d.getReturnedData: %w", i, closeIfBadConn(err)) } if n == 0 { st.data[i] = st.data[i][:0] @@ -400,7 +416,7 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) if Log != nil { Log("get", i, "error", err) } - return nil, errors.Wrapf(closeIfBadConn(err), "%d. get[%d]", i, 0) + return nil, errors.Errorf("%d. 
get[%d]: %w", i, 0, closeIfBadConn(err)) } continue } @@ -410,14 +426,14 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) if Log != nil { Log("msg", "getNumElementsInArray", "i", i, "error", err) } - return nil, errors.Wrapf(closeIfBadConn(err), "%d.getNumElementsInArray", i) + return nil, errors.Errorf("%d.getNumElementsInArray: %w", i, closeIfBadConn(err)) } //fmt.Printf("i=%d dest=%T %#v\n", i, dest, dest) if err = get(dest, st.data[i][:n]); err != nil { if Log != nil { Log("msg", "get", "i", i, "n", n, "error", err) } - return nil, errors.Wrapf(closeIfBadConn(err), "%d. get", i) + return nil, errors.Errorf("%d. get: %w", i, closeIfBadConn(err)) } } var count C.uint64_t @@ -430,6 +446,8 @@ func (st *statement) ExecContext(ctx context.Context, args []driver.NamedValue) // QueryContext executes a query that may return rows, such as a SELECT. // // QueryContext must honor the context timeout and return when it is canceled. +// +// Cancelation/timeout is honored, execution is broken, but you may have to disable out-of-bound execution - see https://github.com/oracle/odpi/issues/116 for details. func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) { if err := ctx.Err(); err != nil { return nil, err @@ -438,7 +456,11 @@ func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) closeIfBadConn := func(err error) error { if err != nil && err == driver.ErrBadConn { - st.close() + if Log != nil { + Log("error", err) + } + st.close(false) + st.conn.close(true) } return err } @@ -446,8 +468,8 @@ func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) st.Lock() defer st.Unlock() st.isReturning = false - st.conn.RLock() - defer st.conn.RUnlock() + st.conn.mu.RLock() + defer st.conn.mu.RUnlock() switch st.query { case getConnection: @@ -469,6 +491,12 @@ func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) return nil, closeIfBadConn(err) } + mode := st.ExecMode() + //fmt.Printf("%p.%p: inTran? %t\n%s\n", st.conn, st, st.inTransaction, st.query) + if !st.inTransaction { + mode |= C.DPI_MODE_EXEC_COMMIT_ON_SUCCESS + } + // execute var colCount C.uint32_t done := make(chan error, 1) @@ -481,7 +509,7 @@ func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) return } st.setCallTimeout(ctx) - if C.dpiStmt_execute(st.dpiStmt, st.ExecMode(), &colCount) != C.DPI_FAILURE { + if C.dpiStmt_execute(st.dpiStmt, mode, &colCount) != C.DPI_FAILURE { break } if err = ctx.Err(); err == nil { @@ -491,7 +519,11 @@ func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) } } } - done <- maybeBadConn(errors.Wrap(err, "dpiStmt_execute")) + if err == nil { + done <- nil + return + } + done <- maybeBadConn(errors.Errorf("dpiStmt_execute: %w", err), nil) }() select { @@ -510,8 +542,13 @@ func (st *statement) QueryContext(ctx context.Context, args []driver.NamedValue) Log("msg", "BREAK query") } _ = st.Break() - st.cleanup() - return nil, driver.ErrBadConn + // For some reasons this SIGSEGVs if not not keepDpiStmt (try to close it), + st.close(true) + // so we hope that the following conn.Close closes the dpiStmt, too. 
+ if err := st.conn.Close(); err != nil { + return nil, err + } + return nil, ctx.Err() } } rows, err := st.openRows(int(colCount)) @@ -539,10 +576,6 @@ func (st *statement) NumInput() int { return 0 } - if !go10 { - return -1 - } - st.Lock() defer st.Unlock() var cnt C.uint32_t @@ -587,12 +620,10 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { if Log != nil { Log("enter", "bindVars", "args", args) } - if cap(st.vars) < len(args) || cap(st.varInfos) < len(args) { - for i, v := range st.vars { - if v != nil { - C.dpiVar_release(v) - st.vars[i], st.varInfos[i] = nil, varInfo{} - } + for i, v := range st.vars[:cap(st.vars)] { + if v != nil { + C.dpiVar_release(v) + st.vars[i], st.varInfos[i] = nil, varInfo{} } } var named bool @@ -713,7 +744,7 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { var err error if value, err = st.bindVarTypeSwitch(info, &(st.gets[i]), value); err != nil { - return errors.Wrapf(err, "%d. arg", i+1) + return errors.Errorf("%d. arg: %w", i+1, err) } var rv reflect.Value @@ -742,12 +773,8 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { return errors.Errorf("maximum array size allowed is %d", maxArraySize) } if st.vars[i] == nil || st.data[i] == nil || st.varInfos[i] != vi { - if st.vars[i] != nil { - C.dpiVar_release(st.vars[i]) - st.vars[i] = nil - } if st.vars[i], st.data[i], err = st.newVar(vi); err != nil { - return errors.WithMessage(err, fmt.Sprintf("%d", i)) + return errors.Errorf("%d: %w", i, err) } st.varInfos[i] = vi } @@ -760,7 +787,7 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { Log("C", "dpiVar_setNumElementsInArray", "i", i, "n", 0) } if C.dpiVar_setNumElementsInArray(dv, C.uint32_t(0)) == C.DPI_FAILURE { - return errors.Wrapf(st.getError(), "setNumElementsInArray[%d](%d)", i, 0) + return errors.Errorf("setNumElementsInArray[%d](%d): %w", i, 0, st.getError()) } } continue @@ -771,7 +798,7 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { Log("msg", "set", "i", i, "value", fmt.Sprintf("%T=%#v", value, value)) } if err := info.set(dv, data[:1], value); err != nil { - return errors.Wrapf(err, "set(data[%d][%d], %#v (%T))", i, 0, value, value) + return errors.Errorf("set(data[%d][%d], %#v (%T)): %w", i, 0, value, value, err) } continue } @@ -783,7 +810,7 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { Log("C", "dpiVar_setNumElementsInArray", "i", i, "n", n) } if C.dpiVar_setNumElementsInArray(dv, C.uint32_t(n)) == C.DPI_FAILURE { - return errors.Wrapf(st.getError(), "%+v.setNumElementsInArray[%d](%d)", dv, i, n) + return errors.Errorf("%+v.setNumElementsInArray[%d](%d): %w", dv, i, n, st.getError()) } } //fmt.Println("n:", len(st.data[i])) @@ -796,7 +823,7 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { for i, v := range st.vars { //if Log != nil {Log("C", "dpiStmt_bindByPos", "dpiStmt", st.dpiStmt, "i", i, "v", v) } if C.dpiStmt_bindByPos(st.dpiStmt, C.uint32_t(i+1), v) == C.DPI_FAILURE { - return errors.Wrapf(st.getError(), "bindByPos[%d]", i) + return errors.Errorf("bindByPos[%d]: %w", i, st.getError()) } } return nil @@ -811,7 +838,7 @@ func (st *statement) bindVars(args []driver.NamedValue, Log logFunc) error { res := C.dpiStmt_bindByName(st.dpiStmt, cName, C.uint32_t(len(name)), st.vars[i]) C.free(unsafe.Pointer(cName)) if res == C.DPI_FAILURE { - return errors.Wrapf(st.getError(), "bindByName[%q]", name) + return 
errors.Errorf("bindByName[%q]: %w", name, st.getError()) } } return nil @@ -835,7 +862,7 @@ func (st *statement) bindVarTypeSwitch(info *argInfo, get *dataGetter, value int if isValuer { var err error if value, err = vlr.Value(); err != nil { - return value, errors.Wrap(err, "arg.Value()") + return value, errors.Errorf("arg.Value(): %w", err) } return st.bindVarTypeSwitch(info, get, value) } @@ -1017,7 +1044,7 @@ func (st *statement) bindVarTypeSwitch(info *argInfo, get *dataGetter, value int } info.set = dataSetBytes if info.isOut { - info.bufSize = 4000 + info.bufSize = 32767 *get = dataGetBytes } @@ -1073,8 +1100,10 @@ func (st *statement) bindVarTypeSwitch(info *argInfo, get *dataGetter, value int } case *Object: - info.objType = v.ObjectType.dpiObjectType - info.typ, info.natTyp = C.DPI_ORACLE_TYPE_OBJECT, C.DPI_NATIVE_TYPE_OBJECT + if !nilPtr && v != nil { + info.objType = v.ObjectType.dpiObjectType + info.typ, info.natTyp = C.DPI_ORACLE_TYPE_OBJECT, C.DPI_NATIVE_TYPE_OBJECT + } info.set = st.dataSetObject if info.isOut { *get = st.dataGetObject @@ -1094,7 +1123,7 @@ func (st *statement) bindVarTypeSwitch(info *argInfo, get *dataGetter, value int } var err error if value, err = vlr.Value(); err != nil { - return value, errors.Wrap(err, "arg.Value()") + return value, errors.Errorf("arg.Value(): %w", err) } return st.bindVarTypeSwitch(info, get, value) } @@ -1139,6 +1168,13 @@ func dataSetBool(dv *C.dpiVar, data []C.dpiData, vv interface{}) error { return dataSetNull(dv, data, nil) } b := C.int(0) + if v, ok := vv.(bool); ok { + if v { + b = 1 + } + C.dpiData_setBool(&data[0], b) + return nil + } if bb, ok := vv.([]bool); ok { for i, v := range bb { if v { @@ -1146,10 +1182,10 @@ func dataSetBool(dv *C.dpiVar, data []C.dpiData, vv interface{}) error { } C.dpiData_setBool(&data[i], b) } - } else { - for i := range data { - data[i].isNull = 1 - } + return nil + } + for i := range data { + data[i].isNull = 1 } return nil } @@ -1507,18 +1543,27 @@ func dataGetBytes(v interface{}, data []C.dpiData) error { *x = nil return nil } - b := C.dpiData_getBytes(&data[0]) + db := C.dpiData_getBytes(&data[0]) + b := ((*[32767]byte)(unsafe.Pointer(db.ptr)))[:db.length:db.length] + // b must be copied + *x = append((*x)[:0], b...) 
- *x = ((*[32767]byte)(unsafe.Pointer(b.ptr)))[:b.length:b.length] case *[][]byte: + maX := (*x)[:cap(*x)] *x = (*x)[:0] for i := range data { if data[i].isNull == 1 { *x = append(*x, nil) continue } - b := C.dpiData_getBytes(&data[i]) - *x = append(*x, ((*[32767]byte)(unsafe.Pointer(b.ptr)))[:b.length:b.length]) + db := C.dpiData_getBytes(&data[i]) + b := ((*[32767]byte)(unsafe.Pointer(db.ptr)))[:db.length:db.length] + // b must be copied + if i < len(maX) { + *x = append(*x, append(maX[i][:0], b...)) + } else { + *x = append(*x, append(make([]byte, 0, len(b)), b...)) + } } case *Number: @@ -1702,7 +1747,7 @@ func (c *conn) dataGetStmtC(row *driver.Rows, data *C.dpiData) error { var n C.uint32_t if C.dpiStmt_getNumQueryColumns(st.dpiStmt, &n) == C.DPI_FAILURE { *row = &rows{ - err: errors.Wrapf(io.EOF, "getNumQueryColumns: %v", c.getError()), + err: errors.Errorf("getNumQueryColumns: %w: %w", c.getError(), io.EOF), } return nil } @@ -1777,7 +1822,7 @@ func (c *conn) dataSetLOB(dv *C.dpiVar, data []C.dpiData, vv interface{}) error } var lob *C.dpiLob if C.dpiConn_newTempLob(c.dpiConn, typ, &lob) == C.DPI_FAILURE { - return errors.Wrapf(c.getError(), "newTempLob(typ=%d)", typ) + return errors.Errorf("newTempLob(typ=%d): %w", typ, c.getError()) } var chunkSize C.uint32_t _ = C.dpiLob_getChunkSize(lob, &chunkSize) @@ -1832,8 +1877,15 @@ func (c *conn) dataSetObject(dv *C.dpiVar, data []C.dpiData, vv interface{}) err switch o := vv.(type) { case Object: objs[0] = o + case *Object: + objs[0] = *o case []Object: objs = o + case []*Object: + objs = make([]Object, len(o)) + for i, x := range o { + objs[i] = *x + } case ObjectWriter: err := o.WriteObject() if err != nil { @@ -1858,10 +1910,12 @@ func (c *conn) dataSetObject(dv *C.dpiVar, data []C.dpiData, vv interface{}) err for i, obj := range objs { if obj.dpiObject == nil { data[i].isNull = 1 - return nil + continue } data[i].isNull = 0 - C.dpiVar_setFromObject(dv, C.uint32_t(i), obj.dpiObject) + if C.dpiVar_setFromObject(dv, C.uint32_t(i), obj.dpiObject) == C.DPI_FAILURE { + return errors.Errorf("setFromObject: %w", c.getError()) + } } return nil } @@ -1871,13 +1925,13 @@ func (c *conn) dataGetObject(v interface{}, data []C.dpiData) error { case *Object: d := Data{ ObjectType: out.ObjectType, - dpiData: &data[0], + dpiData: data[0], } *out = *d.GetObject() case ObjectScanner: d := Data{ ObjectType: out.ObjectRef().ObjectType, - dpiData: &data[0], + dpiData: data[0], } return out.Scan(d.GetObject()) default: @@ -1949,7 +2003,7 @@ func (st *statement) openRows(colCount int) (*rows, error) { var ti C.dpiDataTypeInfo for i := 0; i < colCount; i++ { if C.dpiStmt_getQueryInfo(st.dpiStmt, C.uint32_t(i+1), &info) == C.DPI_FAILURE { - return nil, errors.Wrapf(st.getError(), "getQueryInfo[%d]", i) + return nil, errors.Errorf("getQueryInfo[%d]: %w", i, st.getError()) } ti = info.typeInfo bufSize := int(ti.clientSizeInBytes) @@ -2002,11 +2056,11 @@ func (st *statement) openRows(colCount int) (*rows, error) { } if C.dpiStmt_define(st.dpiStmt, C.uint32_t(i+1), r.vars[i]) == C.DPI_FAILURE { - return nil, errors.Wrapf(st.getError(), "define[%d]", i) + return nil, errors.Errorf("define[%d]: %w", i, st.getError()) } } if C.dpiStmt_addRef(st.dpiStmt) == C.DPI_FAILURE { - return &r, errors.Wrap(st.getError(), "dpiStmt_addRef") + return &r, errors.Errorf("dpiStmt_addRef: %w", st.getError()) } st.columns = r.columns return &r, nil @@ -2023,3 +2077,41 @@ type Column struct { Scale C.int8_t Nullable bool } + +func dpiSetFromString(dv *C.dpiVar, pos C.uint32_t, x 
string) { + C.godror_setFromString(dv, pos, x) +} + +var stringBuilders = stringBuilderPool{ + p: &sync.Pool{New: func() interface{} { return &strings.Builder{} }}, +} + +type stringBuilderPool struct { + p *sync.Pool +} + +func (sb stringBuilderPool) Get() *strings.Builder { + return sb.p.Get().(*strings.Builder) +} +func (sb *stringBuilderPool) Put(b *strings.Builder) { + b.Reset() + sb.p.Put(b) +} + +/* +// ResetSession is called while a connection is in the connection +// pool. No queries will run on this connection until this method returns. +// +// If the connection is bad this should return driver.ErrBadConn to prevent +// the connection from being returned to the connection pool. Any other +// error will be discarded. +func (c *conn) ResetSession(ctx context.Context) error { + if Log != nil { + Log("msg", "ResetSession", "conn", c.dpiConn) + } + //subCtx, cancel := context.WithTimeout(ctx, 30*time.Second) + //err := c.Ping(subCtx) + //cancel() + return c.Ping(ctx) +} +*/ diff --git a/vendor/gopkg.in/goracle.v2/subscr.c b/vendor/github.com/godror/godror/subscr.c similarity index 100% rename from vendor/gopkg.in/goracle.v2/subscr.c rename to vendor/github.com/godror/godror/subscr.c diff --git a/vendor/gopkg.in/goracle.v2/subscr.go b/vendor/github.com/godror/godror/subscr.go similarity index 83% rename from vendor/gopkg.in/goracle.v2/subscr.go rename to vendor/github.com/godror/godror/subscr.go index 6286ec10c840..872f7ee6969f 100644 --- a/vendor/gopkg.in/goracle.v2/subscr.go +++ b/vendor/github.com/godror/godror/subscr.go @@ -1,19 +1,9 @@ // Copyright 2017 Tamás Gulácsi // // -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -package goracle +package godror /* #include @@ -28,9 +18,17 @@ import "C" import ( "log" "strings" + "sync" "unsafe" - "github.com/pkg/errors" + errors "golang.org/x/xerrors" +) + +// Cannot pass *Subscription to C, so pass an uint64 that points to this map entry +var ( + subscriptionsMu sync.Mutex + subscriptions = make(map[uint64]*Subscription) + subscriptionsID uint64 ) // CallbackSubscr is the callback for C code on subscription event. 
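The subscriptions map above works around the cgo rule that Go pointers must not be handed to C: only an integer handle crosses the boundary, and the callback looks the Go value up again. A generic sketch of that handle-registry pattern (the package and names here are illustrative, not part of the patch):

package handle

import "sync"

var (
	mu      sync.Mutex
	nextID  uint64
	entries = make(map[uint64]interface{})
)

// Register stores v and returns a numeric handle that can safely be passed
// through C code instead of a Go pointer.
func Register(v interface{}) uint64 {
	mu.Lock()
	defer mu.Unlock()
	nextID++
	entries[nextID] = v
	return nextID
}

// Lookup resolves a handle back to the stored value, typically from inside
// the C callback.
func Lookup(id uint64) (interface{}, bool) {
	mu.Lock()
	defer mu.Unlock()
	v, ok := entries[id]
	return v, ok
}

// Unregister releases the handle once the owning resource is closed.
func Unregister(id uint64) {
	mu.Lock()
	defer mu.Unlock()
	delete(entries, id)
}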
@@ -40,7 +38,9 @@ func CallbackSubscr(ctx unsafe.Pointer, message *C.dpiSubscrMessage) { if ctx == nil { return } - subscr := (*Subscription)(ctx) + subscriptionsMu.Lock() + subscr := subscriptions[*((*uint64)(ctx))] + subscriptionsMu.Unlock() getRows := func(rws *C.dpiSubscrMessageRow, rwsNum C.uint32_t) []RowEvent { if rwsNum == 0 { @@ -134,6 +134,7 @@ type Subscription struct { conn *conn dpiSubscr *C.dpiSubscr callback func(Event) + ID uint64 } func (s *Subscription) getError() error { return s.conn.getError() } @@ -150,7 +151,7 @@ func (c *conn) NewSubscription(name string, cb func(Event)) (*Subscription, erro subscr := Subscription{conn: c, callback: cb} params := (*C.dpiSubscrCreateParams)(C.malloc(C.sizeof_dpiSubscrCreateParams)) //defer func() { C.free(unsafe.Pointer(params)) }() - C.dpiContext_initSubscrCreateParams(c.dpiContext, params) + C.dpiContext_initSubscrCreateParams(c.drv.dpiContext, params) params.subscrNamespace = C.DPI_SUBSCR_NAMESPACE_DBCHANGE params.protocol = C.DPI_SUBSCR_PROTO_CALLBACK params.qos = C.DPI_SUBSCR_QOS_BEST_EFFORT | C.DPI_SUBSCR_QOS_QUERY | C.DPI_SUBSCR_QOS_ROWIDS @@ -161,7 +162,15 @@ func (c *conn) NewSubscription(name string, cb func(Event)) (*Subscription, erro } // typedef void (*dpiSubscrCallback)(void* context, dpiSubscrMessage *message); params.callback = C.dpiSubscrCallback(C.CallbackSubscrDebug) - params.callbackContext = unsafe.Pointer(&subscr) + // cannot pass &subscr to C, so pass indirectly + subscriptionsMu.Lock() + subscriptionsID++ + subscr.ID = subscriptionsID + subscriptions[subscr.ID] = &subscr + subscriptionsMu.Unlock() + subscrID := (*C.uint64_t)(C.malloc(8)) + *subscrID = C.uint64_t(subscriptionsID) + params.callbackContext = unsafe.Pointer(subscrID) dpiSubscr := (*C.dpiSubscr)(C.malloc(C.sizeof_void)) @@ -171,9 +180,9 @@ func (c *conn) NewSubscription(name string, cb func(Event)) (*Subscription, erro ) == C.DPI_FAILURE { C.free(unsafe.Pointer(params)) C.free(unsafe.Pointer(dpiSubscr)) - err := errors.Wrap(c.getError(), "newSubscription") - if strings.Contains(errors.Cause(err).Error(), "DPI-1065:") { - err = errors.WithMessage(err, "specify \"enableEvents=1\" connection parameter on connection to be able to use subscriptions") + err := errors.Errorf("newSubscription: %w", c.getError()) + if strings.Contains(errors.Unwrap(err).Error(), "DPI-1065:") { + err = errors.Errorf("specify \"enableEvents=1\" connection parameter on connection to be able to use subscriptions: %w", err) } return nil, err } @@ -190,18 +199,18 @@ func (s *Subscription) Register(qry string, params ...interface{}) error { var dpiStmt *C.dpiStmt if C.dpiSubscr_prepareStmt(s.dpiSubscr, cQry, C.uint32_t(len(qry)), &dpiStmt) == C.DPI_FAILURE { - return errors.Wrapf(s.getError(), "prepareStmt[%p]", s.dpiSubscr) + return errors.Errorf("prepareStmt[%p]: %w", s.dpiSubscr, s.getError()) } defer func() { C.dpiStmt_release(dpiStmt) }() mode := C.dpiExecMode(C.DPI_MODE_EXEC_DEFAULT) var qCols C.uint32_t if C.dpiStmt_execute(dpiStmt, mode, &qCols) == C.DPI_FAILURE { - return errors.Wrap(s.getError(), "executeStmt") + return errors.Errorf("executeStmt: %w", s.getError()) } var queryID C.uint64_t if C.dpiStmt_getSubscrQueryId(dpiStmt, &queryID) == C.DPI_FAILURE { - return errors.Wrap(s.getError(), "getSubscrQueryId") + return errors.Errorf("getSubscrQueryId: %w", s.getError()) } if Log != nil { Log("msg", "subscribed", "query", qry, "id", queryID) @@ -214,6 +223,9 @@ func (s *Subscription) Register(qry string, params ...interface{}) error { // // This code is EXPERIMENTAL 
yet! func (s *Subscription) Close() error { + subscriptionsMu.Lock() + delete(subscriptions, s.ID) + subscriptionsMu.Unlock() dpiSubscr := s.dpiSubscr conn := s.conn s.conn = nil @@ -223,7 +235,7 @@ func (s *Subscription) Close() error { return nil } if C.dpiConn_unsubscribe(conn.dpiConn, dpiSubscr) == C.DPI_FAILURE { - return errors.Wrap(s.getError(), "close") + return errors.Errorf("close: %w", s.getError()) } return nil } @@ -236,10 +248,10 @@ const ( EvtStartup = EventType(C.DPI_EVENT_STARTUP) EvtShutdown = EventType(C.DPI_EVENT_SHUTDOWN) EvtShutdownAny = EventType(C.DPI_EVENT_SHUTDOWN_ANY) - EvtDropDB = EventType(C.DPI_EVENT_DROP_DB) EvtDereg = EventType(C.DPI_EVENT_DEREG) EvtObjChange = EventType(C.DPI_EVENT_OBJCHANGE) EvtQueryChange = EventType(C.DPI_EVENT_QUERYCHANGE) + EvtAQ = EventType(C.DPI_EVENT_AQ) ) // Operation in the DB. diff --git a/vendor/gopkg.in/goracle.v2/version.go b/vendor/github.com/godror/godror/version.go similarity index 57% rename from vendor/gopkg.in/goracle.v2/version.go rename to vendor/github.com/godror/godror/version.go index 111b3462741c..66059da11416 100644 --- a/vendor/gopkg.in/goracle.v2/version.go +++ b/vendor/github.com/godror/godror/version.go @@ -1,6 +1,10 @@ -package goracle +// Copyright 2020 Tamás Gulácsi. +// +// SPDX-License-Identifier: UPL-1.0 OR Apache-2.0 -//go:generate bash -c "echo 3.1.4>odpi-version; set -x; curl -L https://github.com/oracle/odpi/archive/v$(cat odpi-version).tar.gz | tar xzvf - odpi-$(cat odpi-version)/{embed,include,src,CONTRIBUTING.md,LICENSE.md,README.md} && rm -rf odpi && mv odpi-$(cat odpi-version) odpi; rm -f odpi-version" +package godror + +//go:generate bash -c "echo 3.3.0>odpi-version; set -x; curl -L https://github.com/oracle/odpi/archive/v$(cat odpi-version).tar.gz | tar xzvf - odpi-$(cat odpi-version)/{embed,include,src,CONTRIBUTING.md,LICENSE.md,README.md} && rm -rf odpi && mv odpi-$(cat odpi-version) odpi; rm -f odpi-version" // Version of this driver -const Version = "v2.15.3" +const Version = "v0.10.4" diff --git a/vendor/github.com/gorilla/websocket/AUTHORS b/vendor/github.com/gorilla/websocket/AUTHORS new file mode 100644 index 000000000000..1931f400682c --- /dev/null +++ b/vendor/github.com/gorilla/websocket/AUTHORS @@ -0,0 +1,9 @@ +# This is the official list of Gorilla WebSocket authors for copyright +# purposes. +# +# Please keep the list sorted. + +Gary Burd +Google LLC (https://opensource.google.com/) +Joachim Bauch + diff --git a/vendor/github.com/gorilla/websocket/LICENSE b/vendor/github.com/gorilla/websocket/LICENSE new file mode 100644 index 000000000000..9171c9722522 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/LICENSE @@ -0,0 +1,22 @@ +Copyright (c) 2013 The Gorilla WebSocket Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + + Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/gorilla/websocket/README.md b/vendor/github.com/gorilla/websocket/README.md new file mode 100644 index 000000000000..0827d059c11a --- /dev/null +++ b/vendor/github.com/gorilla/websocket/README.md @@ -0,0 +1,64 @@ +# Gorilla WebSocket + +[![GoDoc](https://godoc.org/github.com/gorilla/websocket?status.svg)](https://godoc.org/github.com/gorilla/websocket) +[![CircleCI](https://circleci.com/gh/gorilla/websocket.svg?style=svg)](https://circleci.com/gh/gorilla/websocket) + +Gorilla WebSocket is a [Go](http://golang.org/) implementation of the +[WebSocket](http://www.rfc-editor.org/rfc/rfc6455.txt) protocol. + +### Documentation + +* [API Reference](http://godoc.org/github.com/gorilla/websocket) +* [Chat example](https://github.com/gorilla/websocket/tree/master/examples/chat) +* [Command example](https://github.com/gorilla/websocket/tree/master/examples/command) +* [Client and server example](https://github.com/gorilla/websocket/tree/master/examples/echo) +* [File watch example](https://github.com/gorilla/websocket/tree/master/examples/filewatch) + +### Status + +The Gorilla WebSocket package provides a complete and tested implementation of +the [WebSocket](http://www.rfc-editor.org/rfc/rfc6455.txt) protocol. The +package API is stable. + +### Installation + + go get github.com/gorilla/websocket + +### Protocol Compliance + +The Gorilla WebSocket package passes the server tests in the [Autobahn Test +Suite](https://github.com/crossbario/autobahn-testsuite) using the application in the [examples/autobahn +subdirectory](https://github.com/gorilla/websocket/tree/master/examples/autobahn). + +### Gorilla WebSocket compared with other packages + + + + + + + + + + + + + + + + + + +
+| | github.com/gorilla | golang.org/x/net |
+|---|---|---|
+| **RFC 6455 Features** | | |
+| Passes Autobahn Test Suite | Yes | No |
+| Receive fragmented message | Yes | No, see note 1 |
+| Send close message | Yes | No |
+| Send pings and receive pongs | Yes | No |
+| Get the type of a received data message | Yes | Yes, see note 2 |
+| **Other Features** | | |
+| Compression Extensions | Experimental | No |
+| Read message using io.Reader | Yes | No, see note 3 |
+| Write message using io.WriteCloser | Yes | No, see note 3 |
+ +Notes: + +1. Large messages are fragmented in [Chrome's new WebSocket implementation](http://www.ietf.org/mail-archive/web/hybi/current/msg10503.html). +2. The application can get the type of a received data message by implementing + a [Codec marshal](http://godoc.org/golang.org/x/net/websocket#Codec.Marshal) + function. +3. The go.net io.Reader and io.Writer operate across WebSocket frame boundaries. + Read returns when the input buffer is full or a frame boundary is + encountered. Each call to Write sends a single frame message. The Gorilla + io.Reader and io.WriteCloser operate on a single WebSocket message. + diff --git a/vendor/github.com/gorilla/websocket/client.go b/vendor/github.com/gorilla/websocket/client.go new file mode 100644 index 000000000000..962c06a391c2 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/client.go @@ -0,0 +1,395 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bytes" + "context" + "crypto/tls" + "errors" + "io" + "io/ioutil" + "net" + "net/http" + "net/http/httptrace" + "net/url" + "strings" + "time" +) + +// ErrBadHandshake is returned when the server response to opening handshake is +// invalid. +var ErrBadHandshake = errors.New("websocket: bad handshake") + +var errInvalidCompression = errors.New("websocket: invalid compression negotiation") + +// NewClient creates a new client connection using the given net connection. +// The URL u specifies the host and request URI. Use requestHeader to specify +// the origin (Origin), subprotocols (Sec-WebSocket-Protocol) and cookies +// (Cookie). Use the response.Header to get the selected subprotocol +// (Sec-WebSocket-Protocol) and cookies (Set-Cookie). +// +// If the WebSocket handshake fails, ErrBadHandshake is returned along with a +// non-nil *http.Response so that callers can handle redirects, authentication, +// etc. +// +// Deprecated: Use Dialer instead. +func NewClient(netConn net.Conn, u *url.URL, requestHeader http.Header, readBufSize, writeBufSize int) (c *Conn, response *http.Response, err error) { + d := Dialer{ + ReadBufferSize: readBufSize, + WriteBufferSize: writeBufSize, + NetDial: func(net, addr string) (net.Conn, error) { + return netConn, nil + }, + } + return d.Dial(u.String(), requestHeader) +} + +// A Dialer contains options for connecting to WebSocket server. +type Dialer struct { + // NetDial specifies the dial function for creating TCP connections. If + // NetDial is nil, net.Dial is used. + NetDial func(network, addr string) (net.Conn, error) + + // NetDialContext specifies the dial function for creating TCP connections. If + // NetDialContext is nil, net.DialContext is used. + NetDialContext func(ctx context.Context, network, addr string) (net.Conn, error) + + // Proxy specifies a function to return a proxy for a given + // Request. If the function returns a non-nil error, the + // request is aborted with the provided error. + // If Proxy is nil or returns a nil *URL, no proxy is used. + Proxy func(*http.Request) (*url.URL, error) + + // TLSClientConfig specifies the TLS configuration to use with tls.Client. + // If nil, the default configuration is used. + TLSClientConfig *tls.Config + + // HandshakeTimeout specifies the duration for the handshake to complete. + HandshakeTimeout time.Duration + + // ReadBufferSize and WriteBufferSize specify I/O buffer sizes in bytes. 
If a buffer + // size is zero, then a useful default size is used. The I/O buffer sizes + // do not limit the size of the messages that can be sent or received. + ReadBufferSize, WriteBufferSize int + + // WriteBufferPool is a pool of buffers for write operations. If the value + // is not set, then write buffers are allocated to the connection for the + // lifetime of the connection. + // + // A pool is most useful when the application has a modest volume of writes + // across a large number of connections. + // + // Applications should use a single pool for each unique value of + // WriteBufferSize. + WriteBufferPool BufferPool + + // Subprotocols specifies the client's requested subprotocols. + Subprotocols []string + + // EnableCompression specifies if the client should attempt to negotiate + // per message compression (RFC 7692). Setting this value to true does not + // guarantee that compression will be supported. Currently only "no context + // takeover" modes are supported. + EnableCompression bool + + // Jar specifies the cookie jar. + // If Jar is nil, cookies are not sent in requests and ignored + // in responses. + Jar http.CookieJar +} + +// Dial creates a new client connection by calling DialContext with a background context. +func (d *Dialer) Dial(urlStr string, requestHeader http.Header) (*Conn, *http.Response, error) { + return d.DialContext(context.Background(), urlStr, requestHeader) +} + +var errMalformedURL = errors.New("malformed ws or wss URL") + +func hostPortNoPort(u *url.URL) (hostPort, hostNoPort string) { + hostPort = u.Host + hostNoPort = u.Host + if i := strings.LastIndex(u.Host, ":"); i > strings.LastIndex(u.Host, "]") { + hostNoPort = hostNoPort[:i] + } else { + switch u.Scheme { + case "wss": + hostPort += ":443" + case "https": + hostPort += ":443" + default: + hostPort += ":80" + } + } + return hostPort, hostNoPort +} + +// DefaultDialer is a dialer with all fields set to the default values. +var DefaultDialer = &Dialer{ + Proxy: http.ProxyFromEnvironment, + HandshakeTimeout: 45 * time.Second, +} + +// nilDialer is dialer to use when receiver is nil. +var nilDialer = *DefaultDialer + +// DialContext creates a new client connection. Use requestHeader to specify the +// origin (Origin), subprotocols (Sec-WebSocket-Protocol) and cookies (Cookie). +// Use the response.Header to get the selected subprotocol +// (Sec-WebSocket-Protocol) and cookies (Set-Cookie). +// +// The context will be used in the request and in the Dialer. +// +// If the WebSocket handshake fails, ErrBadHandshake is returned along with a +// non-nil *http.Response so that callers can handle redirects, authentication, +// etcetera. The response body may not contain the entire response and does not +// need to be closed by the application. +func (d *Dialer) DialContext(ctx context.Context, urlStr string, requestHeader http.Header) (*Conn, *http.Response, error) { + if d == nil { + d = &nilDialer + } + + challengeKey, err := generateChallengeKey() + if err != nil { + return nil, nil, err + } + + u, err := url.Parse(urlStr) + if err != nil { + return nil, nil, err + } + + switch u.Scheme { + case "ws": + u.Scheme = "http" + case "wss": + u.Scheme = "https" + default: + return nil, nil, errMalformedURL + } + + if u.User != nil { + // User name and password are not allowed in websocket URIs. 
+ return nil, nil, errMalformedURL + } + + req := &http.Request{ + Method: "GET", + URL: u, + Proto: "HTTP/1.1", + ProtoMajor: 1, + ProtoMinor: 1, + Header: make(http.Header), + Host: u.Host, + } + req = req.WithContext(ctx) + + // Set the cookies present in the cookie jar of the dialer + if d.Jar != nil { + for _, cookie := range d.Jar.Cookies(u) { + req.AddCookie(cookie) + } + } + + // Set the request headers using the capitalization for names and values in + // RFC examples. Although the capitalization shouldn't matter, there are + // servers that depend on it. The Header.Set method is not used because the + // method canonicalizes the header names. + req.Header["Upgrade"] = []string{"websocket"} + req.Header["Connection"] = []string{"Upgrade"} + req.Header["Sec-WebSocket-Key"] = []string{challengeKey} + req.Header["Sec-WebSocket-Version"] = []string{"13"} + if len(d.Subprotocols) > 0 { + req.Header["Sec-WebSocket-Protocol"] = []string{strings.Join(d.Subprotocols, ", ")} + } + for k, vs := range requestHeader { + switch { + case k == "Host": + if len(vs) > 0 { + req.Host = vs[0] + } + case k == "Upgrade" || + k == "Connection" || + k == "Sec-Websocket-Key" || + k == "Sec-Websocket-Version" || + k == "Sec-Websocket-Extensions" || + (k == "Sec-Websocket-Protocol" && len(d.Subprotocols) > 0): + return nil, nil, errors.New("websocket: duplicate header not allowed: " + k) + case k == "Sec-Websocket-Protocol": + req.Header["Sec-WebSocket-Protocol"] = vs + default: + req.Header[k] = vs + } + } + + if d.EnableCompression { + req.Header["Sec-WebSocket-Extensions"] = []string{"permessage-deflate; server_no_context_takeover; client_no_context_takeover"} + } + + if d.HandshakeTimeout != 0 { + var cancel func() + ctx, cancel = context.WithTimeout(ctx, d.HandshakeTimeout) + defer cancel() + } + + // Get network dial function. + var netDial func(network, add string) (net.Conn, error) + + if d.NetDialContext != nil { + netDial = func(network, addr string) (net.Conn, error) { + return d.NetDialContext(ctx, network, addr) + } + } else if d.NetDial != nil { + netDial = d.NetDial + } else { + netDialer := &net.Dialer{} + netDial = func(network, addr string) (net.Conn, error) { + return netDialer.DialContext(ctx, network, addr) + } + } + + // If needed, wrap the dial function to set the connection deadline. + if deadline, ok := ctx.Deadline(); ok { + forwardDial := netDial + netDial = func(network, addr string) (net.Conn, error) { + c, err := forwardDial(network, addr) + if err != nil { + return nil, err + } + err = c.SetDeadline(deadline) + if err != nil { + c.Close() + return nil, err + } + return c, nil + } + } + + // If needed, wrap the dial function to connect through a proxy. 
+ if d.Proxy != nil { + proxyURL, err := d.Proxy(req) + if err != nil { + return nil, nil, err + } + if proxyURL != nil { + dialer, err := proxy_FromURL(proxyURL, netDialerFunc(netDial)) + if err != nil { + return nil, nil, err + } + netDial = dialer.Dial + } + } + + hostPort, hostNoPort := hostPortNoPort(u) + trace := httptrace.ContextClientTrace(ctx) + if trace != nil && trace.GetConn != nil { + trace.GetConn(hostPort) + } + + netConn, err := netDial("tcp", hostPort) + if trace != nil && trace.GotConn != nil { + trace.GotConn(httptrace.GotConnInfo{ + Conn: netConn, + }) + } + if err != nil { + return nil, nil, err + } + + defer func() { + if netConn != nil { + netConn.Close() + } + }() + + if u.Scheme == "https" { + cfg := cloneTLSConfig(d.TLSClientConfig) + if cfg.ServerName == "" { + cfg.ServerName = hostNoPort + } + tlsConn := tls.Client(netConn, cfg) + netConn = tlsConn + + var err error + if trace != nil { + err = doHandshakeWithTrace(trace, tlsConn, cfg) + } else { + err = doHandshake(tlsConn, cfg) + } + + if err != nil { + return nil, nil, err + } + } + + conn := newConn(netConn, false, d.ReadBufferSize, d.WriteBufferSize, d.WriteBufferPool, nil, nil) + + if err := req.Write(netConn); err != nil { + return nil, nil, err + } + + if trace != nil && trace.GotFirstResponseByte != nil { + if peek, err := conn.br.Peek(1); err == nil && len(peek) == 1 { + trace.GotFirstResponseByte() + } + } + + resp, err := http.ReadResponse(conn.br, req) + if err != nil { + return nil, nil, err + } + + if d.Jar != nil { + if rc := resp.Cookies(); len(rc) > 0 { + d.Jar.SetCookies(u, rc) + } + } + + if resp.StatusCode != 101 || + !strings.EqualFold(resp.Header.Get("Upgrade"), "websocket") || + !strings.EqualFold(resp.Header.Get("Connection"), "upgrade") || + resp.Header.Get("Sec-Websocket-Accept") != computeAcceptKey(challengeKey) { + // Before closing the network connection on return from this + // function, slurp up some of the response to aid application + // debugging. + buf := make([]byte, 1024) + n, _ := io.ReadFull(resp.Body, buf) + resp.Body = ioutil.NopCloser(bytes.NewReader(buf[:n])) + return nil, resp, ErrBadHandshake + } + + for _, ext := range parseExtensions(resp.Header) { + if ext[""] != "permessage-deflate" { + continue + } + _, snct := ext["server_no_context_takeover"] + _, cnct := ext["client_no_context_takeover"] + if !snct || !cnct { + return nil, resp, errInvalidCompression + } + conn.newCompressionWriter = compressNoContextTakeover + conn.newDecompressionReader = decompressNoContextTakeover + break + } + + resp.Body = ioutil.NopCloser(bytes.NewReader([]byte{})) + conn.subprotocol = resp.Header.Get("Sec-Websocket-Protocol") + + netConn.SetDeadline(time.Time{}) + netConn = nil // to avoid close in defer. + return conn, resp, nil +} + +func doHandshake(tlsConn *tls.Conn, cfg *tls.Config) error { + if err := tlsConn.Handshake(); err != nil { + return err + } + if !cfg.InsecureSkipVerify { + if err := tlsConn.VerifyHostname(cfg.ServerName); err != nil { + return err + } + } + return nil +} diff --git a/vendor/github.com/gorilla/websocket/client_clone.go b/vendor/github.com/gorilla/websocket/client_clone.go new file mode 100644 index 000000000000..4f0d943723a9 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/client_clone.go @@ -0,0 +1,16 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
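With DialContext in place above, a client-side sketch of the intended usage (the endpoint URL is a placeholder and error handling is kept minimal; this is illustration, not part of the vendored code):

package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	d := websocket.Dialer{
		Proxy:             http.ProxyFromEnvironment,
		HandshakeTimeout:  10 * time.Second,
		EnableCompression: true,
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	conn, resp, err := d.DialContext(ctx, "wss://example.com/socket", nil)
	if err != nil {
		// On a failed handshake resp may be non-nil and aid debugging.
		log.Fatalf("dial: %v (response: %+v)", err, resp)
	}
	defer conn.Close()

	if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
		log.Fatal(err)
	}
	_, msg, err := conn.ReadMessage()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received: %s", msg)
}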
+ +// +build go1.8 + +package websocket + +import "crypto/tls" + +func cloneTLSConfig(cfg *tls.Config) *tls.Config { + if cfg == nil { + return &tls.Config{} + } + return cfg.Clone() +} diff --git a/vendor/github.com/gorilla/websocket/client_clone_legacy.go b/vendor/github.com/gorilla/websocket/client_clone_legacy.go new file mode 100644 index 000000000000..babb007fb414 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/client_clone_legacy.go @@ -0,0 +1,38 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.8 + +package websocket + +import "crypto/tls" + +// cloneTLSConfig clones all public fields except the fields +// SessionTicketsDisabled and SessionTicketKey. This avoids copying the +// sync.Mutex in the sync.Once and makes it safe to call cloneTLSConfig on a +// config in active use. +func cloneTLSConfig(cfg *tls.Config) *tls.Config { + if cfg == nil { + return &tls.Config{} + } + return &tls.Config{ + Rand: cfg.Rand, + Time: cfg.Time, + Certificates: cfg.Certificates, + NameToCertificate: cfg.NameToCertificate, + GetCertificate: cfg.GetCertificate, + RootCAs: cfg.RootCAs, + NextProtos: cfg.NextProtos, + ServerName: cfg.ServerName, + ClientAuth: cfg.ClientAuth, + ClientCAs: cfg.ClientCAs, + InsecureSkipVerify: cfg.InsecureSkipVerify, + CipherSuites: cfg.CipherSuites, + PreferServerCipherSuites: cfg.PreferServerCipherSuites, + ClientSessionCache: cfg.ClientSessionCache, + MinVersion: cfg.MinVersion, + MaxVersion: cfg.MaxVersion, + CurvePreferences: cfg.CurvePreferences, + } +} diff --git a/vendor/github.com/gorilla/websocket/compression.go b/vendor/github.com/gorilla/websocket/compression.go new file mode 100644 index 000000000000..813ffb1e8433 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/compression.go @@ -0,0 +1,148 @@ +// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "compress/flate" + "errors" + "io" + "strings" + "sync" +) + +const ( + minCompressionLevel = -2 // flate.HuffmanOnly not defined in Go < 1.6 + maxCompressionLevel = flate.BestCompression + defaultCompressionLevel = 1 +) + +var ( + flateWriterPools [maxCompressionLevel - minCompressionLevel + 1]sync.Pool + flateReaderPool = sync.Pool{New: func() interface{} { + return flate.NewReader(nil) + }} +) + +func decompressNoContextTakeover(r io.Reader) io.ReadCloser { + const tail = + // Add four bytes as specified in RFC + "\x00\x00\xff\xff" + + // Add final block to squelch unexpected EOF error from flate reader. + "\x01\x00\x00\xff\xff" + + fr, _ := flateReaderPool.Get().(io.ReadCloser) + fr.(flate.Resetter).Reset(io.MultiReader(r, strings.NewReader(tail)), nil) + return &flateReadWrapper{fr} +} + +func isValidCompressionLevel(level int) bool { + return minCompressionLevel <= level && level <= maxCompressionLevel +} + +func compressNoContextTakeover(w io.WriteCloser, level int) io.WriteCloser { + p := &flateWriterPools[level-minCompressionLevel] + tw := &truncWriter{w: w} + fw, _ := p.Get().(*flate.Writer) + if fw == nil { + fw, _ = flate.NewWriter(tw, level) + } else { + fw.Reset(tw) + } + return &flateWriteWrapper{fw: fw, tw: tw, p: p} +} + +// truncWriter is an io.Writer that writes all but the last four bytes of the +// stream to another io.Writer. 
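The compression helpers above implement the RFC 7692 "no context takeover" framing: the write side drops the trailing 0x00 0x00 0xff 0xff emitted by a deflate sync flush, and the read side re-appends that tail plus a final empty block before handing the stream to the flate reader. A standalone sketch of that round trip (illustrative only, using only the standard library):

package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"strings"
)

func main() {
	// Compress and sync-flush, then drop the trailing 0x00 0x00 0xff 0xff,
	// as truncWriter does for an outgoing permessage-deflate frame.
	var buf bytes.Buffer
	fw, err := flate.NewWriter(&buf, 1)
	if err != nil {
		log.Fatal(err)
	}
	fw.Write([]byte("hello, websocket"))
	fw.Flush()
	payload := buf.Bytes()[:buf.Len()-4]

	// Decompress by re-appending the tail plus a final empty stored block,
	// mirroring decompressNoContextTakeover above.
	const tail = "\x00\x00\xff\xff" + "\x01\x00\x00\xff\xff"
	fr := flate.NewReader(io.MultiReader(bytes.NewReader(payload), strings.NewReader(tail)))
	out, err := ioutil.ReadAll(fr)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}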
+type truncWriter struct { + w io.WriteCloser + n int + p [4]byte +} + +func (w *truncWriter) Write(p []byte) (int, error) { + n := 0 + + // fill buffer first for simplicity. + if w.n < len(w.p) { + n = copy(w.p[w.n:], p) + p = p[n:] + w.n += n + if len(p) == 0 { + return n, nil + } + } + + m := len(p) + if m > len(w.p) { + m = len(w.p) + } + + if nn, err := w.w.Write(w.p[:m]); err != nil { + return n + nn, err + } + + copy(w.p[:], w.p[m:]) + copy(w.p[len(w.p)-m:], p[len(p)-m:]) + nn, err := w.w.Write(p[:len(p)-m]) + return n + nn, err +} + +type flateWriteWrapper struct { + fw *flate.Writer + tw *truncWriter + p *sync.Pool +} + +func (w *flateWriteWrapper) Write(p []byte) (int, error) { + if w.fw == nil { + return 0, errWriteClosed + } + return w.fw.Write(p) +} + +func (w *flateWriteWrapper) Close() error { + if w.fw == nil { + return errWriteClosed + } + err1 := w.fw.Flush() + w.p.Put(w.fw) + w.fw = nil + if w.tw.p != [4]byte{0, 0, 0xff, 0xff} { + return errors.New("websocket: internal error, unexpected bytes at end of flate stream") + } + err2 := w.tw.w.Close() + if err1 != nil { + return err1 + } + return err2 +} + +type flateReadWrapper struct { + fr io.ReadCloser +} + +func (r *flateReadWrapper) Read(p []byte) (int, error) { + if r.fr == nil { + return 0, io.ErrClosedPipe + } + n, err := r.fr.Read(p) + if err == io.EOF { + // Preemptively place the reader back in the pool. This helps with + // scenarios where the application does not call NextReader() soon after + // this final read. + r.Close() + } + return n, err +} + +func (r *flateReadWrapper) Close() error { + if r.fr == nil { + return io.ErrClosedPipe + } + err := r.fr.Close() + flateReaderPool.Put(r.fr) + r.fr = nil + return err +} diff --git a/vendor/github.com/gorilla/websocket/conn.go b/vendor/github.com/gorilla/websocket/conn.go new file mode 100644 index 000000000000..6f17cd299827 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/conn.go @@ -0,0 +1,1201 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bufio" + "encoding/binary" + "errors" + "io" + "io/ioutil" + "math/rand" + "net" + "strconv" + "sync" + "time" + "unicode/utf8" +) + +const ( + // Frame header byte 0 bits from Section 5.2 of RFC 6455 + finalBit = 1 << 7 + rsv1Bit = 1 << 6 + rsv2Bit = 1 << 5 + rsv3Bit = 1 << 4 + + // Frame header byte 1 bits from Section 5.2 of RFC 6455 + maskBit = 1 << 7 + + maxFrameHeaderSize = 2 + 8 + 4 // Fixed header + length + mask + maxControlFramePayloadSize = 125 + + writeWait = time.Second + + defaultReadBufferSize = 4096 + defaultWriteBufferSize = 4096 + + continuationFrame = 0 + noFrame = -1 +) + +// Close codes defined in RFC 6455, section 11.7. +const ( + CloseNormalClosure = 1000 + CloseGoingAway = 1001 + CloseProtocolError = 1002 + CloseUnsupportedData = 1003 + CloseNoStatusReceived = 1005 + CloseAbnormalClosure = 1006 + CloseInvalidFramePayloadData = 1007 + ClosePolicyViolation = 1008 + CloseMessageTooBig = 1009 + CloseMandatoryExtension = 1010 + CloseInternalServerErr = 1011 + CloseServiceRestart = 1012 + CloseTryAgainLater = 1013 + CloseTLSHandshake = 1015 +) + +// The message types are defined in RFC 6455, section 11.8. +const ( + // TextMessage denotes a text data message. The text message payload is + // interpreted as UTF-8 encoded text data. + TextMessage = 1 + + // BinaryMessage denotes a binary data message. 
+ BinaryMessage = 2 + + // CloseMessage denotes a close control message. The optional message + // payload contains a numeric code and text. Use the FormatCloseMessage + // function to format a close message payload. + CloseMessage = 8 + + // PingMessage denotes a ping control message. The optional message payload + // is UTF-8 encoded text. + PingMessage = 9 + + // PongMessage denotes a pong control message. The optional message payload + // is UTF-8 encoded text. + PongMessage = 10 +) + +// ErrCloseSent is returned when the application writes a message to the +// connection after sending a close message. +var ErrCloseSent = errors.New("websocket: close sent") + +// ErrReadLimit is returned when reading a message that is larger than the +// read limit set for the connection. +var ErrReadLimit = errors.New("websocket: read limit exceeded") + +// netError satisfies the net Error interface. +type netError struct { + msg string + temporary bool + timeout bool +} + +func (e *netError) Error() string { return e.msg } +func (e *netError) Temporary() bool { return e.temporary } +func (e *netError) Timeout() bool { return e.timeout } + +// CloseError represents a close message. +type CloseError struct { + // Code is defined in RFC 6455, section 11.7. + Code int + + // Text is the optional text payload. + Text string +} + +func (e *CloseError) Error() string { + s := []byte("websocket: close ") + s = strconv.AppendInt(s, int64(e.Code), 10) + switch e.Code { + case CloseNormalClosure: + s = append(s, " (normal)"...) + case CloseGoingAway: + s = append(s, " (going away)"...) + case CloseProtocolError: + s = append(s, " (protocol error)"...) + case CloseUnsupportedData: + s = append(s, " (unsupported data)"...) + case CloseNoStatusReceived: + s = append(s, " (no status)"...) + case CloseAbnormalClosure: + s = append(s, " (abnormal closure)"...) + case CloseInvalidFramePayloadData: + s = append(s, " (invalid payload data)"...) + case ClosePolicyViolation: + s = append(s, " (policy violation)"...) + case CloseMessageTooBig: + s = append(s, " (message too big)"...) + case CloseMandatoryExtension: + s = append(s, " (mandatory extension missing)"...) + case CloseInternalServerErr: + s = append(s, " (internal server error)"...) + case CloseTLSHandshake: + s = append(s, " (TLS handshake error)"...) + } + if e.Text != "" { + s = append(s, ": "...) + s = append(s, e.Text...) + } + return string(s) +} + +// IsCloseError returns boolean indicating whether the error is a *CloseError +// with one of the specified codes. +func IsCloseError(err error, codes ...int) bool { + if e, ok := err.(*CloseError); ok { + for _, code := range codes { + if e.Code == code { + return true + } + } + } + return false +} + +// IsUnexpectedCloseError returns boolean indicating whether the error is a +// *CloseError with a code not in the list of expected codes. 
+func IsUnexpectedCloseError(err error, expectedCodes ...int) bool { + if e, ok := err.(*CloseError); ok { + for _, code := range expectedCodes { + if e.Code == code { + return false + } + } + return true + } + return false +} + +var ( + errWriteTimeout = &netError{msg: "websocket: write timeout", timeout: true, temporary: true} + errUnexpectedEOF = &CloseError{Code: CloseAbnormalClosure, Text: io.ErrUnexpectedEOF.Error()} + errBadWriteOpCode = errors.New("websocket: bad write message type") + errWriteClosed = errors.New("websocket: write closed") + errInvalidControlFrame = errors.New("websocket: invalid control frame") +) + +func newMaskKey() [4]byte { + n := rand.Uint32() + return [4]byte{byte(n), byte(n >> 8), byte(n >> 16), byte(n >> 24)} +} + +func hideTempErr(err error) error { + if e, ok := err.(net.Error); ok && e.Temporary() { + err = &netError{msg: e.Error(), timeout: e.Timeout()} + } + return err +} + +func isControl(frameType int) bool { + return frameType == CloseMessage || frameType == PingMessage || frameType == PongMessage +} + +func isData(frameType int) bool { + return frameType == TextMessage || frameType == BinaryMessage +} + +var validReceivedCloseCodes = map[int]bool{ + // see http://www.iana.org/assignments/websocket/websocket.xhtml#close-code-number + + CloseNormalClosure: true, + CloseGoingAway: true, + CloseProtocolError: true, + CloseUnsupportedData: true, + CloseNoStatusReceived: false, + CloseAbnormalClosure: false, + CloseInvalidFramePayloadData: true, + ClosePolicyViolation: true, + CloseMessageTooBig: true, + CloseMandatoryExtension: true, + CloseInternalServerErr: true, + CloseServiceRestart: true, + CloseTryAgainLater: true, + CloseTLSHandshake: false, +} + +func isValidReceivedCloseCode(code int) bool { + return validReceivedCloseCodes[code] || (code >= 3000 && code <= 4999) +} + +// BufferPool represents a pool of buffers. The *sync.Pool type satisfies this +// interface. The type of the value stored in a pool is not specified. +type BufferPool interface { + // Get gets a value from the pool or returns nil if the pool is empty. + Get() interface{} + // Put adds a value to the pool. + Put(interface{}) +} + +// writePoolData is the type added to the write buffer pool. This wrapper is +// used to prevent applications from peeking at and depending on the values +// added to the pool. +type writePoolData struct{ buf []byte } + +// The Conn type represents a WebSocket connection. +type Conn struct { + conn net.Conn + isServer bool + subprotocol string + + // Write fields + mu chan bool // used as mutex to protect write to conn + writeBuf []byte // frame is constructed in this buffer. + writePool BufferPool + writeBufSize int + writeDeadline time.Time + writer io.WriteCloser // the current writer returned to the application + isWriting bool // for best-effort concurrent write detection + + writeErrMu sync.Mutex + writeErr error + + enableWriteCompression bool + compressionLevel int + newCompressionWriter func(io.WriteCloser, int) io.WriteCloser + + // Read fields + reader io.ReadCloser // the current reader returned to the application + readErr error + br *bufio.Reader + // bytes remaining in current frame. + // set setReadRemaining to safely update this value and prevent overflow + readRemaining int64 + readFinal bool // true the current message has more frames. + readLength int64 // Message size. + readLimit int64 // Maximum message size. 
+ readMaskPos int + readMaskKey [4]byte + handlePong func(string) error + handlePing func(string) error + handleClose func(int, string) error + readErrCount int + messageReader *messageReader // the current low-level reader + + readDecompress bool // whether last read frame had RSV1 set + newDecompressionReader func(io.Reader) io.ReadCloser +} + +func newConn(conn net.Conn, isServer bool, readBufferSize, writeBufferSize int, writeBufferPool BufferPool, br *bufio.Reader, writeBuf []byte) *Conn { + + if br == nil { + if readBufferSize == 0 { + readBufferSize = defaultReadBufferSize + } else if readBufferSize < maxControlFramePayloadSize { + // must be large enough for control frame + readBufferSize = maxControlFramePayloadSize + } + br = bufio.NewReaderSize(conn, readBufferSize) + } + + if writeBufferSize <= 0 { + writeBufferSize = defaultWriteBufferSize + } + writeBufferSize += maxFrameHeaderSize + + if writeBuf == nil && writeBufferPool == nil { + writeBuf = make([]byte, writeBufferSize) + } + + mu := make(chan bool, 1) + mu <- true + c := &Conn{ + isServer: isServer, + br: br, + conn: conn, + mu: mu, + readFinal: true, + writeBuf: writeBuf, + writePool: writeBufferPool, + writeBufSize: writeBufferSize, + enableWriteCompression: true, + compressionLevel: defaultCompressionLevel, + } + c.SetCloseHandler(nil) + c.SetPingHandler(nil) + c.SetPongHandler(nil) + return c +} + +// setReadRemaining tracks the number of bytes remaining on the connection. If n +// overflows, an ErrReadLimit is returned. +func (c *Conn) setReadRemaining(n int64) error { + if n < 0 { + return ErrReadLimit + } + + c.readRemaining = n + return nil +} + +// Subprotocol returns the negotiated protocol for the connection. +func (c *Conn) Subprotocol() string { + return c.subprotocol +} + +// Close closes the underlying network connection without sending or waiting +// for a close message. +func (c *Conn) Close() error { + return c.conn.Close() +} + +// LocalAddr returns the local network address. +func (c *Conn) LocalAddr() net.Addr { + return c.conn.LocalAddr() +} + +// RemoteAddr returns the remote network address. +func (c *Conn) RemoteAddr() net.Addr { + return c.conn.RemoteAddr() +} + +// Write methods + +func (c *Conn) writeFatal(err error) error { + err = hideTempErr(err) + c.writeErrMu.Lock() + if c.writeErr == nil { + c.writeErr = err + } + c.writeErrMu.Unlock() + return err +} + +func (c *Conn) read(n int) ([]byte, error) { + p, err := c.br.Peek(n) + if err == io.EOF { + err = errUnexpectedEOF + } + c.br.Discard(len(p)) + return p, err +} + +func (c *Conn) write(frameType int, deadline time.Time, buf0, buf1 []byte) error { + <-c.mu + defer func() { c.mu <- true }() + + c.writeErrMu.Lock() + err := c.writeErr + c.writeErrMu.Unlock() + if err != nil { + return err + } + + c.conn.SetWriteDeadline(deadline) + if len(buf1) == 0 { + _, err = c.conn.Write(buf0) + } else { + err = c.writeBufs(buf0, buf1) + } + if err != nil { + return c.writeFatal(err) + } + if frameType == CloseMessage { + c.writeFatal(ErrCloseSent) + } + return nil +} + +// WriteControl writes a control message with the given deadline. The allowed +// message types are CloseMessage, PingMessage and PongMessage. 
+func (c *Conn) WriteControl(messageType int, data []byte, deadline time.Time) error { + if !isControl(messageType) { + return errBadWriteOpCode + } + if len(data) > maxControlFramePayloadSize { + return errInvalidControlFrame + } + + b0 := byte(messageType) | finalBit + b1 := byte(len(data)) + if !c.isServer { + b1 |= maskBit + } + + buf := make([]byte, 0, maxFrameHeaderSize+maxControlFramePayloadSize) + buf = append(buf, b0, b1) + + if c.isServer { + buf = append(buf, data...) + } else { + key := newMaskKey() + buf = append(buf, key[:]...) + buf = append(buf, data...) + maskBytes(key, 0, buf[6:]) + } + + d := time.Hour * 1000 + if !deadline.IsZero() { + d = deadline.Sub(time.Now()) + if d < 0 { + return errWriteTimeout + } + } + + timer := time.NewTimer(d) + select { + case <-c.mu: + timer.Stop() + case <-timer.C: + return errWriteTimeout + } + defer func() { c.mu <- true }() + + c.writeErrMu.Lock() + err := c.writeErr + c.writeErrMu.Unlock() + if err != nil { + return err + } + + c.conn.SetWriteDeadline(deadline) + _, err = c.conn.Write(buf) + if err != nil { + return c.writeFatal(err) + } + if messageType == CloseMessage { + c.writeFatal(ErrCloseSent) + } + return err +} + +// beginMessage prepares a connection and message writer for a new message. +func (c *Conn) beginMessage(mw *messageWriter, messageType int) error { + // Close previous writer if not already closed by the application. It's + // probably better to return an error in this situation, but we cannot + // change this without breaking existing applications. + if c.writer != nil { + c.writer.Close() + c.writer = nil + } + + if !isControl(messageType) && !isData(messageType) { + return errBadWriteOpCode + } + + c.writeErrMu.Lock() + err := c.writeErr + c.writeErrMu.Unlock() + if err != nil { + return err + } + + mw.c = c + mw.frameType = messageType + mw.pos = maxFrameHeaderSize + + if c.writeBuf == nil { + wpd, ok := c.writePool.Get().(writePoolData) + if ok { + c.writeBuf = wpd.buf + } else { + c.writeBuf = make([]byte, c.writeBufSize) + } + } + return nil +} + +// NextWriter returns a writer for the next message to send. The writer's Close +// method flushes the complete message to the network. +// +// There can be at most one open writer on a connection. NextWriter closes the +// previous writer if the application has not already done so. +// +// All message types (TextMessage, BinaryMessage, CloseMessage, PingMessage and +// PongMessage) are supported. +func (c *Conn) NextWriter(messageType int) (io.WriteCloser, error) { + var mw messageWriter + if err := c.beginMessage(&mw, messageType); err != nil { + return nil, err + } + c.writer = &mw + if c.newCompressionWriter != nil && c.enableWriteCompression && isData(messageType) { + w := c.newCompressionWriter(c.writer, c.compressionLevel) + mw.compress = true + c.writer = w + } + return c.writer, nil +} + +type messageWriter struct { + c *Conn + compress bool // whether next call to flushFrame should set RSV1 + pos int // end of data in writeBuf. + frameType int // type of the current frame. + err error +} + +func (w *messageWriter) endMessage(err error) error { + if w.err != nil { + return err + } + c := w.c + w.err = err + c.writer = nil + if c.writePool != nil { + c.writePool.Put(writePoolData{buf: c.writeBuf}) + c.writeBuf = nil + } + return err +} + +// flushFrame writes buffered data and extra as a frame to the network. The +// final argument indicates that this is the last frame in the message. 
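WriteControl above is the entry point for control frames; a common pattern is a keepalive goroutine that pings on a ticker. A sketch (interval values are arbitrary; WriteControl acquires the connection's write lock itself, so this can run alongside the normal write path):

package wsutil

import (
	"time"

	"github.com/gorilla/websocket"
)

// keepAlive sends a ping every pingPeriod until done is closed.
func keepAlive(conn *websocket.Conn, done <-chan struct{}) {
	const pingPeriod = 30 * time.Second
	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			deadline := time.Now().Add(10 * time.Second)
			if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
				return // the connection is broken; let the read loop clean up
			}
		case <-done:
			return
		}
	}
}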
+func (w *messageWriter) flushFrame(final bool, extra []byte) error { + c := w.c + length := w.pos - maxFrameHeaderSize + len(extra) + + // Check for invalid control frames. + if isControl(w.frameType) && + (!final || length > maxControlFramePayloadSize) { + return w.endMessage(errInvalidControlFrame) + } + + b0 := byte(w.frameType) + if final { + b0 |= finalBit + } + if w.compress { + b0 |= rsv1Bit + } + w.compress = false + + b1 := byte(0) + if !c.isServer { + b1 |= maskBit + } + + // Assume that the frame starts at beginning of c.writeBuf. + framePos := 0 + if c.isServer { + // Adjust up if mask not included in the header. + framePos = 4 + } + + switch { + case length >= 65536: + c.writeBuf[framePos] = b0 + c.writeBuf[framePos+1] = b1 | 127 + binary.BigEndian.PutUint64(c.writeBuf[framePos+2:], uint64(length)) + case length > 125: + framePos += 6 + c.writeBuf[framePos] = b0 + c.writeBuf[framePos+1] = b1 | 126 + binary.BigEndian.PutUint16(c.writeBuf[framePos+2:], uint16(length)) + default: + framePos += 8 + c.writeBuf[framePos] = b0 + c.writeBuf[framePos+1] = b1 | byte(length) + } + + if !c.isServer { + key := newMaskKey() + copy(c.writeBuf[maxFrameHeaderSize-4:], key[:]) + maskBytes(key, 0, c.writeBuf[maxFrameHeaderSize:w.pos]) + if len(extra) > 0 { + return w.endMessage(c.writeFatal(errors.New("websocket: internal error, extra used in client mode"))) + } + } + + // Write the buffers to the connection with best-effort detection of + // concurrent writes. See the concurrency section in the package + // documentation for more info. + + if c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = true + + err := c.write(w.frameType, c.writeDeadline, c.writeBuf[framePos:w.pos], extra) + + if !c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = false + + if err != nil { + return w.endMessage(err) + } + + if final { + w.endMessage(errWriteClosed) + return nil + } + + // Setup for next frame. + w.pos = maxFrameHeaderSize + w.frameType = continuationFrame + return nil +} + +func (w *messageWriter) ncopy(max int) (int, error) { + n := len(w.c.writeBuf) - w.pos + if n <= 0 { + if err := w.flushFrame(false, nil); err != nil { + return 0, err + } + n = len(w.c.writeBuf) - w.pos + } + if n > max { + n = max + } + return n, nil +} + +func (w *messageWriter) Write(p []byte) (int, error) { + if w.err != nil { + return 0, w.err + } + + if len(p) > 2*len(w.c.writeBuf) && w.c.isServer { + // Don't buffer large messages. 
+ err := w.flushFrame(false, p) + if err != nil { + return 0, err + } + return len(p), nil + } + + nn := len(p) + for len(p) > 0 { + n, err := w.ncopy(len(p)) + if err != nil { + return 0, err + } + copy(w.c.writeBuf[w.pos:], p[:n]) + w.pos += n + p = p[n:] + } + return nn, nil +} + +func (w *messageWriter) WriteString(p string) (int, error) { + if w.err != nil { + return 0, w.err + } + + nn := len(p) + for len(p) > 0 { + n, err := w.ncopy(len(p)) + if err != nil { + return 0, err + } + copy(w.c.writeBuf[w.pos:], p[:n]) + w.pos += n + p = p[n:] + } + return nn, nil +} + +func (w *messageWriter) ReadFrom(r io.Reader) (nn int64, err error) { + if w.err != nil { + return 0, w.err + } + for { + if w.pos == len(w.c.writeBuf) { + err = w.flushFrame(false, nil) + if err != nil { + break + } + } + var n int + n, err = r.Read(w.c.writeBuf[w.pos:]) + w.pos += n + nn += int64(n) + if err != nil { + if err == io.EOF { + err = nil + } + break + } + } + return nn, err +} + +func (w *messageWriter) Close() error { + if w.err != nil { + return w.err + } + return w.flushFrame(true, nil) +} + +// WritePreparedMessage writes prepared message into connection. +func (c *Conn) WritePreparedMessage(pm *PreparedMessage) error { + frameType, frameData, err := pm.frame(prepareKey{ + isServer: c.isServer, + compress: c.newCompressionWriter != nil && c.enableWriteCompression && isData(pm.messageType), + compressionLevel: c.compressionLevel, + }) + if err != nil { + return err + } + if c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = true + err = c.write(frameType, c.writeDeadline, frameData, nil) + if !c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = false + return err +} + +// WriteMessage is a helper method for getting a writer using NextWriter, +// writing the message and closing the writer. +func (c *Conn) WriteMessage(messageType int, data []byte) error { + + if c.isServer && (c.newCompressionWriter == nil || !c.enableWriteCompression) { + // Fast path with no allocations and single frame. + + var mw messageWriter + if err := c.beginMessage(&mw, messageType); err != nil { + return err + } + n := copy(c.writeBuf[mw.pos:], data) + mw.pos += n + data = data[n:] + return mw.flushFrame(true, data) + } + + w, err := c.NextWriter(messageType) + if err != nil { + return err + } + if _, err = w.Write(data); err != nil { + return err + } + return w.Close() +} + +// SetWriteDeadline sets the write deadline on the underlying network +// connection. After a write has timed out, the websocket state is corrupt and +// all future writes will return an error. A zero value for t means writes will +// not time out. +func (c *Conn) SetWriteDeadline(t time.Time) error { + c.writeDeadline = t + return nil +} + +// Read methods + +func (c *Conn) advanceFrame() (int, error) { + // 1. Skip remainder of previous frame. + + if c.readRemaining > 0 { + if _, err := io.CopyN(ioutil.Discard, c.br, c.readRemaining); err != nil { + return noFrame, err + } + } + + // 2. Read and parse first two bytes of frame header. 
+ + p, err := c.read(2) + if err != nil { + return noFrame, err + } + + final := p[0]&finalBit != 0 + frameType := int(p[0] & 0xf) + mask := p[1]&maskBit != 0 + c.setReadRemaining(int64(p[1] & 0x7f)) + + c.readDecompress = false + if c.newDecompressionReader != nil && (p[0]&rsv1Bit) != 0 { + c.readDecompress = true + p[0] &^= rsv1Bit + } + + if rsv := p[0] & (rsv1Bit | rsv2Bit | rsv3Bit); rsv != 0 { + return noFrame, c.handleProtocolError("unexpected reserved bits 0x" + strconv.FormatInt(int64(rsv), 16)) + } + + switch frameType { + case CloseMessage, PingMessage, PongMessage: + if c.readRemaining > maxControlFramePayloadSize { + return noFrame, c.handleProtocolError("control frame length > 125") + } + if !final { + return noFrame, c.handleProtocolError("control frame not final") + } + case TextMessage, BinaryMessage: + if !c.readFinal { + return noFrame, c.handleProtocolError("message start before final message frame") + } + c.readFinal = final + case continuationFrame: + if c.readFinal { + return noFrame, c.handleProtocolError("continuation after final message frame") + } + c.readFinal = final + default: + return noFrame, c.handleProtocolError("unknown opcode " + strconv.Itoa(frameType)) + } + + // 3. Read and parse frame length as per + // https://tools.ietf.org/html/rfc6455#section-5.2 + // + // The length of the "Payload data", in bytes: if 0-125, that is the payload + // length. + // - If 126, the following 2 bytes interpreted as a 16-bit unsigned + // integer are the payload length. + // - If 127, the following 8 bytes interpreted as + // a 64-bit unsigned integer (the most significant bit MUST be 0) are the + // payload length. Multibyte length quantities are expressed in network byte + // order. + + switch c.readRemaining { + case 126: + p, err := c.read(2) + if err != nil { + return noFrame, err + } + + if err := c.setReadRemaining(int64(binary.BigEndian.Uint16(p))); err != nil { + return noFrame, err + } + case 127: + p, err := c.read(8) + if err != nil { + return noFrame, err + } + + if err := c.setReadRemaining(int64(binary.BigEndian.Uint64(p))); err != nil { + return noFrame, err + } + } + + // 4. Handle frame masking. + + if mask != c.isServer { + return noFrame, c.handleProtocolError("incorrect mask flag") + } + + if mask { + c.readMaskPos = 0 + p, err := c.read(len(c.readMaskKey)) + if err != nil { + return noFrame, err + } + copy(c.readMaskKey[:], p) + } + + // 5. For text and binary messages, enforce read limit and return. + + if frameType == continuationFrame || frameType == TextMessage || frameType == BinaryMessage { + + c.readLength += c.readRemaining + // Don't allow readLength to overflow in the presence of a large readRemaining + // counter. + if c.readLength < 0 { + return noFrame, ErrReadLimit + } + + if c.readLimit > 0 && c.readLength > c.readLimit { + c.WriteControl(CloseMessage, FormatCloseMessage(CloseMessageTooBig, ""), time.Now().Add(writeWait)) + return noFrame, ErrReadLimit + } + + return frameType, nil + } + + // 6. Read control frame payload. + + var payload []byte + if c.readRemaining > 0 { + payload, err = c.read(int(c.readRemaining)) + c.setReadRemaining(0) + if err != nil { + return noFrame, err + } + if c.isServer { + maskBytes(c.readMaskKey, 0, payload) + } + } + + // 7. Process control frame payload. 
+ + switch frameType { + case PongMessage: + if err := c.handlePong(string(payload)); err != nil { + return noFrame, err + } + case PingMessage: + if err := c.handlePing(string(payload)); err != nil { + return noFrame, err + } + case CloseMessage: + closeCode := CloseNoStatusReceived + closeText := "" + if len(payload) >= 2 { + closeCode = int(binary.BigEndian.Uint16(payload)) + if !isValidReceivedCloseCode(closeCode) { + return noFrame, c.handleProtocolError("invalid close code") + } + closeText = string(payload[2:]) + if !utf8.ValidString(closeText) { + return noFrame, c.handleProtocolError("invalid utf8 payload in close frame") + } + } + if err := c.handleClose(closeCode, closeText); err != nil { + return noFrame, err + } + return noFrame, &CloseError{Code: closeCode, Text: closeText} + } + + return frameType, nil +} + +func (c *Conn) handleProtocolError(message string) error { + c.WriteControl(CloseMessage, FormatCloseMessage(CloseProtocolError, message), time.Now().Add(writeWait)) + return errors.New("websocket: " + message) +} + +// NextReader returns the next data message received from the peer. The +// returned messageType is either TextMessage or BinaryMessage. +// +// There can be at most one open reader on a connection. NextReader discards +// the previous message if the application has not already consumed it. +// +// Applications must break out of the application's read loop when this method +// returns a non-nil error value. Errors returned from this method are +// permanent. Once this method returns a non-nil error, all subsequent calls to +// this method return the same error. +func (c *Conn) NextReader() (messageType int, r io.Reader, err error) { + // Close previous reader, only relevant for decompression. + if c.reader != nil { + c.reader.Close() + c.reader = nil + } + + c.messageReader = nil + c.readLength = 0 + + for c.readErr == nil { + frameType, err := c.advanceFrame() + if err != nil { + c.readErr = hideTempErr(err) + break + } + + if frameType == TextMessage || frameType == BinaryMessage { + c.messageReader = &messageReader{c} + c.reader = c.messageReader + if c.readDecompress { + c.reader = c.newDecompressionReader(c.reader) + } + return frameType, c.reader, nil + } + } + + // Applications that do handle the error returned from this method spin in + // tight loop on connection failure. To help application developers detect + // this error, panic on repeated reads to the failed connection. 
+ c.readErrCount++ + if c.readErrCount >= 1000 { + panic("repeated read on failed websocket connection") + } + + return noFrame, nil, c.readErr +} + +type messageReader struct{ c *Conn } + +func (r *messageReader) Read(b []byte) (int, error) { + c := r.c + if c.messageReader != r { + return 0, io.EOF + } + + for c.readErr == nil { + + if c.readRemaining > 0 { + if int64(len(b)) > c.readRemaining { + b = b[:c.readRemaining] + } + n, err := c.br.Read(b) + c.readErr = hideTempErr(err) + if c.isServer { + c.readMaskPos = maskBytes(c.readMaskKey, c.readMaskPos, b[:n]) + } + rem := c.readRemaining + rem -= int64(n) + c.setReadRemaining(rem) + if c.readRemaining > 0 && c.readErr == io.EOF { + c.readErr = errUnexpectedEOF + } + return n, c.readErr + } + + if c.readFinal { + c.messageReader = nil + return 0, io.EOF + } + + frameType, err := c.advanceFrame() + switch { + case err != nil: + c.readErr = hideTempErr(err) + case frameType == TextMessage || frameType == BinaryMessage: + c.readErr = errors.New("websocket: internal error, unexpected text or binary in Reader") + } + } + + err := c.readErr + if err == io.EOF && c.messageReader == r { + err = errUnexpectedEOF + } + return 0, err +} + +func (r *messageReader) Close() error { + return nil +} + +// ReadMessage is a helper method for getting a reader using NextReader and +// reading from that reader to a buffer. +func (c *Conn) ReadMessage() (messageType int, p []byte, err error) { + var r io.Reader + messageType, r, err = c.NextReader() + if err != nil { + return messageType, nil, err + } + p, err = ioutil.ReadAll(r) + return messageType, p, err +} + +// SetReadDeadline sets the read deadline on the underlying network connection. +// After a read has timed out, the websocket connection state is corrupt and +// all future reads will return an error. A zero value for t means reads will +// not time out. +func (c *Conn) SetReadDeadline(t time.Time) error { + return c.conn.SetReadDeadline(t) +} + +// SetReadLimit sets the maximum size in bytes for a message read from the peer. If a +// message exceeds the limit, the connection sends a close message to the peer +// and returns ErrReadLimit to the application. +func (c *Conn) SetReadLimit(limit int64) { + c.readLimit = limit +} + +// CloseHandler returns the current close handler +func (c *Conn) CloseHandler() func(code int, text string) error { + return c.handleClose +} + +// SetCloseHandler sets the handler for close messages received from the peer. +// The code argument to h is the received close code or CloseNoStatusReceived +// if the close message is empty. The default close handler sends a close +// message back to the peer. +// +// The handler function is called from the NextReader, ReadMessage and message +// reader Read methods. The application must read the connection to process +// close messages as described in the section on Control Messages above. +// +// The connection read methods return a CloseError when a close message is +// received. Most applications should handle close messages as part of their +// normal error handling. Applications should only set a close handler when the +// application must perform some action before sending a close message back to +// the peer. 
+func (c *Conn) SetCloseHandler(h func(code int, text string) error) { + if h == nil { + h = func(code int, text string) error { + message := FormatCloseMessage(code, "") + c.WriteControl(CloseMessage, message, time.Now().Add(writeWait)) + return nil + } + } + c.handleClose = h +} + +// PingHandler returns the current ping handler +func (c *Conn) PingHandler() func(appData string) error { + return c.handlePing +} + +// SetPingHandler sets the handler for ping messages received from the peer. +// The appData argument to h is the PING message application data. The default +// ping handler sends a pong to the peer. +// +// The handler function is called from the NextReader, ReadMessage and message +// reader Read methods. The application must read the connection to process +// ping messages as described in the section on Control Messages above. +func (c *Conn) SetPingHandler(h func(appData string) error) { + if h == nil { + h = func(message string) error { + err := c.WriteControl(PongMessage, []byte(message), time.Now().Add(writeWait)) + if err == ErrCloseSent { + return nil + } else if e, ok := err.(net.Error); ok && e.Temporary() { + return nil + } + return err + } + } + c.handlePing = h +} + +// PongHandler returns the current pong handler +func (c *Conn) PongHandler() func(appData string) error { + return c.handlePong +} + +// SetPongHandler sets the handler for pong messages received from the peer. +// The appData argument to h is the PONG message application data. The default +// pong handler does nothing. +// +// The handler function is called from the NextReader, ReadMessage and message +// reader Read methods. The application must read the connection to process +// pong messages as described in the section on Control Messages above. +func (c *Conn) SetPongHandler(h func(appData string) error) { + if h == nil { + h = func(string) error { return nil } + } + c.handlePong = h +} + +// UnderlyingConn returns the internal net.Conn. This can be used to further +// modifications to connection specific flags. +func (c *Conn) UnderlyingConn() net.Conn { + return c.conn +} + +// EnableWriteCompression enables and disables write compression of +// subsequent text and binary messages. This function is a noop if +// compression was not negotiated with the peer. +func (c *Conn) EnableWriteCompression(enable bool) { + c.enableWriteCompression = enable +} + +// SetCompressionLevel sets the flate compression level for subsequent text and +// binary messages. This function is a noop if compression was not negotiated +// with the peer. See the compress/flate package for a description of +// compression levels. +func (c *Conn) SetCompressionLevel(level int) error { + if !isValidCompressionLevel(level) { + return errors.New("websocket: invalid compression level") + } + c.compressionLevel = level + return nil +} + +// FormatCloseMessage formats closeCode and text as a WebSocket close message. +// An empty message is returned for code CloseNoStatusReceived. +func FormatCloseMessage(closeCode int, text string) []byte { + if closeCode == CloseNoStatusReceived { + // Return empty message because it's illegal to send + // CloseNoStatusReceived. Return non-nil value in case application + // checks for nil. 
+ return []byte{} + } + buf := make([]byte, 2+len(text)) + binary.BigEndian.PutUint16(buf, uint16(closeCode)) + copy(buf[2:], text) + return buf +} diff --git a/vendor/github.com/gorilla/websocket/conn_write.go b/vendor/github.com/gorilla/websocket/conn_write.go new file mode 100644 index 000000000000..a509a21f87af --- /dev/null +++ b/vendor/github.com/gorilla/websocket/conn_write.go @@ -0,0 +1,15 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.8 + +package websocket + +import "net" + +func (c *Conn) writeBufs(bufs ...[]byte) error { + b := net.Buffers(bufs) + _, err := b.WriteTo(c.conn) + return err +} diff --git a/vendor/github.com/gorilla/websocket/conn_write_legacy.go b/vendor/github.com/gorilla/websocket/conn_write_legacy.go new file mode 100644 index 000000000000..37edaff5a578 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/conn_write_legacy.go @@ -0,0 +1,18 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.8 + +package websocket + +func (c *Conn) writeBufs(bufs ...[]byte) error { + for _, buf := range bufs { + if len(buf) > 0 { + if _, err := c.conn.Write(buf); err != nil { + return err + } + } + } + return nil +} diff --git a/vendor/github.com/gorilla/websocket/doc.go b/vendor/github.com/gorilla/websocket/doc.go new file mode 100644 index 000000000000..c6f4df8960ff --- /dev/null +++ b/vendor/github.com/gorilla/websocket/doc.go @@ -0,0 +1,227 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package websocket implements the WebSocket protocol defined in RFC 6455. +// +// Overview +// +// The Conn type represents a WebSocket connection. A server application calls +// the Upgrader.Upgrade method from an HTTP request handler to get a *Conn: +// +// var upgrader = websocket.Upgrader{ +// ReadBufferSize: 1024, +// WriteBufferSize: 1024, +// } +// +// func handler(w http.ResponseWriter, r *http.Request) { +// conn, err := upgrader.Upgrade(w, r, nil) +// if err != nil { +// log.Println(err) +// return +// } +// ... Use conn to send and receive messages. +// } +// +// Call the connection's WriteMessage and ReadMessage methods to send and +// receive messages as a slice of bytes. This snippet of code shows how to echo +// messages using these methods: +// +// for { +// messageType, p, err := conn.ReadMessage() +// if err != nil { +// log.Println(err) +// return +// } +// if err := conn.WriteMessage(messageType, p); err != nil { +// log.Println(err) +// return +// } +// } +// +// In above snippet of code, p is a []byte and messageType is an int with value +// websocket.BinaryMessage or websocket.TextMessage. +// +// An application can also send and receive messages using the io.WriteCloser +// and io.Reader interfaces. To send a message, call the connection NextWriter +// method to get an io.WriteCloser, write the message to the writer and close +// the writer when done. To receive a message, call the connection NextReader +// method to get an io.Reader and read until io.EOF is returned. 
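The ping/pong handlers, read deadline, and control-frame helpers defined in conn.go above are usually combined into a keepalive loop: the pong handler extends the read deadline while a ticker sends periodic pings. A minimal sketch, not part of the vendored file; the `pongWait`, `pingPeriod`, and `writeWait` durations and the `keepAlive` function are illustrative:

```go
package wsexample

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

const (
	pongWait   = 60 * time.Second // illustrative: how long a silent peer is tolerated
	pingPeriod = 50 * time.Second // must be shorter than pongWait
	writeWait  = 10 * time.Second // illustrative deadline for control writes
)

// keepAlive reads messages while sending periodic pings; the pong handler
// extends the read deadline so an unresponsive peer eventually times out.
func keepAlive(c *websocket.Conn) {
	c.SetReadDeadline(time.Now().Add(pongWait))
	c.SetPongHandler(func(string) error {
		return c.SetReadDeadline(time.Now().Add(pongWait))
	})

	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()
	done := make(chan struct{})

	go func() {
		defer close(done)
		for {
			if _, _, err := c.ReadMessage(); err != nil {
				// Read errors are permanent; a *CloseError carries the
				// peer's close code and text.
				if ce, ok := err.(*websocket.CloseError); ok {
					log.Printf("peer closed: %d %s", ce.Code, ce.Text)
				}
				return
			}
		}
	}()

	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			if err := c.WriteControl(websocket.PingMessage, nil, time.Now().Add(writeWait)); err != nil {
				return
			}
		}
	}
}
```

Sending the ping from a different goroutine than the reader relies on WriteControl being callable concurrently with the read methods, as documented above.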
This snippet +// shows how to echo messages using the NextWriter and NextReader methods: +// +// for { +// messageType, r, err := conn.NextReader() +// if err != nil { +// return +// } +// w, err := conn.NextWriter(messageType) +// if err != nil { +// return err +// } +// if _, err := io.Copy(w, r); err != nil { +// return err +// } +// if err := w.Close(); err != nil { +// return err +// } +// } +// +// Data Messages +// +// The WebSocket protocol distinguishes between text and binary data messages. +// Text messages are interpreted as UTF-8 encoded text. The interpretation of +// binary messages is left to the application. +// +// This package uses the TextMessage and BinaryMessage integer constants to +// identify the two data message types. The ReadMessage and NextReader methods +// return the type of the received message. The messageType argument to the +// WriteMessage and NextWriter methods specifies the type of a sent message. +// +// It is the application's responsibility to ensure that text messages are +// valid UTF-8 encoded text. +// +// Control Messages +// +// The WebSocket protocol defines three types of control messages: close, ping +// and pong. Call the connection WriteControl, WriteMessage or NextWriter +// methods to send a control message to the peer. +// +// Connections handle received close messages by calling the handler function +// set with the SetCloseHandler method and by returning a *CloseError from the +// NextReader, ReadMessage or the message Read method. The default close +// handler sends a close message to the peer. +// +// Connections handle received ping messages by calling the handler function +// set with the SetPingHandler method. The default ping handler sends a pong +// message to the peer. +// +// Connections handle received pong messages by calling the handler function +// set with the SetPongHandler method. The default pong handler does nothing. +// If an application sends ping messages, then the application should set a +// pong handler to receive the corresponding pong. +// +// The control message handler functions are called from the NextReader, +// ReadMessage and message reader Read methods. The default close and ping +// handlers can block these methods for a short time when the handler writes to +// the connection. +// +// The application must read the connection to process close, ping and pong +// messages sent from the peer. If the application is not otherwise interested +// in messages from the peer, then the application should start a goroutine to +// read and discard messages from the peer. A simple example is: +// +// func readLoop(c *websocket.Conn) { +// for { +// if _, _, err := c.NextReader(); err != nil { +// c.Close() +// break +// } +// } +// } +// +// Concurrency +// +// Connections support one concurrent reader and one concurrent writer. +// +// Applications are responsible for ensuring that no more than one goroutine +// calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, +// WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and +// that no more than one goroutine calls the read methods (NextReader, +// SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) +// concurrently. +// +// The Close and WriteControl methods can be called concurrently with all other +// methods. +// +// Origin Considerations +// +// Web browsers allow Javascript applications to open a WebSocket connection to +// any host. 
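A common way to satisfy the one-concurrent-writer rule from the Concurrency section is to route every outbound message through a single goroutine. A sketch under that assumption; the channel-based `writePump` and the 10-second close deadline are illustrative:

```go
package wsexample

import (
	"time"

	"github.com/gorilla/websocket"
)

// writePump is the only goroutine that calls write methods on c, which is how
// an application can honor the one-concurrent-writer rule described above.
func writePump(c *websocket.Conn, send <-chan []byte) {
	for msg := range send {
		if err := c.WriteMessage(websocket.TextMessage, msg); err != nil {
			return // write errors are permanent; stop the pump
		}
	}
	// Channel closed by the application: send a normal-closure close frame.
	c.WriteControl(websocket.CloseMessage,
		websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""),
		time.Now().Add(10*time.Second))
}
```

Other goroutines then publish with `send <- payload` (or a non-blocking select) instead of touching the connection directly.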
It's up to the server to enforce an origin policy using the Origin +// request header sent by the browser. +// +// The Upgrader calls the function specified in the CheckOrigin field to check +// the origin. If the CheckOrigin function returns false, then the Upgrade +// method fails the WebSocket handshake with HTTP status 403. +// +// If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail +// the handshake if the Origin request header is present and the Origin host is +// not equal to the Host request header. +// +// The deprecated package-level Upgrade function does not perform origin +// checking. The application is responsible for checking the Origin header +// before calling the Upgrade function. +// +// Buffers +// +// Connections buffer network input and output to reduce the number +// of system calls when reading or writing messages. +// +// Write buffers are also used for constructing WebSocket frames. See RFC 6455, +// Section 5 for a discussion of message framing. A WebSocket frame header is +// written to the network each time a write buffer is flushed to the network. +// Decreasing the size of the write buffer can increase the amount of framing +// overhead on the connection. +// +// The buffer sizes in bytes are specified by the ReadBufferSize and +// WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default +// size of 4096 when a buffer size field is set to zero. The Upgrader reuses +// buffers created by the HTTP server when a buffer size field is set to zero. +// The HTTP server buffers have a size of 4096 at the time of this writing. +// +// The buffer sizes do not limit the size of a message that can be read or +// written by a connection. +// +// Buffers are held for the lifetime of the connection by default. If the +// Dialer or Upgrader WriteBufferPool field is set, then a connection holds the +// write buffer only when writing a message. +// +// Applications should tune the buffer sizes to balance memory use and +// performance. Increasing the buffer size uses more memory, but can reduce the +// number of system calls to read or write the network. In the case of writing, +// increasing the buffer size can reduce the number of frame headers written to +// the network. +// +// Some guidelines for setting buffer parameters are: +// +// Limit the buffer sizes to the maximum expected message size. Buffers larger +// than the largest message do not provide any benefit. +// +// Depending on the distribution of message sizes, setting the buffer size to +// to a value less than the maximum expected message size can greatly reduce +// memory use with a small impact on performance. Here's an example: If 99% of +// the messages are smaller than 256 bytes and the maximum message size is 512 +// bytes, then a buffer size of 256 bytes will result in 1.01 more system calls +// than a buffer size of 512 bytes. The memory savings is 50%. +// +// A write buffer pool is useful when the application has a modest number +// writes over a large number of connections. when buffers are pooled, a larger +// buffer size has a reduced impact on total memory use and has the benefit of +// reducing system calls and frame overhead. +// +// Compression EXPERIMENTAL +// +// Per message compression extensions (RFC 7692) are experimentally supported +// by this package in a limited capacity. Setting the EnableCompression option +// to true in Dialer or Upgrader will attempt to negotiate per message deflate +// support. 
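Applied to the buffer guidance above, a server expecting small messages over many mostly idle connections might configure its Upgrader and Dialer roughly as below. The 512-byte sizes are illustrative, and using a `*sync.Pool` for WriteBufferPool assumes it satisfies the package's BufferPool interface:

```go
package main

import (
	"log"
	"net/http"
	"sync"

	"github.com/gorilla/websocket"
)

// Illustrative sizing: most messages fit in 512 bytes, and write buffers are
// pooled because the server holds many mostly-idle connections.
var upgrader = websocket.Upgrader{
	ReadBufferSize:  512,
	WriteBufferSize: 512,
	WriteBufferPool: &sync.Pool{}, // assumed to satisfy the BufferPool interface
}

// The client side exposes the same sizing knobs on the Dialer.
var dialer = websocket.Dialer{
	ReadBufferSize:  512,
	WriteBufferSize: 512,
}

func handler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return // Upgrade already wrote the HTTP error response
	}
	defer conn.Close()
	// ... use conn to send and receive messages.
}

func main() {
	http.HandleFunc("/ws", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```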
+// +// var upgrader = websocket.Upgrader{ +// EnableCompression: true, +// } +// +// If compression was successfully negotiated with the connection's peer, any +// message received in compressed form will be automatically decompressed. +// All Read methods will return uncompressed bytes. +// +// Per message compression of messages written to a connection can be enabled +// or disabled by calling the corresponding Conn method: +// +// conn.EnableWriteCompression(false) +// +// Currently this package does not support compression with "context takeover". +// This means that messages must be compressed and decompressed in isolation, +// without retaining sliding window or dictionary state across messages. For +// more details refer to RFC 7692. +// +// Use of compression is experimental and may result in decreased performance. +package websocket diff --git a/vendor/github.com/gorilla/websocket/go.mod b/vendor/github.com/gorilla/websocket/go.mod new file mode 100644 index 000000000000..1a7afd5028a7 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/go.mod @@ -0,0 +1,3 @@ +module github.com/gorilla/websocket + +go 1.12 diff --git a/vendor/github.com/gorilla/websocket/go.sum b/vendor/github.com/gorilla/websocket/go.sum new file mode 100644 index 000000000000..cf4fbbaa07ac --- /dev/null +++ b/vendor/github.com/gorilla/websocket/go.sum @@ -0,0 +1,2 @@ +github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q= +github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= diff --git a/vendor/github.com/gorilla/websocket/join.go b/vendor/github.com/gorilla/websocket/join.go new file mode 100644 index 000000000000..c64f8c82901a --- /dev/null +++ b/vendor/github.com/gorilla/websocket/join.go @@ -0,0 +1,42 @@ +// Copyright 2019 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "io" + "strings" +) + +// JoinMessages concatenates received messages to create a single io.Reader. +// The string term is appended to each message. The returned reader does not +// support concurrent calls to the Read method. +func JoinMessages(c *Conn, term string) io.Reader { + return &joinReader{c: c, term: term} +} + +type joinReader struct { + c *Conn + term string + r io.Reader +} + +func (r *joinReader) Read(p []byte) (int, error) { + if r.r == nil { + var err error + _, r.r, err = r.c.NextReader() + if err != nil { + return 0, err + } + if r.term != "" { + r.r = io.MultiReader(r.r, strings.NewReader(r.term)) + } + } + n, err := r.r.Read(p) + if err == io.EOF { + err = nil + r.r = nil + } + return n, err +} diff --git a/vendor/github.com/gorilla/websocket/json.go b/vendor/github.com/gorilla/websocket/json.go new file mode 100644 index 000000000000..dc2c1f6415ff --- /dev/null +++ b/vendor/github.com/gorilla/websocket/json.go @@ -0,0 +1,60 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "encoding/json" + "io" +) + +// WriteJSON writes the JSON encoding of v as a message. +// +// Deprecated: Use c.WriteJSON instead. +func WriteJSON(c *Conn, v interface{}) error { + return c.WriteJSON(v) +} + +// WriteJSON writes the JSON encoding of v as a message. +// +// See the documentation for encoding/json Marshal for details about the +// conversion of Go values to JSON. 
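JoinMessages, defined in join.go above, turns successive messages into one io.Reader with a terminator appended after each message, which pairs naturally with line-oriented readers. A sketch; the `readLines` helper is illustrative, and the joined reader must not be read from multiple goroutines:

```go
package wsexample

import (
	"bufio"
	"log"

	"github.com/gorilla/websocket"
)

// readLines treats each received message as one line by joining messages
// with "\n" and scanning the combined stream.
func readLines(c *websocket.Conn) {
	r := websocket.JoinMessages(c, "\n")
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		log.Printf("line: %q", sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Println("read error:", err)
	}
}
```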
+func (c *Conn) WriteJSON(v interface{}) error { + w, err := c.NextWriter(TextMessage) + if err != nil { + return err + } + err1 := json.NewEncoder(w).Encode(v) + err2 := w.Close() + if err1 != nil { + return err1 + } + return err2 +} + +// ReadJSON reads the next JSON-encoded message from the connection and stores +// it in the value pointed to by v. +// +// Deprecated: Use c.ReadJSON instead. +func ReadJSON(c *Conn, v interface{}) error { + return c.ReadJSON(v) +} + +// ReadJSON reads the next JSON-encoded message from the connection and stores +// it in the value pointed to by v. +// +// See the documentation for the encoding/json Unmarshal function for details +// about the conversion of JSON to a Go value. +func (c *Conn) ReadJSON(v interface{}) error { + _, r, err := c.NextReader() + if err != nil { + return err + } + err = json.NewDecoder(r).Decode(v) + if err == io.EOF { + // One value is expected in the message. + err = io.ErrUnexpectedEOF + } + return err +} diff --git a/vendor/github.com/gorilla/websocket/mask.go b/vendor/github.com/gorilla/websocket/mask.go new file mode 100644 index 000000000000..577fce9efd72 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/mask.go @@ -0,0 +1,54 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. Use of +// this source code is governed by a BSD-style license that can be found in the +// LICENSE file. + +// +build !appengine + +package websocket + +import "unsafe" + +const wordSize = int(unsafe.Sizeof(uintptr(0))) + +func maskBytes(key [4]byte, pos int, b []byte) int { + // Mask one byte at a time for small buffers. + if len(b) < 2*wordSize { + for i := range b { + b[i] ^= key[pos&3] + pos++ + } + return pos & 3 + } + + // Mask one byte at a time to word boundary. + if n := int(uintptr(unsafe.Pointer(&b[0]))) % wordSize; n != 0 { + n = wordSize - n + for i := range b[:n] { + b[i] ^= key[pos&3] + pos++ + } + b = b[n:] + } + + // Create aligned word size key. + var k [wordSize]byte + for i := range k { + k[i] = key[(pos+i)&3] + } + kw := *(*uintptr)(unsafe.Pointer(&k)) + + // Mask one word at a time. + n := (len(b) / wordSize) * wordSize + for i := 0; i < n; i += wordSize { + *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&b[0])) + uintptr(i))) ^= kw + } + + // Mask one byte at a time for remaining bytes. + b = b[n:] + for i := range b { + b[i] ^= key[pos&3] + pos++ + } + + return pos & 3 +} diff --git a/vendor/github.com/gorilla/websocket/mask_safe.go b/vendor/github.com/gorilla/websocket/mask_safe.go new file mode 100644 index 000000000000..2aac060e52e7 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/mask_safe.go @@ -0,0 +1,15 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. Use of +// this source code is governed by a BSD-style license that can be found in the +// LICENSE file. + +// +build appengine + +package websocket + +func maskBytes(key [4]byte, pos int, b []byte) int { + for i := range b { + b[i] ^= key[pos&3] + pos++ + } + return pos & 3 +} diff --git a/vendor/github.com/gorilla/websocket/prepared.go b/vendor/github.com/gorilla/websocket/prepared.go new file mode 100644 index 000000000000..74ec565d2c38 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/prepared.go @@ -0,0 +1,102 @@ +// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
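A round trip through the ReadJSON/WriteJSON helpers above might look like the following; the `Message` type and `echoJSON` function are illustrative:

```go
package wsexample

import (
	"log"

	"github.com/gorilla/websocket"
)

// Message is an illustrative payload type; any JSON-encodable value works.
type Message struct {
	Op   string `json:"op"`
	Body string `json:"body"`
}

func echoJSON(c *websocket.Conn) error {
	var in Message
	// ReadJSON consumes exactly one JSON value per websocket message.
	if err := c.ReadJSON(&in); err != nil {
		return err
	}
	log.Printf("got %s: %s", in.Op, in.Body)
	// WriteJSON encodes the value as a single text message.
	return c.WriteJSON(Message{Op: "ack", Body: in.Body})
}
```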
+ +package websocket + +import ( + "bytes" + "net" + "sync" + "time" +) + +// PreparedMessage caches on the wire representations of a message payload. +// Use PreparedMessage to efficiently send a message payload to multiple +// connections. PreparedMessage is especially useful when compression is used +// because the CPU and memory expensive compression operation can be executed +// once for a given set of compression options. +type PreparedMessage struct { + messageType int + data []byte + mu sync.Mutex + frames map[prepareKey]*preparedFrame +} + +// prepareKey defines a unique set of options to cache prepared frames in PreparedMessage. +type prepareKey struct { + isServer bool + compress bool + compressionLevel int +} + +// preparedFrame contains data in wire representation. +type preparedFrame struct { + once sync.Once + data []byte +} + +// NewPreparedMessage returns an initialized PreparedMessage. You can then send +// it to connection using WritePreparedMessage method. Valid wire +// representation will be calculated lazily only once for a set of current +// connection options. +func NewPreparedMessage(messageType int, data []byte) (*PreparedMessage, error) { + pm := &PreparedMessage{ + messageType: messageType, + frames: make(map[prepareKey]*preparedFrame), + data: data, + } + + // Prepare a plain server frame. + _, frameData, err := pm.frame(prepareKey{isServer: true, compress: false}) + if err != nil { + return nil, err + } + + // To protect against caller modifying the data argument, remember the data + // copied to the plain server frame. + pm.data = frameData[len(frameData)-len(data):] + return pm, nil +} + +func (pm *PreparedMessage) frame(key prepareKey) (int, []byte, error) { + pm.mu.Lock() + frame, ok := pm.frames[key] + if !ok { + frame = &preparedFrame{} + pm.frames[key] = frame + } + pm.mu.Unlock() + + var err error + frame.once.Do(func() { + // Prepare a frame using a 'fake' connection. + // TODO: Refactor code in conn.go to allow more direct construction of + // the frame. + mu := make(chan bool, 1) + mu <- true + var nc prepareConn + c := &Conn{ + conn: &nc, + mu: mu, + isServer: key.isServer, + compressionLevel: key.compressionLevel, + enableWriteCompression: true, + writeBuf: make([]byte, defaultWriteBufferSize+maxFrameHeaderSize), + } + if key.compress { + c.newCompressionWriter = compressNoContextTakeover + } + err = c.WriteMessage(pm.messageType, pm.data) + frame.data = nc.buf.Bytes() + }) + return pm.messageType, frame.data, err +} + +type prepareConn struct { + buf bytes.Buffer + net.Conn +} + +func (pc *prepareConn) Write(p []byte) (int, error) { return pc.buf.Write(p) } +func (pc *prepareConn) SetWriteDeadline(t time.Time) error { return nil } diff --git a/vendor/github.com/gorilla/websocket/proxy.go b/vendor/github.com/gorilla/websocket/proxy.go new file mode 100644 index 000000000000..e87a8c9f0c96 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/proxy.go @@ -0,0 +1,77 @@ +// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
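PreparedMessage, defined above, encodes a payload once per set of connection options so the cached frames can be reused across many connections, which is where the savings come from when compression is negotiated. A broadcast sketch; the `conns` slice and error handling are illustrative:

```go
package wsexample

import (
	"log"

	"github.com/gorilla/websocket"
)

// broadcast encodes the payload once and reuses the cached frames for every
// connection via WritePreparedMessage.
func broadcast(conns []*websocket.Conn, payload []byte) {
	pm, err := websocket.NewPreparedMessage(websocket.TextMessage, payload)
	if err != nil {
		log.Println("prepare:", err)
		return
	}
	for _, c := range conns {
		if err := c.WritePreparedMessage(pm); err != nil {
			log.Println("write:", err) // this connection is now unusable for writes
		}
	}
}
```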
+ +package websocket + +import ( + "bufio" + "encoding/base64" + "errors" + "net" + "net/http" + "net/url" + "strings" +) + +type netDialerFunc func(network, addr string) (net.Conn, error) + +func (fn netDialerFunc) Dial(network, addr string) (net.Conn, error) { + return fn(network, addr) +} + +func init() { + proxy_RegisterDialerType("http", func(proxyURL *url.URL, forwardDialer proxy_Dialer) (proxy_Dialer, error) { + return &httpProxyDialer{proxyURL: proxyURL, forwardDial: forwardDialer.Dial}, nil + }) +} + +type httpProxyDialer struct { + proxyURL *url.URL + forwardDial func(network, addr string) (net.Conn, error) +} + +func (hpd *httpProxyDialer) Dial(network string, addr string) (net.Conn, error) { + hostPort, _ := hostPortNoPort(hpd.proxyURL) + conn, err := hpd.forwardDial(network, hostPort) + if err != nil { + return nil, err + } + + connectHeader := make(http.Header) + if user := hpd.proxyURL.User; user != nil { + proxyUser := user.Username() + if proxyPassword, passwordSet := user.Password(); passwordSet { + credential := base64.StdEncoding.EncodeToString([]byte(proxyUser + ":" + proxyPassword)) + connectHeader.Set("Proxy-Authorization", "Basic "+credential) + } + } + + connectReq := &http.Request{ + Method: "CONNECT", + URL: &url.URL{Opaque: addr}, + Host: addr, + Header: connectHeader, + } + + if err := connectReq.Write(conn); err != nil { + conn.Close() + return nil, err + } + + // Read response. It's OK to use and discard buffered reader here becaue + // the remote server does not speak until spoken to. + br := bufio.NewReader(conn) + resp, err := http.ReadResponse(br, connectReq) + if err != nil { + conn.Close() + return nil, err + } + + if resp.StatusCode != 200 { + conn.Close() + f := strings.SplitN(resp.Status, " ", 2) + return nil, errors.New(f[1]) + } + return conn, nil +} diff --git a/vendor/github.com/gorilla/websocket/server.go b/vendor/github.com/gorilla/websocket/server.go new file mode 100644 index 000000000000..887d558918c7 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/server.go @@ -0,0 +1,363 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bufio" + "errors" + "io" + "net/http" + "net/url" + "strings" + "time" +) + +// HandshakeError describes an error with the handshake from the peer. +type HandshakeError struct { + message string +} + +func (e HandshakeError) Error() string { return e.message } + +// Upgrader specifies parameters for upgrading an HTTP connection to a +// WebSocket connection. +type Upgrader struct { + // HandshakeTimeout specifies the duration for the handshake to complete. + HandshakeTimeout time.Duration + + // ReadBufferSize and WriteBufferSize specify I/O buffer sizes in bytes. If a buffer + // size is zero, then buffers allocated by the HTTP server are used. The + // I/O buffer sizes do not limit the size of the messages that can be sent + // or received. + ReadBufferSize, WriteBufferSize int + + // WriteBufferPool is a pool of buffers for write operations. If the value + // is not set, then write buffers are allocated to the connection for the + // lifetime of the connection. + // + // A pool is most useful when the application has a modest volume of writes + // across a large number of connections. + // + // Applications should use a single pool for each unique value of + // WriteBufferSize. 
+ WriteBufferPool BufferPool + + // Subprotocols specifies the server's supported protocols in order of + // preference. If this field is not nil, then the Upgrade method negotiates a + // subprotocol by selecting the first match in this list with a protocol + // requested by the client. If there's no match, then no protocol is + // negotiated (the Sec-Websocket-Protocol header is not included in the + // handshake response). + Subprotocols []string + + // Error specifies the function for generating HTTP error responses. If Error + // is nil, then http.Error is used to generate the HTTP response. + Error func(w http.ResponseWriter, r *http.Request, status int, reason error) + + // CheckOrigin returns true if the request Origin header is acceptable. If + // CheckOrigin is nil, then a safe default is used: return false if the + // Origin request header is present and the origin host is not equal to + // request Host header. + // + // A CheckOrigin function should carefully validate the request origin to + // prevent cross-site request forgery. + CheckOrigin func(r *http.Request) bool + + // EnableCompression specify if the server should attempt to negotiate per + // message compression (RFC 7692). Setting this value to true does not + // guarantee that compression will be supported. Currently only "no context + // takeover" modes are supported. + EnableCompression bool +} + +func (u *Upgrader) returnError(w http.ResponseWriter, r *http.Request, status int, reason string) (*Conn, error) { + err := HandshakeError{reason} + if u.Error != nil { + u.Error(w, r, status, err) + } else { + w.Header().Set("Sec-Websocket-Version", "13") + http.Error(w, http.StatusText(status), status) + } + return nil, err +} + +// checkSameOrigin returns true if the origin is not set or is equal to the request host. +func checkSameOrigin(r *http.Request) bool { + origin := r.Header["Origin"] + if len(origin) == 0 { + return true + } + u, err := url.Parse(origin[0]) + if err != nil { + return false + } + return equalASCIIFold(u.Host, r.Host) +} + +func (u *Upgrader) selectSubprotocol(r *http.Request, responseHeader http.Header) string { + if u.Subprotocols != nil { + clientProtocols := Subprotocols(r) + for _, serverProtocol := range u.Subprotocols { + for _, clientProtocol := range clientProtocols { + if clientProtocol == serverProtocol { + return clientProtocol + } + } + } + } else if responseHeader != nil { + return responseHeader.Get("Sec-Websocket-Protocol") + } + return "" +} + +// Upgrade upgrades the HTTP server connection to the WebSocket protocol. +// +// The responseHeader is included in the response to the client's upgrade +// request. Use the responseHeader to specify cookies (Set-Cookie) and the +// application negotiated subprotocol (Sec-WebSocket-Protocol). +// +// If the upgrade fails, then Upgrade replies to the client with an HTTP error +// response. 
+func (u *Upgrader) Upgrade(w http.ResponseWriter, r *http.Request, responseHeader http.Header) (*Conn, error) { + const badHandshake = "websocket: the client is not using the websocket protocol: " + + if !tokenListContainsValue(r.Header, "Connection", "upgrade") { + return u.returnError(w, r, http.StatusBadRequest, badHandshake+"'upgrade' token not found in 'Connection' header") + } + + if !tokenListContainsValue(r.Header, "Upgrade", "websocket") { + return u.returnError(w, r, http.StatusBadRequest, badHandshake+"'websocket' token not found in 'Upgrade' header") + } + + if r.Method != "GET" { + return u.returnError(w, r, http.StatusMethodNotAllowed, badHandshake+"request method is not GET") + } + + if !tokenListContainsValue(r.Header, "Sec-Websocket-Version", "13") { + return u.returnError(w, r, http.StatusBadRequest, "websocket: unsupported version: 13 not found in 'Sec-Websocket-Version' header") + } + + if _, ok := responseHeader["Sec-Websocket-Extensions"]; ok { + return u.returnError(w, r, http.StatusInternalServerError, "websocket: application specific 'Sec-WebSocket-Extensions' headers are unsupported") + } + + checkOrigin := u.CheckOrigin + if checkOrigin == nil { + checkOrigin = checkSameOrigin + } + if !checkOrigin(r) { + return u.returnError(w, r, http.StatusForbidden, "websocket: request origin not allowed by Upgrader.CheckOrigin") + } + + challengeKey := r.Header.Get("Sec-Websocket-Key") + if challengeKey == "" { + return u.returnError(w, r, http.StatusBadRequest, "websocket: not a websocket handshake: 'Sec-WebSocket-Key' header is missing or blank") + } + + subprotocol := u.selectSubprotocol(r, responseHeader) + + // Negotiate PMCE + var compress bool + if u.EnableCompression { + for _, ext := range parseExtensions(r.Header) { + if ext[""] != "permessage-deflate" { + continue + } + compress = true + break + } + } + + h, ok := w.(http.Hijacker) + if !ok { + return u.returnError(w, r, http.StatusInternalServerError, "websocket: response does not implement http.Hijacker") + } + var brw *bufio.ReadWriter + netConn, brw, err := h.Hijack() + if err != nil { + return u.returnError(w, r, http.StatusInternalServerError, err.Error()) + } + + if brw.Reader.Buffered() > 0 { + netConn.Close() + return nil, errors.New("websocket: client sent data before handshake is complete") + } + + var br *bufio.Reader + if u.ReadBufferSize == 0 && bufioReaderSize(netConn, brw.Reader) > 256 { + // Reuse hijacked buffered reader as connection reader. + br = brw.Reader + } + + buf := bufioWriterBuffer(netConn, brw.Writer) + + var writeBuf []byte + if u.WriteBufferPool == nil && u.WriteBufferSize == 0 && len(buf) >= maxFrameHeaderSize+256 { + // Reuse hijacked write buffer as connection buffer. + writeBuf = buf + } + + c := newConn(netConn, true, u.ReadBufferSize, u.WriteBufferSize, u.WriteBufferPool, br, writeBuf) + c.subprotocol = subprotocol + + if compress { + c.newCompressionWriter = compressNoContextTakeover + c.newDecompressionReader = decompressNoContextTakeover + } + + // Use larger of hijacked buffer and connection write buffer for header. + p := buf + if len(c.writeBuf) > len(p) { + p = c.writeBuf + } + p = p[:0] + + p = append(p, "HTTP/1.1 101 Switching Protocols\r\nUpgrade: websocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: "...) + p = append(p, computeAcceptKey(challengeKey)...) + p = append(p, "\r\n"...) + if c.subprotocol != "" { + p = append(p, "Sec-WebSocket-Protocol: "...) + p = append(p, c.subprotocol...) + p = append(p, "\r\n"...) 
+ } + if compress { + p = append(p, "Sec-WebSocket-Extensions: permessage-deflate; server_no_context_takeover; client_no_context_takeover\r\n"...) + } + for k, vs := range responseHeader { + if k == "Sec-Websocket-Protocol" { + continue + } + for _, v := range vs { + p = append(p, k...) + p = append(p, ": "...) + for i := 0; i < len(v); i++ { + b := v[i] + if b <= 31 { + // prevent response splitting. + b = ' ' + } + p = append(p, b) + } + p = append(p, "\r\n"...) + } + } + p = append(p, "\r\n"...) + + // Clear deadlines set by HTTP server. + netConn.SetDeadline(time.Time{}) + + if u.HandshakeTimeout > 0 { + netConn.SetWriteDeadline(time.Now().Add(u.HandshakeTimeout)) + } + if _, err = netConn.Write(p); err != nil { + netConn.Close() + return nil, err + } + if u.HandshakeTimeout > 0 { + netConn.SetWriteDeadline(time.Time{}) + } + + return c, nil +} + +// Upgrade upgrades the HTTP server connection to the WebSocket protocol. +// +// Deprecated: Use websocket.Upgrader instead. +// +// Upgrade does not perform origin checking. The application is responsible for +// checking the Origin header before calling Upgrade. An example implementation +// of the same origin policy check is: +// +// if req.Header.Get("Origin") != "http://"+req.Host { +// http.Error(w, "Origin not allowed", http.StatusForbidden) +// return +// } +// +// If the endpoint supports subprotocols, then the application is responsible +// for negotiating the protocol used on the connection. Use the Subprotocols() +// function to get the subprotocols requested by the client. Use the +// Sec-Websocket-Protocol response header to specify the subprotocol selected +// by the application. +// +// The responseHeader is included in the response to the client's upgrade +// request. Use the responseHeader to specify cookies (Set-Cookie) and the +// negotiated subprotocol (Sec-Websocket-Protocol). +// +// The connection buffers IO to the underlying network connection. The +// readBufSize and writeBufSize parameters specify the size of the buffers to +// use. Messages can be larger than the buffers. +// +// If the request is not a valid WebSocket handshake, then Upgrade returns an +// error of type HandshakeError. Applications should handle this error by +// replying to the client with an HTTP error response. +func Upgrade(w http.ResponseWriter, r *http.Request, responseHeader http.Header, readBufSize, writeBufSize int) (*Conn, error) { + u := Upgrader{ReadBufferSize: readBufSize, WriteBufferSize: writeBufSize} + u.Error = func(w http.ResponseWriter, r *http.Request, status int, reason error) { + // don't return errors to maintain backwards compatibility + } + u.CheckOrigin = func(r *http.Request) bool { + // allow all connections by default + return true + } + return u.Upgrade(w, r, responseHeader) +} + +// Subprotocols returns the subprotocols requested by the client in the +// Sec-Websocket-Protocol header. +func Subprotocols(r *http.Request) []string { + h := strings.TrimSpace(r.Header.Get("Sec-Websocket-Protocol")) + if h == "" { + return nil + } + protocols := strings.Split(h, ",") + for i := range protocols { + protocols[i] = strings.TrimSpace(protocols[i]) + } + return protocols +} + +// IsWebSocketUpgrade returns true if the client requested upgrade to the +// WebSocket protocol. +func IsWebSocketUpgrade(r *http.Request) bool { + return tokenListContainsValue(r.Header, "Connection", "upgrade") && + tokenListContainsValue(r.Header, "Upgrade", "websocket") +} + +// bufioReaderSize size returns the size of a bufio.Reader. 
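Putting the server-side pieces above together, a handler typically configures an Upgrader with an origin check and a subprotocol preference list, then calls Upgrade from an http.HandlerFunc. A sketch; the origin, protocol names, and listen address are illustrative, and Conn.Subprotocol is assumed to be the accessor for the negotiated protocol:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Upgrade negotiates the first of these that the client also requested
	// (compare with Subprotocols(r) above); otherwise none is selected.
	Subprotocols: []string{"v2.chat", "v1.chat"},
	// Illustrative allow-list; the default CheckOrigin only accepts
	// same-origin requests.
	CheckOrigin: func(r *http.Request) bool {
		return r.Header.Get("Origin") == "https://chat.example.com"
	},
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err) // Upgrade already replied with an HTTP error
		return
	}
	defer conn.Close()
	log.Println("negotiated subprotocol:", conn.Subprotocol())
}

func main() {
	http.HandleFunc("/ws", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```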
+func bufioReaderSize(originalReader io.Reader, br *bufio.Reader) int { + // This code assumes that peek on a reset reader returns + // bufio.Reader.buf[:0]. + // TODO: Use bufio.Reader.Size() after Go 1.10 + br.Reset(originalReader) + if p, err := br.Peek(0); err == nil { + return cap(p) + } + return 0 +} + +// writeHook is an io.Writer that records the last slice passed to it vio +// io.Writer.Write. +type writeHook struct { + p []byte +} + +func (wh *writeHook) Write(p []byte) (int, error) { + wh.p = p + return len(p), nil +} + +// bufioWriterBuffer grabs the buffer from a bufio.Writer. +func bufioWriterBuffer(originalWriter io.Writer, bw *bufio.Writer) []byte { + // This code assumes that bufio.Writer.buf[:1] is passed to the + // bufio.Writer's underlying writer. + var wh writeHook + bw.Reset(&wh) + bw.WriteByte(0) + bw.Flush() + + bw.Reset(originalWriter) + + return wh.p[:cap(wh.p)] +} diff --git a/vendor/github.com/gorilla/websocket/trace.go b/vendor/github.com/gorilla/websocket/trace.go new file mode 100644 index 000000000000..834f122a00db --- /dev/null +++ b/vendor/github.com/gorilla/websocket/trace.go @@ -0,0 +1,19 @@ +// +build go1.8 + +package websocket + +import ( + "crypto/tls" + "net/http/httptrace" +) + +func doHandshakeWithTrace(trace *httptrace.ClientTrace, tlsConn *tls.Conn, cfg *tls.Config) error { + if trace.TLSHandshakeStart != nil { + trace.TLSHandshakeStart() + } + err := doHandshake(tlsConn, cfg) + if trace.TLSHandshakeDone != nil { + trace.TLSHandshakeDone(tlsConn.ConnectionState(), err) + } + return err +} diff --git a/vendor/github.com/gorilla/websocket/trace_17.go b/vendor/github.com/gorilla/websocket/trace_17.go new file mode 100644 index 000000000000..77d05a0b5748 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/trace_17.go @@ -0,0 +1,12 @@ +// +build !go1.8 + +package websocket + +import ( + "crypto/tls" + "net/http/httptrace" +) + +func doHandshakeWithTrace(trace *httptrace.ClientTrace, tlsConn *tls.Conn, cfg *tls.Config) error { + return doHandshake(tlsConn, cfg) +} diff --git a/vendor/github.com/gorilla/websocket/util.go b/vendor/github.com/gorilla/websocket/util.go new file mode 100644 index 000000000000..7bf2f66c6747 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/util.go @@ -0,0 +1,283 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "crypto/rand" + "crypto/sha1" + "encoding/base64" + "io" + "net/http" + "strings" + "unicode/utf8" +) + +var keyGUID = []byte("258EAFA5-E914-47DA-95CA-C5AB0DC85B11") + +func computeAcceptKey(challengeKey string) string { + h := sha1.New() + h.Write([]byte(challengeKey)) + h.Write(keyGUID) + return base64.StdEncoding.EncodeToString(h.Sum(nil)) +} + +func generateChallengeKey() (string, error) { + p := make([]byte, 16) + if _, err := io.ReadFull(rand.Reader, p); err != nil { + return "", err + } + return base64.StdEncoding.EncodeToString(p), nil +} + +// Token octets per RFC 2616. 
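computeAcceptKey above implements the RFC 6455 key exchange: the Sec-WebSocket-Accept value is the base64-encoded SHA-1 of the client's Sec-WebSocket-Key concatenated with a fixed GUID. A standalone illustration of the same computation (the package's own helper is unexported, so this reimplements it):

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// acceptKey reproduces the Sec-WebSocket-Accept computation used by
// computeAcceptKey above: base64(SHA-1(clientKey + fixed GUID)).
func acceptKey(clientKey string) string {
	const keyGUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
	h := sha1.New()
	h.Write([]byte(clientKey + keyGUID))
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	// The example key from RFC 6455 section 1.3 yields
	// "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".
	fmt.Println(acceptKey("dGhlIHNhbXBsZSBub25jZQ=="))
}
```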
+var isTokenOctet = [256]bool{ + '!': true, + '#': true, + '$': true, + '%': true, + '&': true, + '\'': true, + '*': true, + '+': true, + '-': true, + '.': true, + '0': true, + '1': true, + '2': true, + '3': true, + '4': true, + '5': true, + '6': true, + '7': true, + '8': true, + '9': true, + 'A': true, + 'B': true, + 'C': true, + 'D': true, + 'E': true, + 'F': true, + 'G': true, + 'H': true, + 'I': true, + 'J': true, + 'K': true, + 'L': true, + 'M': true, + 'N': true, + 'O': true, + 'P': true, + 'Q': true, + 'R': true, + 'S': true, + 'T': true, + 'U': true, + 'W': true, + 'V': true, + 'X': true, + 'Y': true, + 'Z': true, + '^': true, + '_': true, + '`': true, + 'a': true, + 'b': true, + 'c': true, + 'd': true, + 'e': true, + 'f': true, + 'g': true, + 'h': true, + 'i': true, + 'j': true, + 'k': true, + 'l': true, + 'm': true, + 'n': true, + 'o': true, + 'p': true, + 'q': true, + 'r': true, + 's': true, + 't': true, + 'u': true, + 'v': true, + 'w': true, + 'x': true, + 'y': true, + 'z': true, + '|': true, + '~': true, +} + +// skipSpace returns a slice of the string s with all leading RFC 2616 linear +// whitespace removed. +func skipSpace(s string) (rest string) { + i := 0 + for ; i < len(s); i++ { + if b := s[i]; b != ' ' && b != '\t' { + break + } + } + return s[i:] +} + +// nextToken returns the leading RFC 2616 token of s and the string following +// the token. +func nextToken(s string) (token, rest string) { + i := 0 + for ; i < len(s); i++ { + if !isTokenOctet[s[i]] { + break + } + } + return s[:i], s[i:] +} + +// nextTokenOrQuoted returns the leading token or quoted string per RFC 2616 +// and the string following the token or quoted string. +func nextTokenOrQuoted(s string) (value string, rest string) { + if !strings.HasPrefix(s, "\"") { + return nextToken(s) + } + s = s[1:] + for i := 0; i < len(s); i++ { + switch s[i] { + case '"': + return s[:i], s[i+1:] + case '\\': + p := make([]byte, len(s)-1) + j := copy(p, s[:i]) + escape := true + for i = i + 1; i < len(s); i++ { + b := s[i] + switch { + case escape: + escape = false + p[j] = b + j++ + case b == '\\': + escape = true + case b == '"': + return string(p[:j]), s[i+1:] + default: + p[j] = b + j++ + } + } + return "", "" + } + } + return "", "" +} + +// equalASCIIFold returns true if s is equal to t with ASCII case folding as +// defined in RFC 4790. +func equalASCIIFold(s, t string) bool { + for s != "" && t != "" { + sr, size := utf8.DecodeRuneInString(s) + s = s[size:] + tr, size := utf8.DecodeRuneInString(t) + t = t[size:] + if sr == tr { + continue + } + if 'A' <= sr && sr <= 'Z' { + sr = sr + 'a' - 'A' + } + if 'A' <= tr && tr <= 'Z' { + tr = tr + 'a' - 'A' + } + if sr != tr { + return false + } + } + return s == t +} + +// tokenListContainsValue returns true if the 1#token header with the given +// name contains a token equal to value with ASCII case folding. +func tokenListContainsValue(header http.Header, name string, value string) bool { +headers: + for _, s := range header[name] { + for { + var t string + t, s = nextToken(skipSpace(s)) + if t == "" { + continue headers + } + s = skipSpace(s) + if s != "" && s[0] != ',' { + continue headers + } + if equalASCIIFold(t, value) { + return true + } + if s == "" { + continue headers + } + s = s[1:] + } + } + return false +} + +// parseExtensions parses WebSocket extensions from a header. 
+func parseExtensions(header http.Header) []map[string]string { + // From RFC 6455: + // + // Sec-WebSocket-Extensions = extension-list + // extension-list = 1#extension + // extension = extension-token *( ";" extension-param ) + // extension-token = registered-token + // registered-token = token + // extension-param = token [ "=" (token | quoted-string) ] + // ;When using the quoted-string syntax variant, the value + // ;after quoted-string unescaping MUST conform to the + // ;'token' ABNF. + + var result []map[string]string +headers: + for _, s := range header["Sec-Websocket-Extensions"] { + for { + var t string + t, s = nextToken(skipSpace(s)) + if t == "" { + continue headers + } + ext := map[string]string{"": t} + for { + s = skipSpace(s) + if !strings.HasPrefix(s, ";") { + break + } + var k string + k, s = nextToken(skipSpace(s[1:])) + if k == "" { + continue headers + } + s = skipSpace(s) + var v string + if strings.HasPrefix(s, "=") { + v, s = nextTokenOrQuoted(skipSpace(s[1:])) + s = skipSpace(s) + } + if s != "" && s[0] != ',' && s[0] != ';' { + continue headers + } + ext[k] = v + } + if s != "" && s[0] != ',' { + continue headers + } + result = append(result, ext) + if s == "" { + continue headers + } + s = s[1:] + } + } + return result +} diff --git a/vendor/github.com/gorilla/websocket/x_net_proxy.go b/vendor/github.com/gorilla/websocket/x_net_proxy.go new file mode 100644 index 000000000000..2e668f6b8821 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/x_net_proxy.go @@ -0,0 +1,473 @@ +// Code generated by golang.org/x/tools/cmd/bundle. DO NOT EDIT. +//go:generate bundle -o x_net_proxy.go golang.org/x/net/proxy + +// Package proxy provides support for a variety of protocols to proxy network +// data. +// + +package websocket + +import ( + "errors" + "io" + "net" + "net/url" + "os" + "strconv" + "strings" + "sync" +) + +type proxy_direct struct{} + +// Direct is a direct proxy: one that makes network connections directly. +var proxy_Direct = proxy_direct{} + +func (proxy_direct) Dial(network, addr string) (net.Conn, error) { + return net.Dial(network, addr) +} + +// A PerHost directs connections to a default Dialer unless the host name +// requested matches one of a number of exceptions. +type proxy_PerHost struct { + def, bypass proxy_Dialer + + bypassNetworks []*net.IPNet + bypassIPs []net.IP + bypassZones []string + bypassHosts []string +} + +// NewPerHost returns a PerHost Dialer that directs connections to either +// defaultDialer or bypass, depending on whether the connection matches one of +// the configured rules. +func proxy_NewPerHost(defaultDialer, bypass proxy_Dialer) *proxy_PerHost { + return &proxy_PerHost{ + def: defaultDialer, + bypass: bypass, + } +} + +// Dial connects to the address addr on the given network through either +// defaultDialer or bypass. 
+func (p *proxy_PerHost) Dial(network, addr string) (c net.Conn, err error) { + host, _, err := net.SplitHostPort(addr) + if err != nil { + return nil, err + } + + return p.dialerForRequest(host).Dial(network, addr) +} + +func (p *proxy_PerHost) dialerForRequest(host string) proxy_Dialer { + if ip := net.ParseIP(host); ip != nil { + for _, net := range p.bypassNetworks { + if net.Contains(ip) { + return p.bypass + } + } + for _, bypassIP := range p.bypassIPs { + if bypassIP.Equal(ip) { + return p.bypass + } + } + return p.def + } + + for _, zone := range p.bypassZones { + if strings.HasSuffix(host, zone) { + return p.bypass + } + if host == zone[1:] { + // For a zone ".example.com", we match "example.com" + // too. + return p.bypass + } + } + for _, bypassHost := range p.bypassHosts { + if bypassHost == host { + return p.bypass + } + } + return p.def +} + +// AddFromString parses a string that contains comma-separated values +// specifying hosts that should use the bypass proxy. Each value is either an +// IP address, a CIDR range, a zone (*.example.com) or a host name +// (localhost). A best effort is made to parse the string and errors are +// ignored. +func (p *proxy_PerHost) AddFromString(s string) { + hosts := strings.Split(s, ",") + for _, host := range hosts { + host = strings.TrimSpace(host) + if len(host) == 0 { + continue + } + if strings.Contains(host, "/") { + // We assume that it's a CIDR address like 127.0.0.0/8 + if _, net, err := net.ParseCIDR(host); err == nil { + p.AddNetwork(net) + } + continue + } + if ip := net.ParseIP(host); ip != nil { + p.AddIP(ip) + continue + } + if strings.HasPrefix(host, "*.") { + p.AddZone(host[1:]) + continue + } + p.AddHost(host) + } +} + +// AddIP specifies an IP address that will use the bypass proxy. Note that +// this will only take effect if a literal IP address is dialed. A connection +// to a named host will never match an IP. +func (p *proxy_PerHost) AddIP(ip net.IP) { + p.bypassIPs = append(p.bypassIPs, ip) +} + +// AddNetwork specifies an IP range that will use the bypass proxy. Note that +// this will only take effect if a literal IP address is dialed. A connection +// to a named host will never match. +func (p *proxy_PerHost) AddNetwork(net *net.IPNet) { + p.bypassNetworks = append(p.bypassNetworks, net) +} + +// AddZone specifies a DNS suffix that will use the bypass proxy. A zone of +// "example.com" matches "example.com" and all of its subdomains. +func (p *proxy_PerHost) AddZone(zone string) { + if strings.HasSuffix(zone, ".") { + zone = zone[:len(zone)-1] + } + if !strings.HasPrefix(zone, ".") { + zone = "." + zone + } + p.bypassZones = append(p.bypassZones, zone) +} + +// AddHost specifies a host name that will use the bypass proxy. +func (p *proxy_PerHost) AddHost(host string) { + if strings.HasSuffix(host, ".") { + host = host[:len(host)-1] + } + p.bypassHosts = append(p.bypassHosts, host) +} + +// A Dialer is a means to establish a connection. +type proxy_Dialer interface { + // Dial connects to the given address via the proxy. + Dial(network, addr string) (c net.Conn, err error) +} + +// Auth contains authentication parameters that specific Dialers may require. +type proxy_Auth struct { + User, Password string +} + +// FromEnvironment returns the dialer specified by the proxy related variables in +// the environment. 
+func proxy_FromEnvironment() proxy_Dialer { + allProxy := proxy_allProxyEnv.Get() + if len(allProxy) == 0 { + return proxy_Direct + } + + proxyURL, err := url.Parse(allProxy) + if err != nil { + return proxy_Direct + } + proxy, err := proxy_FromURL(proxyURL, proxy_Direct) + if err != nil { + return proxy_Direct + } + + noProxy := proxy_noProxyEnv.Get() + if len(noProxy) == 0 { + return proxy + } + + perHost := proxy_NewPerHost(proxy, proxy_Direct) + perHost.AddFromString(noProxy) + return perHost +} + +// proxySchemes is a map from URL schemes to a function that creates a Dialer +// from a URL with such a scheme. +var proxy_proxySchemes map[string]func(*url.URL, proxy_Dialer) (proxy_Dialer, error) + +// RegisterDialerType takes a URL scheme and a function to generate Dialers from +// a URL with that scheme and a forwarding Dialer. Registered schemes are used +// by FromURL. +func proxy_RegisterDialerType(scheme string, f func(*url.URL, proxy_Dialer) (proxy_Dialer, error)) { + if proxy_proxySchemes == nil { + proxy_proxySchemes = make(map[string]func(*url.URL, proxy_Dialer) (proxy_Dialer, error)) + } + proxy_proxySchemes[scheme] = f +} + +// FromURL returns a Dialer given a URL specification and an underlying +// Dialer for it to make network requests. +func proxy_FromURL(u *url.URL, forward proxy_Dialer) (proxy_Dialer, error) { + var auth *proxy_Auth + if u.User != nil { + auth = new(proxy_Auth) + auth.User = u.User.Username() + if p, ok := u.User.Password(); ok { + auth.Password = p + } + } + + switch u.Scheme { + case "socks5": + return proxy_SOCKS5("tcp", u.Host, auth, forward) + } + + // If the scheme doesn't match any of the built-in schemes, see if it + // was registered by another package. + if proxy_proxySchemes != nil { + if f, ok := proxy_proxySchemes[u.Scheme]; ok { + return f(u, forward) + } + } + + return nil, errors.New("proxy: unknown scheme: " + u.Scheme) +} + +var ( + proxy_allProxyEnv = &proxy_envOnce{ + names: []string{"ALL_PROXY", "all_proxy"}, + } + proxy_noProxyEnv = &proxy_envOnce{ + names: []string{"NO_PROXY", "no_proxy"}, + } +) + +// envOnce looks up an environment variable (optionally by multiple +// names) once. It mitigates expensive lookups on some platforms +// (e.g. Windows). +// (Borrowed from net/http/transport.go) +type proxy_envOnce struct { + names []string + once sync.Once + val string +} + +func (e *proxy_envOnce) Get() string { + e.once.Do(e.init) + return e.val +} + +func (e *proxy_envOnce) init() { + for _, n := range e.names { + e.val = os.Getenv(n) + if e.val != "" { + return + } + } +} + +// SOCKS5 returns a Dialer that makes SOCKSv5 connections to the given address +// with an optional username and password. See RFC 1928 and RFC 1929. 
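x_net_proxy.go is a bundled copy of golang.org/x/net/proxy, so the identifiers above are unexported in this package; the same environment-driven behavior can be exercised through the upstream package. A sketch assuming a local SOCKS5 proxy at 127.0.0.1:1080; the addresses and NO_PROXY entries are illustrative:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/net/proxy"
)

func main() {
	// ALL_PROXY selects the proxy and NO_PROXY lists bypass hosts, mirroring
	// proxy_FromEnvironment above.
	os.Setenv("ALL_PROXY", "socks5://127.0.0.1:1080")
	os.Setenv("NO_PROXY", "localhost,*.internal.example.com")

	dialer := proxy.FromEnvironment()
	conn, err := dialer.Dial("tcp", "example.com:443")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("connected via", conn.RemoteAddr())
}
```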
+func proxy_SOCKS5(network, addr string, auth *proxy_Auth, forward proxy_Dialer) (proxy_Dialer, error) { + s := &proxy_socks5{ + network: network, + addr: addr, + forward: forward, + } + if auth != nil { + s.user = auth.User + s.password = auth.Password + } + + return s, nil +} + +type proxy_socks5 struct { + user, password string + network, addr string + forward proxy_Dialer +} + +const proxy_socks5Version = 5 + +const ( + proxy_socks5AuthNone = 0 + proxy_socks5AuthPassword = 2 +) + +const proxy_socks5Connect = 1 + +const ( + proxy_socks5IP4 = 1 + proxy_socks5Domain = 3 + proxy_socks5IP6 = 4 +) + +var proxy_socks5Errors = []string{ + "", + "general failure", + "connection forbidden", + "network unreachable", + "host unreachable", + "connection refused", + "TTL expired", + "command not supported", + "address type not supported", +} + +// Dial connects to the address addr on the given network via the SOCKS5 proxy. +func (s *proxy_socks5) Dial(network, addr string) (net.Conn, error) { + switch network { + case "tcp", "tcp6", "tcp4": + default: + return nil, errors.New("proxy: no support for SOCKS5 proxy connections of type " + network) + } + + conn, err := s.forward.Dial(s.network, s.addr) + if err != nil { + return nil, err + } + if err := s.connect(conn, addr); err != nil { + conn.Close() + return nil, err + } + return conn, nil +} + +// connect takes an existing connection to a socks5 proxy server, +// and commands the server to extend that connection to target, +// which must be a canonical address with a host and port. +func (s *proxy_socks5) connect(conn net.Conn, target string) error { + host, portStr, err := net.SplitHostPort(target) + if err != nil { + return err + } + + port, err := strconv.Atoi(portStr) + if err != nil { + return errors.New("proxy: failed to parse port number: " + portStr) + } + if port < 1 || port > 0xffff { + return errors.New("proxy: port number out of range: " + portStr) + } + + // the size here is just an estimate + buf := make([]byte, 0, 6+len(host)) + + buf = append(buf, proxy_socks5Version) + if len(s.user) > 0 && len(s.user) < 256 && len(s.password) < 256 { + buf = append(buf, 2 /* num auth methods */, proxy_socks5AuthNone, proxy_socks5AuthPassword) + } else { + buf = append(buf, 1 /* num auth methods */, proxy_socks5AuthNone) + } + + if _, err := conn.Write(buf); err != nil { + return errors.New("proxy: failed to write greeting to SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + if _, err := io.ReadFull(conn, buf[:2]); err != nil { + return errors.New("proxy: failed to read greeting from SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + if buf[0] != 5 { + return errors.New("proxy: SOCKS5 proxy at " + s.addr + " has unexpected version " + strconv.Itoa(int(buf[0]))) + } + if buf[1] == 0xff { + return errors.New("proxy: SOCKS5 proxy at " + s.addr + " requires authentication") + } + + // See RFC 1929 + if buf[1] == proxy_socks5AuthPassword { + buf = buf[:0] + buf = append(buf, 1 /* password protocol version */) + buf = append(buf, uint8(len(s.user))) + buf = append(buf, s.user...) + buf = append(buf, uint8(len(s.password))) + buf = append(buf, s.password...) 
+ + if _, err := conn.Write(buf); err != nil { + return errors.New("proxy: failed to write authentication request to SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + if _, err := io.ReadFull(conn, buf[:2]); err != nil { + return errors.New("proxy: failed to read authentication reply from SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + if buf[1] != 0 { + return errors.New("proxy: SOCKS5 proxy at " + s.addr + " rejected username/password") + } + } + + buf = buf[:0] + buf = append(buf, proxy_socks5Version, proxy_socks5Connect, 0 /* reserved */) + + if ip := net.ParseIP(host); ip != nil { + if ip4 := ip.To4(); ip4 != nil { + buf = append(buf, proxy_socks5IP4) + ip = ip4 + } else { + buf = append(buf, proxy_socks5IP6) + } + buf = append(buf, ip...) + } else { + if len(host) > 255 { + return errors.New("proxy: destination host name too long: " + host) + } + buf = append(buf, proxy_socks5Domain) + buf = append(buf, byte(len(host))) + buf = append(buf, host...) + } + buf = append(buf, byte(port>>8), byte(port)) + + if _, err := conn.Write(buf); err != nil { + return errors.New("proxy: failed to write connect request to SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + if _, err := io.ReadFull(conn, buf[:4]); err != nil { + return errors.New("proxy: failed to read connect reply from SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + failure := "unknown error" + if int(buf[1]) < len(proxy_socks5Errors) { + failure = proxy_socks5Errors[buf[1]] + } + + if len(failure) > 0 { + return errors.New("proxy: SOCKS5 proxy at " + s.addr + " failed to connect: " + failure) + } + + bytesToDiscard := 0 + switch buf[3] { + case proxy_socks5IP4: + bytesToDiscard = net.IPv4len + case proxy_socks5IP6: + bytesToDiscard = net.IPv6len + case proxy_socks5Domain: + _, err := io.ReadFull(conn, buf[:1]) + if err != nil { + return errors.New("proxy: failed to read domain length from SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + bytesToDiscard = int(buf[0]) + default: + return errors.New("proxy: got unknown address type " + strconv.Itoa(int(buf[3])) + " from SOCKS5 proxy at " + s.addr) + } + + if cap(buf) < bytesToDiscard { + buf = make([]byte, bytesToDiscard) + } else { + buf = buf[:bytesToDiscard] + } + if _, err := io.ReadFull(conn, buf); err != nil { + return errors.New("proxy: failed to read address from SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + // Also need to discard the port number + if _, err := io.ReadFull(conn, buf[:2]); err != nil { + return errors.New("proxy: failed to read port from SOCKS5 proxy at " + s.addr + ": " + err.Error()) + } + + return nil +} diff --git a/vendor/golang.org/x/xerrors/LICENSE b/vendor/golang.org/x/xerrors/LICENSE new file mode 100644 index 000000000000..e4a47e17f143 --- /dev/null +++ b/vendor/golang.org/x/xerrors/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2019 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. 
nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/xerrors/PATENTS b/vendor/golang.org/x/xerrors/PATENTS new file mode 100644 index 000000000000..733099041f84 --- /dev/null +++ b/vendor/golang.org/x/xerrors/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/xerrors/README b/vendor/golang.org/x/xerrors/README new file mode 100644 index 000000000000..aac7867a560b --- /dev/null +++ b/vendor/golang.org/x/xerrors/README @@ -0,0 +1,2 @@ +This repository holds the transition packages for the new Go 1.13 error values. +See golang.org/design/29934-error-values. diff --git a/vendor/golang.org/x/xerrors/adaptor.go b/vendor/golang.org/x/xerrors/adaptor.go new file mode 100644 index 000000000000..4317f2483313 --- /dev/null +++ b/vendor/golang.org/x/xerrors/adaptor.go @@ -0,0 +1,193 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "bytes" + "fmt" + "io" + "reflect" + "strconv" +) + +// FormatError calls the FormatError method of f with an errors.Printer +// configured according to s and verb, and writes the result to s. 
+func FormatError(f Formatter, s fmt.State, verb rune) { + // Assuming this function is only called from the Format method, and given + // that FormatError takes precedence over Format, it cannot be called from + // any package that supports errors.Formatter. It is therefore safe to + // disregard that State may be a specific printer implementation and use one + // of our choice instead. + + // limitations: does not support printing error as Go struct. + + var ( + sep = " " // separator before next error + p = &state{State: s} + direct = true + ) + + var err error = f + + switch verb { + // Note that this switch must match the preference order + // for ordinary string printing (%#v before %+v, and so on). + + case 'v': + if s.Flag('#') { + if stringer, ok := err.(fmt.GoStringer); ok { + io.WriteString(&p.buf, stringer.GoString()) + goto exit + } + // proceed as if it were %v + } else if s.Flag('+') { + p.printDetail = true + sep = "\n - " + } + case 's': + case 'q', 'x', 'X': + // Use an intermediate buffer in the rare cases that precision, + // truncation, or one of the alternative verbs (q, x, and X) are + // specified. + direct = false + + default: + p.buf.WriteString("%!") + p.buf.WriteRune(verb) + p.buf.WriteByte('(') + switch { + case err != nil: + p.buf.WriteString(reflect.TypeOf(f).String()) + default: + p.buf.WriteString("") + } + p.buf.WriteByte(')') + io.Copy(s, &p.buf) + return + } + +loop: + for { + switch v := err.(type) { + case Formatter: + err = v.FormatError((*printer)(p)) + case fmt.Formatter: + v.Format(p, 'v') + break loop + default: + io.WriteString(&p.buf, v.Error()) + break loop + } + if err == nil { + break + } + if p.needColon || !p.printDetail { + p.buf.WriteByte(':') + p.needColon = false + } + p.buf.WriteString(sep) + p.inDetail = false + p.needNewline = false + } + +exit: + width, okW := s.Width() + prec, okP := s.Precision() + + if !direct || (okW && width > 0) || okP { + // Construct format string from State s. + format := []byte{'%'} + if s.Flag('-') { + format = append(format, '-') + } + if s.Flag('+') { + format = append(format, '+') + } + if s.Flag(' ') { + format = append(format, ' ') + } + if okW { + format = strconv.AppendInt(format, int64(width), 10) + } + if okP { + format = append(format, '.') + format = strconv.AppendInt(format, int64(prec), 10) + } + format = append(format, string(verb)...) + fmt.Fprintf(s, string(format), p.buf.String()) + } else { + io.Copy(s, &p.buf) + } +} + +var detailSep = []byte("\n ") + +// state tracks error printing state. It implements fmt.State. +type state struct { + fmt.State + buf bytes.Buffer + + printDetail bool + inDetail bool + needColon bool + needNewline bool +} + +func (s *state) Write(b []byte) (n int, err error) { + if s.printDetail { + if len(b) == 0 { + return 0, nil + } + if s.inDetail && s.needColon { + s.needNewline = true + if b[0] == '\n' { + b = b[1:] + } + } + k := 0 + for i, c := range b { + if s.needNewline { + if s.inDetail && s.needColon { + s.buf.WriteByte(':') + s.needColon = false + } + s.buf.Write(detailSep) + s.needNewline = false + } + if c == '\n' { + s.buf.Write(b[k:i]) + k = i + 1 + s.needNewline = true + } + } + s.buf.Write(b[k:]) + if !s.inDetail { + s.needColon = true + } + } else if !s.inDetail { + s.buf.Write(b) + } + return len(b), nil +} + +// printer wraps a state to implement an xerrors.Printer. +type printer state + +func (s *printer) Print(args ...interface{}) { + if !s.inDetail || s.printDetail { + fmt.Fprint((*state)(s), args...) 
+ } +} + +func (s *printer) Printf(format string, args ...interface{}) { + if !s.inDetail || s.printDetail { + fmt.Fprintf((*state)(s), format, args...) + } +} + +func (s *printer) Detail() bool { + s.inDetail = true + return s.printDetail +} diff --git a/vendor/golang.org/x/xerrors/codereview.cfg b/vendor/golang.org/x/xerrors/codereview.cfg new file mode 100644 index 000000000000..3f8b14b64e83 --- /dev/null +++ b/vendor/golang.org/x/xerrors/codereview.cfg @@ -0,0 +1 @@ +issuerepo: golang/go diff --git a/vendor/golang.org/x/xerrors/doc.go b/vendor/golang.org/x/xerrors/doc.go new file mode 100644 index 000000000000..eef99d9d54d7 --- /dev/null +++ b/vendor/golang.org/x/xerrors/doc.go @@ -0,0 +1,22 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package xerrors implements functions to manipulate errors. +// +// This package is based on the Go 2 proposal for error values: +// https://golang.org/design/29934-error-values +// +// These functions were incorporated into the standard library's errors package +// in Go 1.13: +// - Is +// - As +// - Unwrap +// +// Also, Errorf's %w verb was incorporated into fmt.Errorf. +// +// Use this package to get equivalent behavior in all supported Go versions. +// +// No other features of this package were included in Go 1.13, and at present +// there are no plans to include any of them. +package xerrors // import "golang.org/x/xerrors" diff --git a/vendor/golang.org/x/xerrors/errors.go b/vendor/golang.org/x/xerrors/errors.go new file mode 100644 index 000000000000..e88d3772d861 --- /dev/null +++ b/vendor/golang.org/x/xerrors/errors.go @@ -0,0 +1,33 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import "fmt" + +// errorString is a trivial implementation of error. +type errorString struct { + s string + frame Frame +} + +// New returns an error that formats as the given text. +// +// The returned error contains a Frame set to the caller's location and +// implements Formatter to show this information when printed with details. +func New(text string) error { + return &errorString{text, Caller(1)} +} + +func (e *errorString) Error() string { + return e.s +} + +func (e *errorString) Format(s fmt.State, v rune) { FormatError(e, s, v) } + +func (e *errorString) FormatError(p Printer) (next error) { + p.Print(e.s) + e.frame.Format(p) + return nil +} diff --git a/vendor/golang.org/x/xerrors/fmt.go b/vendor/golang.org/x/xerrors/fmt.go new file mode 100644 index 000000000000..829862ddf6af --- /dev/null +++ b/vendor/golang.org/x/xerrors/fmt.go @@ -0,0 +1,187 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "fmt" + "strings" + "unicode" + "unicode/utf8" + + "golang.org/x/xerrors/internal" +) + +const percentBangString = "%!" + +// Errorf formats according to a format specifier and returns the string as a +// value that satisfies error. +// +// The returned error includes the file and line number of the caller when +// formatted with additional detail enabled. If the last argument is an error +// the returned error's Format method will return it if the format string ends +// with ": %s", ": %v", or ": %w". 
If the last argument is an error and the +// format string ends with ": %w", the returned error implements an Unwrap +// method returning it. +// +// If the format specifier includes a %w verb with an error operand in a +// position other than at the end, the returned error will still implement an +// Unwrap method returning the operand, but the error's Format method will not +// return the wrapped error. +// +// It is invalid to include more than one %w verb or to supply it with an +// operand that does not implement the error interface. The %w verb is otherwise +// a synonym for %v. +func Errorf(format string, a ...interface{}) error { + format = formatPlusW(format) + // Support a ": %[wsv]" suffix, which works well with xerrors.Formatter. + wrap := strings.HasSuffix(format, ": %w") + idx, format2, ok := parsePercentW(format) + percentWElsewhere := !wrap && idx >= 0 + if !percentWElsewhere && (wrap || strings.HasSuffix(format, ": %s") || strings.HasSuffix(format, ": %v")) { + err := errorAt(a, len(a)-1) + if err == nil { + return &noWrapError{fmt.Sprintf(format, a...), nil, Caller(1)} + } + // TODO: this is not entirely correct. The error value could be + // printed elsewhere in format if it mixes numbered with unnumbered + // substitutions. With relatively small changes to doPrintf we can + // have it optionally ignore extra arguments and pass the argument + // list in its entirety. + msg := fmt.Sprintf(format[:len(format)-len(": %s")], a[:len(a)-1]...) + frame := Frame{} + if internal.EnableTrace { + frame = Caller(1) + } + if wrap { + return &wrapError{msg, err, frame} + } + return &noWrapError{msg, err, frame} + } + // Support %w anywhere. + // TODO: don't repeat the wrapped error's message when %w occurs in the middle. + msg := fmt.Sprintf(format2, a...) + if idx < 0 { + return &noWrapError{msg, nil, Caller(1)} + } + err := errorAt(a, idx) + if !ok || err == nil { + // Too many %ws or argument of %w is not an error. Approximate the Go + // 1.13 fmt.Errorf message. + return &noWrapError{fmt.Sprintf("%sw(%s)", percentBangString, msg), nil, Caller(1)} + } + frame := Frame{} + if internal.EnableTrace { + frame = Caller(1) + } + return &wrapError{msg, err, frame} +} + +func errorAt(args []interface{}, i int) error { + if i < 0 || i >= len(args) { + return nil + } + err, ok := args[i].(error) + if !ok { + return nil + } + return err +} + +// formatPlusW is used to avoid the vet check that will barf at %w. +func formatPlusW(s string) string { + return s +} + +// Return the index of the only %w in format, or -1 if none. +// Also return a rewritten format string with %w replaced by %v, and +// false if there is more than one %w. +// TODO: handle "%[N]w". +func parsePercentW(format string) (idx int, newFormat string, ok bool) { + // Loosely copied from golang.org/x/tools/go/analysis/passes/printf/printf.go. + idx = -1 + ok = true + n := 0 + sz := 0 + var isW bool + for i := 0; i < len(format); i += sz { + if format[i] != '%' { + sz = 1 + continue + } + // "%%" is not a format directive. + if i+1 < len(format) && format[i+1] == '%' { + sz = 2 + continue + } + sz, isW = parsePrintfVerb(format[i:]) + if isW { + if idx >= 0 { + ok = false + } else { + idx = n + } + // "Replace" the last character, the 'w', with a 'v'. + p := i + sz - 1 + format = format[:p] + "v" + format[p+1:] + } + n++ + } + return idx, format, ok +} + +// Parse the printf verb starting with a % at s[0]. +// Return how many bytes it occupies and whether the verb is 'w'. 
+func parsePrintfVerb(s string) (int, bool) { + // Assume only that the directive is a sequence of non-letters followed by a single letter. + sz := 0 + var r rune + for i := 1; i < len(s); i += sz { + r, sz = utf8.DecodeRuneInString(s[i:]) + if unicode.IsLetter(r) { + return i + sz, r == 'w' + } + } + return len(s), false +} + +type noWrapError struct { + msg string + err error + frame Frame +} + +func (e *noWrapError) Error() string { + return fmt.Sprint(e) +} + +func (e *noWrapError) Format(s fmt.State, v rune) { FormatError(e, s, v) } + +func (e *noWrapError) FormatError(p Printer) (next error) { + p.Print(e.msg) + e.frame.Format(p) + return e.err +} + +type wrapError struct { + msg string + err error + frame Frame +} + +func (e *wrapError) Error() string { + return fmt.Sprint(e) +} + +func (e *wrapError) Format(s fmt.State, v rune) { FormatError(e, s, v) } + +func (e *wrapError) FormatError(p Printer) (next error) { + p.Print(e.msg) + e.frame.Format(p) + return e.err +} + +func (e *wrapError) Unwrap() error { + return e.err +} diff --git a/vendor/golang.org/x/xerrors/format.go b/vendor/golang.org/x/xerrors/format.go new file mode 100644 index 000000000000..1bc9c26b97fd --- /dev/null +++ b/vendor/golang.org/x/xerrors/format.go @@ -0,0 +1,34 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +// A Formatter formats error messages. +type Formatter interface { + error + + // FormatError prints the receiver's first error and returns the next error in + // the error chain, if any. + FormatError(p Printer) (next error) +} + +// A Printer formats error messages. +// +// The most common implementation of Printer is the one provided by package fmt +// during Printf (as of Go 1.13). Localization packages such as golang.org/x/text/message +// typically provide their own implementations. +type Printer interface { + // Print appends args to the message output. + Print(args ...interface{}) + + // Printf writes a formatted string. + Printf(format string, args ...interface{}) + + // Detail reports whether error detail is requested. + // After the first call to Detail, all text written to the Printer + // is formatted as additional detail, or ignored when + // detail has not been requested. + // If Detail returns false, the caller can avoid printing the detail at all. + Detail() bool +} diff --git a/vendor/golang.org/x/xerrors/frame.go b/vendor/golang.org/x/xerrors/frame.go new file mode 100644 index 000000000000..0de628ec501f --- /dev/null +++ b/vendor/golang.org/x/xerrors/frame.go @@ -0,0 +1,56 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "runtime" +) + +// A Frame contains part of a call stack. +type Frame struct { + // Make room for three PCs: the one we were asked for, what it called, + // and possibly a PC for skipPleaseUseCallersFrames. See: + // https://go.googlesource.com/go/+/032678e0fb/src/runtime/extern.go#169 + frames [3]uintptr +} + +// Caller returns a Frame that describes a frame on the caller's stack. +// The argument skip is the number of frames to skip over. +// Caller(0) returns the frame for the caller of Caller. +func Caller(skip int) Frame { + var s Frame + runtime.Callers(skip+1, s.frames[:]) + return s +} + +// location reports the file, line, and function of a frame. 
+// +// The returned function may be "" even if file and line are not. +func (f Frame) location() (function, file string, line int) { + frames := runtime.CallersFrames(f.frames[:]) + if _, ok := frames.Next(); !ok { + return "", "", 0 + } + fr, ok := frames.Next() + if !ok { + return "", "", 0 + } + return fr.Function, fr.File, fr.Line +} + +// Format prints the stack as error detail. +// It should be called from an error's Format implementation +// after printing any other error detail. +func (f Frame) Format(p Printer) { + if p.Detail() { + function, file, line := f.location() + if function != "" { + p.Printf("%s\n ", function) + } + if file != "" { + p.Printf("%s:%d\n", file, line) + } + } +} diff --git a/vendor/golang.org/x/xerrors/go.mod b/vendor/golang.org/x/xerrors/go.mod new file mode 100644 index 000000000000..870d4f612dbf --- /dev/null +++ b/vendor/golang.org/x/xerrors/go.mod @@ -0,0 +1,3 @@ +module golang.org/x/xerrors + +go 1.11 diff --git a/vendor/golang.org/x/xerrors/internal/internal.go b/vendor/golang.org/x/xerrors/internal/internal.go new file mode 100644 index 000000000000..89f4eca5df7b --- /dev/null +++ b/vendor/golang.org/x/xerrors/internal/internal.go @@ -0,0 +1,8 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package internal + +// EnableTrace indicates whether stack information should be recorded in errors. +var EnableTrace = true diff --git a/vendor/golang.org/x/xerrors/wrap.go b/vendor/golang.org/x/xerrors/wrap.go new file mode 100644 index 000000000000..9a3b510374ec --- /dev/null +++ b/vendor/golang.org/x/xerrors/wrap.go @@ -0,0 +1,106 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package xerrors + +import ( + "reflect" +) + +// A Wrapper provides context around another error. +type Wrapper interface { + // Unwrap returns the next error in the error chain. + // If there is no next error, Unwrap returns nil. + Unwrap() error +} + +// Opaque returns an error with the same error formatting as err +// but that does not match err and cannot be unwrapped. +func Opaque(err error) error { + return noWrapper{err} +} + +type noWrapper struct { + error +} + +func (e noWrapper) FormatError(p Printer) (next error) { + if f, ok := e.error.(Formatter); ok { + return f.FormatError(p) + } + p.Print(e.error) + return nil +} + +// Unwrap returns the result of calling the Unwrap method on err, if err implements +// Unwrap. Otherwise, Unwrap returns nil. +func Unwrap(err error) error { + u, ok := err.(Wrapper) + if !ok { + return nil + } + return u.Unwrap() +} + +// Is reports whether any error in err's chain matches target. +// +// An error is considered to match a target if it is equal to that target or if +// it implements a method Is(error) bool such that Is(target) returns true. +func Is(err, target error) bool { + if target == nil { + return err == target + } + + isComparable := reflect.TypeOf(target).Comparable() + for { + if isComparable && err == target { + return true + } + if x, ok := err.(interface{ Is(error) bool }); ok && x.Is(target) { + return true + } + // TODO: consider supporing target.Is(err). This would allow + // user-definable predicates, but also may allow for coping with sloppy + // APIs, thereby making it easier to get away with them. 
+ if err = Unwrap(err); err == nil { + return false + } + } +} + +// As finds the first error in err's chain that matches the type to which target +// points, and if so, sets the target to its value and returns true. An error +// matches a type if it is assignable to the target type, or if it has a method +// As(interface{}) bool such that As(target) returns true. As will panic if target +// is not a non-nil pointer to a type which implements error or is of interface type. +// +// The As method should set the target to its value and return true if err +// matches the type to which target points. +func As(err error, target interface{}) bool { + if target == nil { + panic("errors: target cannot be nil") + } + val := reflect.ValueOf(target) + typ := val.Type() + if typ.Kind() != reflect.Ptr || val.IsNil() { + panic("errors: target must be a non-nil pointer") + } + if e := typ.Elem(); e.Kind() != reflect.Interface && !e.Implements(errorType) { + panic("errors: *target must be interface or implement error") + } + targetType := typ.Elem() + for err != nil { + if reflect.TypeOf(err).AssignableTo(targetType) { + val.Elem().Set(reflect.ValueOf(err)) + return true + } + if x, ok := err.(interface{ As(interface{}) bool }); ok && x.As(target) { + return true + } + err = Unwrap(err) + } + return false +} + +var errorType = reflect.TypeOf((*error)(nil)).Elem() diff --git a/vendor/google.golang.org/appengine/datastore/datastore.go b/vendor/google.golang.org/appengine/datastore/datastore.go new file mode 100644 index 000000000000..576bc50132aa --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/datastore.go @@ -0,0 +1,407 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "errors" + "fmt" + "reflect" + + "github.com/golang/protobuf/proto" + "golang.org/x/net/context" + + "google.golang.org/appengine" + "google.golang.org/appengine/internal" + pb "google.golang.org/appengine/internal/datastore" +) + +var ( + // ErrInvalidEntityType is returned when functions like Get or Next are + // passed a dst or src argument of invalid type. + ErrInvalidEntityType = errors.New("datastore: invalid entity type") + // ErrInvalidKey is returned when an invalid key is presented. + ErrInvalidKey = errors.New("datastore: invalid key") + // ErrNoSuchEntity is returned when no entity was found for a given key. + ErrNoSuchEntity = errors.New("datastore: no such entity") +) + +// ErrFieldMismatch is returned when a field is to be loaded into a different +// type than the one it was stored from, or when a field is missing or +// unexported in the destination struct. +// StructType is the type of the struct pointed to by the destination argument +// passed to Get or to Iterator.Next. +type ErrFieldMismatch struct { + StructType reflect.Type + FieldName string + Reason string +} + +func (e *ErrFieldMismatch) Error() string { + return fmt.Sprintf("datastore: cannot load field %q into a %q: %s", + e.FieldName, e.StructType, e.Reason) +} + +// protoToKey converts a Reference proto to a *Key. If the key is invalid, +// protoToKey will return the invalid key along with ErrInvalidKey. 
+func protoToKey(r *pb.Reference) (k *Key, err error) { + appID := r.GetApp() + namespace := r.GetNameSpace() + for _, e := range r.Path.Element { + k = &Key{ + kind: e.GetType(), + stringID: e.GetName(), + intID: e.GetId(), + parent: k, + appID: appID, + namespace: namespace, + } + if !k.valid() { + return k, ErrInvalidKey + } + } + return +} + +// keyToProto converts a *Key to a Reference proto. +func keyToProto(defaultAppID string, k *Key) *pb.Reference { + appID := k.appID + if appID == "" { + appID = defaultAppID + } + n := 0 + for i := k; i != nil; i = i.parent { + n++ + } + e := make([]*pb.Path_Element, n) + for i := k; i != nil; i = i.parent { + n-- + e[n] = &pb.Path_Element{ + Type: &i.kind, + } + // At most one of {Name,Id} should be set. + // Neither will be set for incomplete keys. + if i.stringID != "" { + e[n].Name = &i.stringID + } else if i.intID != 0 { + e[n].Id = &i.intID + } + } + var namespace *string + if k.namespace != "" { + namespace = proto.String(k.namespace) + } + return &pb.Reference{ + App: proto.String(appID), + NameSpace: namespace, + Path: &pb.Path{ + Element: e, + }, + } +} + +// multiKeyToProto is a batch version of keyToProto. +func multiKeyToProto(appID string, key []*Key) []*pb.Reference { + ret := make([]*pb.Reference, len(key)) + for i, k := range key { + ret[i] = keyToProto(appID, k) + } + return ret +} + +// multiValid is a batch version of Key.valid. It returns an error, not a +// []bool. +func multiValid(key []*Key) error { + invalid := false + for _, k := range key { + if !k.valid() { + invalid = true + break + } + } + if !invalid { + return nil + } + err := make(appengine.MultiError, len(key)) + for i, k := range key { + if !k.valid() { + err[i] = ErrInvalidKey + } + } + return err +} + +// It's unfortunate that the two semantically equivalent concepts pb.Reference +// and pb.PropertyValue_ReferenceValue aren't the same type. For example, the +// two have different protobuf field numbers. + +// referenceValueToKey is the same as protoToKey except the input is a +// PropertyValue_ReferenceValue instead of a Reference. +func referenceValueToKey(r *pb.PropertyValue_ReferenceValue) (k *Key, err error) { + appID := r.GetApp() + namespace := r.GetNameSpace() + for _, e := range r.Pathelement { + k = &Key{ + kind: e.GetType(), + stringID: e.GetName(), + intID: e.GetId(), + parent: k, + appID: appID, + namespace: namespace, + } + if !k.valid() { + return nil, ErrInvalidKey + } + } + return +} + +// keyToReferenceValue is the same as keyToProto except the output is a +// PropertyValue_ReferenceValue instead of a Reference. +func keyToReferenceValue(defaultAppID string, k *Key) *pb.PropertyValue_ReferenceValue { + ref := keyToProto(defaultAppID, k) + pe := make([]*pb.PropertyValue_ReferenceValue_PathElement, len(ref.Path.Element)) + for i, e := range ref.Path.Element { + pe[i] = &pb.PropertyValue_ReferenceValue_PathElement{ + Type: e.Type, + Id: e.Id, + Name: e.Name, + } + } + return &pb.PropertyValue_ReferenceValue{ + App: ref.App, + NameSpace: ref.NameSpace, + Pathelement: pe, + } +} + +type multiArgType int + +const ( + multiArgTypeInvalid multiArgType = iota + multiArgTypePropertyLoadSaver + multiArgTypeStruct + multiArgTypeStructPtr + multiArgTypeInterface +) + +// checkMultiArg checks that v has type []S, []*S, []I, or []P, for some struct +// type S, for some interface type I, or some non-interface non-pointer type P +// such that P or *P implements PropertyLoadSaver. 
+// +// It returns what category the slice's elements are, and the reflect.Type +// that represents S, I or P. +// +// As a special case, PropertyList is an invalid type for v. +func checkMultiArg(v reflect.Value) (m multiArgType, elemType reflect.Type) { + if v.Kind() != reflect.Slice { + return multiArgTypeInvalid, nil + } + if v.Type() == typeOfPropertyList { + return multiArgTypeInvalid, nil + } + elemType = v.Type().Elem() + if reflect.PtrTo(elemType).Implements(typeOfPropertyLoadSaver) { + return multiArgTypePropertyLoadSaver, elemType + } + switch elemType.Kind() { + case reflect.Struct: + return multiArgTypeStruct, elemType + case reflect.Interface: + return multiArgTypeInterface, elemType + case reflect.Ptr: + elemType = elemType.Elem() + if elemType.Kind() == reflect.Struct { + return multiArgTypeStructPtr, elemType + } + } + return multiArgTypeInvalid, nil +} + +// Get loads the entity stored for k into dst, which must be a struct pointer +// or implement PropertyLoadSaver. If there is no such entity for the key, Get +// returns ErrNoSuchEntity. +// +// The values of dst's unmatched struct fields are not modified, and matching +// slice-typed fields are not reset before appending to them. In particular, it +// is recommended to pass a pointer to a zero valued struct on each Get call. +// +// ErrFieldMismatch is returned when a field is to be loaded into a different +// type than the one it was stored from, or when a field is missing or +// unexported in the destination struct. ErrFieldMismatch is only returned if +// dst is a struct pointer. +func Get(c context.Context, key *Key, dst interface{}) error { + if dst == nil { // GetMulti catches nil interface; we need to catch nil ptr here + return ErrInvalidEntityType + } + err := GetMulti(c, []*Key{key}, []interface{}{dst}) + if me, ok := err.(appengine.MultiError); ok { + return me[0] + } + return err +} + +// GetMulti is a batch version of Get. +// +// dst must be a []S, []*S, []I or []P, for some struct type S, some interface +// type I, or some non-interface non-pointer type P such that P or *P +// implements PropertyLoadSaver. If an []I, each element must be a valid dst +// for Get: it must be a struct pointer or implement PropertyLoadSaver. +// +// As a special case, PropertyList is an invalid type for dst, even though a +// PropertyList is a slice of structs. It is treated as invalid to avoid being +// mistakenly passed when []PropertyList was intended. 
+func GetMulti(c context.Context, key []*Key, dst interface{}) error { + v := reflect.ValueOf(dst) + multiArgType, _ := checkMultiArg(v) + if multiArgType == multiArgTypeInvalid { + return errors.New("datastore: dst has invalid type") + } + if len(key) != v.Len() { + return errors.New("datastore: key and dst slices have different length") + } + if len(key) == 0 { + return nil + } + if err := multiValid(key); err != nil { + return err + } + req := &pb.GetRequest{ + Key: multiKeyToProto(internal.FullyQualifiedAppID(c), key), + } + res := &pb.GetResponse{} + if err := internal.Call(c, "datastore_v3", "Get", req, res); err != nil { + return err + } + if len(key) != len(res.Entity) { + return errors.New("datastore: internal error: server returned the wrong number of entities") + } + multiErr, any := make(appengine.MultiError, len(key)), false + for i, e := range res.Entity { + if e.Entity == nil { + multiErr[i] = ErrNoSuchEntity + } else { + elem := v.Index(i) + if multiArgType == multiArgTypePropertyLoadSaver || multiArgType == multiArgTypeStruct { + elem = elem.Addr() + } + if multiArgType == multiArgTypeStructPtr && elem.IsNil() { + elem.Set(reflect.New(elem.Type().Elem())) + } + multiErr[i] = loadEntity(elem.Interface(), e.Entity) + } + if multiErr[i] != nil { + any = true + } + } + if any { + return multiErr + } + return nil +} + +// Put saves the entity src into the datastore with key k. src must be a struct +// pointer or implement PropertyLoadSaver; if a struct pointer then any +// unexported fields of that struct will be skipped. If k is an incomplete key, +// the returned key will be a unique key generated by the datastore. +func Put(c context.Context, key *Key, src interface{}) (*Key, error) { + k, err := PutMulti(c, []*Key{key}, []interface{}{src}) + if err != nil { + if me, ok := err.(appengine.MultiError); ok { + return nil, me[0] + } + return nil, err + } + return k[0], nil +} + +// PutMulti is a batch version of Put. +// +// src must satisfy the same conditions as the dst argument to GetMulti. +func PutMulti(c context.Context, key []*Key, src interface{}) ([]*Key, error) { + v := reflect.ValueOf(src) + multiArgType, _ := checkMultiArg(v) + if multiArgType == multiArgTypeInvalid { + return nil, errors.New("datastore: src has invalid type") + } + if len(key) != v.Len() { + return nil, errors.New("datastore: key and src slices have different length") + } + if len(key) == 0 { + return nil, nil + } + appID := internal.FullyQualifiedAppID(c) + if err := multiValid(key); err != nil { + return nil, err + } + req := &pb.PutRequest{} + for i := range key { + elem := v.Index(i) + if multiArgType == multiArgTypePropertyLoadSaver || multiArgType == multiArgTypeStruct { + elem = elem.Addr() + } + sProto, err := saveEntity(appID, key[i], elem.Interface()) + if err != nil { + return nil, err + } + req.Entity = append(req.Entity, sProto) + } + res := &pb.PutResponse{} + if err := internal.Call(c, "datastore_v3", "Put", req, res); err != nil { + return nil, err + } + if len(key) != len(res.Key) { + return nil, errors.New("datastore: internal error: server returned the wrong number of keys") + } + ret := make([]*Key, len(key)) + for i := range ret { + var err error + ret[i], err = protoToKey(res.Key[i]) + if err != nil || ret[i].Incomplete() { + return nil, errors.New("datastore: internal error: server returned an invalid key") + } + } + return ret, nil +} + +// Delete deletes the entity for the given key. 
+func Delete(c context.Context, key *Key) error { + err := DeleteMulti(c, []*Key{key}) + if me, ok := err.(appengine.MultiError); ok { + return me[0] + } + return err +} + +// DeleteMulti is a batch version of Delete. +func DeleteMulti(c context.Context, key []*Key) error { + if len(key) == 0 { + return nil + } + if err := multiValid(key); err != nil { + return err + } + req := &pb.DeleteRequest{ + Key: multiKeyToProto(internal.FullyQualifiedAppID(c), key), + } + res := &pb.DeleteResponse{} + return internal.Call(c, "datastore_v3", "Delete", req, res) +} + +func namespaceMod(m proto.Message, namespace string) { + // pb.Query is the only type that has a name_space field. + // All other namespace support in datastore is in the keys. + switch m := m.(type) { + case *pb.Query: + if m.NameSpace == nil { + m.NameSpace = &namespace + } + } +} + +func init() { + internal.NamespaceMods["datastore_v3"] = namespaceMod + internal.RegisterErrorCodeMap("datastore_v3", pb.Error_ErrorCode_name) + internal.RegisterTimeoutErrorCode("datastore_v3", int32(pb.Error_TIMEOUT)) +} diff --git a/vendor/google.golang.org/appengine/datastore/doc.go b/vendor/google.golang.org/appengine/datastore/doc.go new file mode 100644 index 000000000000..85616cf27410 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/doc.go @@ -0,0 +1,361 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +/* +Package datastore provides a client for App Engine's datastore service. + + +Basic Operations + +Entities are the unit of storage and are associated with a key. A key +consists of an optional parent key, a string application ID, a string kind +(also known as an entity type), and either a StringID or an IntID. A +StringID is also known as an entity name or key name. + +It is valid to create a key with a zero StringID and a zero IntID; this is +called an incomplete key, and does not refer to any saved entity. Putting an +entity into the datastore under an incomplete key will cause a unique key +to be generated for that entity, with a non-zero IntID. + +An entity's contents are a mapping from case-sensitive field names to values. +Valid value types are: + - signed integers (int, int8, int16, int32 and int64), + - bool, + - string, + - float32 and float64, + - []byte (up to 1 megabyte in length), + - any type whose underlying type is one of the above predeclared types, + - ByteString, + - *Key, + - time.Time (stored with microsecond precision), + - appengine.BlobKey, + - appengine.GeoPoint, + - structs whose fields are all valid value types, + - slices of any of the above. + +Slices of structs are valid, as are structs that contain slices. However, if +one struct contains another, then at most one of those can be repeated. This +disqualifies recursively defined struct types: any struct T that (directly or +indirectly) contains a []T. + +The Get and Put functions load and save an entity's contents. An entity's +contents are typically represented by a struct pointer. 
+ +Example code: + + type Entity struct { + Value string + } + + func handle(w http.ResponseWriter, r *http.Request) { + ctx := appengine.NewContext(r) + + k := datastore.NewKey(ctx, "Entity", "stringID", 0, nil) + e := new(Entity) + if err := datastore.Get(ctx, k, e); err != nil { + http.Error(w, err.Error(), 500) + return + } + + old := e.Value + e.Value = r.URL.Path + + if _, err := datastore.Put(ctx, k, e); err != nil { + http.Error(w, err.Error(), 500) + return + } + + w.Header().Set("Content-Type", "text/plain; charset=utf-8") + fmt.Fprintf(w, "old=%q\nnew=%q\n", old, e.Value) + } + +GetMulti, PutMulti and DeleteMulti are batch versions of the Get, Put and +Delete functions. They take a []*Key instead of a *Key, and may return an +appengine.MultiError when encountering partial failure. + + +Properties + +An entity's contents can be represented by a variety of types. These are +typically struct pointers, but can also be any type that implements the +PropertyLoadSaver interface. If using a struct pointer, you do not have to +explicitly implement the PropertyLoadSaver interface; the datastore will +automatically convert via reflection. If a struct pointer does implement that +interface then those methods will be used in preference to the default +behavior for struct pointers. Struct pointers are more strongly typed and are +easier to use; PropertyLoadSavers are more flexible. + +The actual types passed do not have to match between Get and Put calls or even +across different calls to datastore. It is valid to put a *PropertyList and +get that same entity as a *myStruct, or put a *myStruct0 and get a *myStruct1. +Conceptually, any entity is saved as a sequence of properties, and is loaded +into the destination value on a property-by-property basis. When loading into +a struct pointer, an entity that cannot be completely represented (such as a +missing field) will result in an ErrFieldMismatch error but it is up to the +caller whether this error is fatal, recoverable or ignorable. + +By default, for struct pointers, all properties are potentially indexed, and +the property name is the same as the field name (and hence must start with an +upper case letter). + +Fields may have a `datastore:"name,options"` tag. The tag name is the +property name, which must be one or more valid Go identifiers joined by ".", +but may start with a lower case letter. An empty tag name means to just use the +field name. A "-" tag name means that the datastore will ignore that field. + +The only valid options are "omitempty" and "noindex". + +If the options include "omitempty" and the value of the field is empty, then the field will be omitted on Save. +The empty values are false, 0, any nil interface value, and any array, slice, map, or string of length zero. +Struct field values will never be empty. + +If options include "noindex" then the field will not be indexed. All fields are indexed +by default. Strings or byte slices longer than 1500 bytes cannot be indexed; +fields used to store long strings and byte slices must be tagged with "noindex" +or they will cause Put operations to fail. + +To use multiple options together, separate them by a comma. +The order does not matter. + +If the options is "" then the comma may be omitted. + +Example code: + + // A and B are renamed to a and b. + // A, C and J are not indexed. + // D's tag is equivalent to having no tag at all (E). + // I is ignored entirely by the datastore. + // J has tag information for both the datastore and json packages. 
+ type TaggedStruct struct { + A int `datastore:"a,noindex"` + B int `datastore:"b"` + C int `datastore:",noindex"` + D int `datastore:""` + E int + I int `datastore:"-"` + J int `datastore:",noindex" json:"j"` + } + + +Structured Properties + +If the struct pointed to contains other structs, then the nested or embedded +structs are flattened. For example, given these definitions: + + type Inner1 struct { + W int32 + X string + } + + type Inner2 struct { + Y float64 + } + + type Inner3 struct { + Z bool + } + + type Outer struct { + A int16 + I []Inner1 + J Inner2 + Inner3 + } + +then an Outer's properties would be equivalent to those of: + + type OuterEquivalent struct { + A int16 + IDotW []int32 `datastore:"I.W"` + IDotX []string `datastore:"I.X"` + JDotY float64 `datastore:"J.Y"` + Z bool + } + +If Outer's embedded Inner3 field was tagged as `datastore:"Foo"` then the +equivalent field would instead be: FooDotZ bool `datastore:"Foo.Z"`. + +If an outer struct is tagged "noindex" then all of its implicit flattened +fields are effectively "noindex". + + +The PropertyLoadSaver Interface + +An entity's contents can also be represented by any type that implements the +PropertyLoadSaver interface. This type may be a struct pointer, but it does +not have to be. The datastore package will call Load when getting the entity's +contents, and Save when putting the entity's contents. +Possible uses include deriving non-stored fields, verifying fields, or indexing +a field only if its value is positive. + +Example code: + + type CustomPropsExample struct { + I, J int + // Sum is not stored, but should always be equal to I + J. + Sum int `datastore:"-"` + } + + func (x *CustomPropsExample) Load(ps []datastore.Property) error { + // Load I and J as usual. + if err := datastore.LoadStruct(x, ps); err != nil { + return err + } + // Derive the Sum field. + x.Sum = x.I + x.J + return nil + } + + func (x *CustomPropsExample) Save() ([]datastore.Property, error) { + // Validate the Sum field. + if x.Sum != x.I + x.J { + return nil, errors.New("CustomPropsExample has inconsistent sum") + } + // Save I and J as usual. The code below is equivalent to calling + // "return datastore.SaveStruct(x)", but is done manually for + // demonstration purposes. + return []datastore.Property{ + { + Name: "I", + Value: int64(x.I), + }, + { + Name: "J", + Value: int64(x.J), + }, + }, nil + } + +The *PropertyList type implements PropertyLoadSaver, and can therefore hold an +arbitrary entity's contents. + + +Queries + +Queries retrieve entities based on their properties or key's ancestry. Running +a query yields an iterator of results: either keys or (key, entity) pairs. +Queries are re-usable and it is safe to call Query.Run from concurrent +goroutines. Iterators are not safe for concurrent use. + +Queries are immutable, and are either created by calling NewQuery, or derived +from an existing query by calling a method like Filter or Order that returns a +new query value. A query is typically constructed by calling NewQuery followed +by a chain of zero or more such methods. These methods are: + - Ancestor and Filter constrain the entities returned by running a query. + - Order affects the order in which they are returned. + - Project constrains the fields returned. + - Distinct de-duplicates projected entities. + - KeysOnly makes the iterator return only keys, not (key, entity) pairs. + - Start, End, Offset and Limit define which sub-sequence of matching entities + to return. 
Start and End take cursors, Offset and Limit take integers. Start + and Offset affect the first result, End and Limit affect the last result. + If both Start and Offset are set, then the offset is relative to Start. + If both End and Limit are set, then the earliest constraint wins. Limit is + relative to Start+Offset, not relative to End. As a special case, a + negative limit means unlimited. + +Example code: + + type Widget struct { + Description string + Price int + } + + func handle(w http.ResponseWriter, r *http.Request) { + ctx := appengine.NewContext(r) + q := datastore.NewQuery("Widget"). + Filter("Price <", 1000). + Order("-Price") + b := new(bytes.Buffer) + for t := q.Run(ctx); ; { + var x Widget + key, err := t.Next(&x) + if err == datastore.Done { + break + } + if err != nil { + serveError(ctx, w, err) + return + } + fmt.Fprintf(b, "Key=%v\nWidget=%#v\n\n", key, x) + } + w.Header().Set("Content-Type", "text/plain; charset=utf-8") + io.Copy(w, b) + } + + +Transactions + +RunInTransaction runs a function in a transaction. + +Example code: + + type Counter struct { + Count int + } + + func inc(ctx context.Context, key *datastore.Key) (int, error) { + var x Counter + if err := datastore.Get(ctx, key, &x); err != nil && err != datastore.ErrNoSuchEntity { + return 0, err + } + x.Count++ + if _, err := datastore.Put(ctx, key, &x); err != nil { + return 0, err + } + return x.Count, nil + } + + func handle(w http.ResponseWriter, r *http.Request) { + ctx := appengine.NewContext(r) + var count int + err := datastore.RunInTransaction(ctx, func(ctx context.Context) error { + var err1 error + count, err1 = inc(ctx, datastore.NewKey(ctx, "Counter", "singleton", 0, nil)) + return err1 + }, nil) + if err != nil { + serveError(ctx, w, err) + return + } + w.Header().Set("Content-Type", "text/plain; charset=utf-8") + fmt.Fprintf(w, "Count=%d", count) + } + + +Metadata + +The datastore package provides access to some of App Engine's datastore +metadata. This metadata includes information about the entity groups, +namespaces, entity kinds, and properties in the datastore, as well as the +property representations for each property. + +Example code: + + func handle(w http.ResponseWriter, r *http.Request) { + // Print all the kinds in the datastore, with all the indexed + // properties (and their representations) for each. + ctx := appengine.NewContext(r) + + kinds, err := datastore.Kinds(ctx) + if err != nil { + serveError(ctx, w, err) + return + } + + w.Header().Set("Content-Type", "text/plain; charset=utf-8") + for _, kind := range kinds { + fmt.Fprintf(w, "%s:\n", kind) + props, err := datastore.KindProperties(ctx, kind) + if err != nil { + fmt.Fprintln(w, "\t(unable to retrieve properties)") + continue + } + for p, rep := range props { + fmt.Fprintf(w, "\t-%s (%s)\n", p, strings.Join(rep, ", ")) + } + } + } +*/ +package datastore // import "google.golang.org/appengine/datastore" diff --git a/vendor/google.golang.org/appengine/datastore/internal/cloudkey/cloudkey.go b/vendor/google.golang.org/appengine/datastore/internal/cloudkey/cloudkey.go new file mode 100644 index 000000000000..643d4049c6b3 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/internal/cloudkey/cloudkey.go @@ -0,0 +1,120 @@ +// Copyright 2019 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +// Package cloudpb is a subset of types and functions, copied from cloud.google.com/go/datastore. 
+// +// They are copied here to provide compatibility to decode keys generated by the cloud.google.com/go/datastore package. +package cloudkey + +import ( + "encoding/base64" + "errors" + "strings" + + "github.com/golang/protobuf/proto" + cloudpb "google.golang.org/appengine/datastore/internal/cloudpb" +) + +///////////////////////////////////////////////////////////////////// +// Code below is copied from https://github.com/googleapis/google-cloud-go/blob/master/datastore/datastore.go +///////////////////////////////////////////////////////////////////// + +var ( + // ErrInvalidKey is returned when an invalid key is presented. + ErrInvalidKey = errors.New("datastore: invalid key") +) + +///////////////////////////////////////////////////////////////////// +// Code below is copied from https://github.com/googleapis/google-cloud-go/blob/master/datastore/key.go +///////////////////////////////////////////////////////////////////// + +// Key represents the datastore key for a stored entity. +type Key struct { + // Kind cannot be empty. + Kind string + // Either ID or Name must be zero for the Key to be valid. + // If both are zero, the Key is incomplete. + ID int64 + Name string + // Parent must either be a complete Key or nil. + Parent *Key + + // Namespace provides the ability to partition your data for multiple + // tenants. In most cases, it is not necessary to specify a namespace. + // See docs on datastore multitenancy for details: + // https://cloud.google.com/datastore/docs/concepts/multitenancy + Namespace string +} + +// DecodeKey decodes a key from the opaque representation returned by Encode. +func DecodeKey(encoded string) (*Key, error) { + // Re-add padding. + if m := len(encoded) % 4; m != 0 { + encoded += strings.Repeat("=", 4-m) + } + + b, err := base64.URLEncoding.DecodeString(encoded) + if err != nil { + return nil, err + } + + pKey := new(cloudpb.Key) + if err := proto.Unmarshal(b, pKey); err != nil { + return nil, err + } + return protoToKey(pKey) +} + +// valid returns whether the key is valid. +func (k *Key) valid() bool { + if k == nil { + return false + } + for ; k != nil; k = k.Parent { + if k.Kind == "" { + return false + } + if k.Name != "" && k.ID != 0 { + return false + } + if k.Parent != nil { + if k.Parent.Incomplete() { + return false + } + if k.Parent.Namespace != k.Namespace { + return false + } + } + } + return true +} + +// Incomplete reports whether the key does not refer to a stored entity. +func (k *Key) Incomplete() bool { + return k.Name == "" && k.ID == 0 +} + +// protoToKey decodes a protocol buffer representation of a key into an +// equivalent *Key object. If the key is invalid, protoToKey will return the +// invalid key along with ErrInvalidKey. +func protoToKey(p *cloudpb.Key) (*Key, error) { + var key *Key + var namespace string + if partition := p.PartitionId; partition != nil { + namespace = partition.NamespaceId + } + for _, el := range p.Path { + key = &Key{ + Namespace: namespace, + Kind: el.Kind, + ID: el.GetId(), + Name: el.GetName(), + Parent: key, + } + } + if !key.valid() { // Also detects key == nil. + return key, ErrInvalidKey + } + return key, nil +} diff --git a/vendor/google.golang.org/appengine/datastore/internal/cloudpb/entity.pb.go b/vendor/google.golang.org/appengine/datastore/internal/cloudpb/entity.pb.go new file mode 100644 index 000000000000..af8195f3f8d2 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/internal/cloudpb/entity.pb.go @@ -0,0 +1,344 @@ +// Copyright 2019 Google Inc. All rights reserved. 
+// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +// Package cloudpb is a subset of protobufs, copied from google.golang.org/genproto/googleapis/datastore/v1. +// +// They are copied here to provide compatibility to decode keys generated by the cloud.google.com/go/datastore package. +package cloudpb + +import ( + "fmt" + + "github.com/golang/protobuf/proto" +) + +// A partition ID identifies a grouping of entities. The grouping is always +// by project and namespace, however the namespace ID may be empty. +// +// A partition ID contains several dimensions: +// project ID and namespace ID. +// +// Partition dimensions: +// +// - May be `""`. +// - Must be valid UTF-8 bytes. +// - Must have values that match regex `[A-Za-z\d\.\-_]{1,100}` +// If the value of any dimension matches regex `__.*__`, the partition is +// reserved/read-only. +// A reserved/read-only partition ID is forbidden in certain documented +// contexts. +// +// Foreign partition IDs (in which the project ID does +// not match the context project ID ) are discouraged. +// Reads and writes of foreign partition IDs may fail if the project is not in +// an active state. +type PartitionId struct { + // The ID of the project to which the entities belong. + ProjectId string `protobuf:"bytes,2,opt,name=project_id,json=projectId,proto3" json:"project_id,omitempty"` + // If not empty, the ID of the namespace to which the entities belong. + NamespaceId string `protobuf:"bytes,4,opt,name=namespace_id,json=namespaceId,proto3" json:"namespace_id,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *PartitionId) Reset() { *m = PartitionId{} } +func (m *PartitionId) String() string { return proto.CompactTextString(m) } +func (*PartitionId) ProtoMessage() {} +func (*PartitionId) Descriptor() ([]byte, []int) { + return fileDescriptor_entity_096a297364b049a5, []int{0} +} +func (m *PartitionId) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_PartitionId.Unmarshal(m, b) +} +func (m *PartitionId) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_PartitionId.Marshal(b, m, deterministic) +} +func (dst *PartitionId) XXX_Merge(src proto.Message) { + xxx_messageInfo_PartitionId.Merge(dst, src) +} +func (m *PartitionId) XXX_Size() int { + return xxx_messageInfo_PartitionId.Size(m) +} +func (m *PartitionId) XXX_DiscardUnknown() { + xxx_messageInfo_PartitionId.DiscardUnknown(m) +} + +var xxx_messageInfo_PartitionId proto.InternalMessageInfo + +func (m *PartitionId) GetProjectId() string { + if m != nil { + return m.ProjectId + } + return "" +} + +func (m *PartitionId) GetNamespaceId() string { + if m != nil { + return m.NamespaceId + } + return "" +} + +// A unique identifier for an entity. +// If a key's partition ID or any of its path kinds or names are +// reserved/read-only, the key is reserved/read-only. +// A reserved/read-only key is forbidden in certain documented contexts. +type Key struct { + // Entities are partitioned into subsets, currently identified by a project + // ID and namespace ID. + // Queries are scoped to a single partition. + PartitionId *PartitionId `protobuf:"bytes,1,opt,name=partition_id,json=partitionId,proto3" json:"partition_id,omitempty"` + // The entity path. + // An entity path consists of one or more elements composed of a kind and a + // string or numerical identifier, which identify entities. 
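// For illustration, a hypothetical two-element path for an Order entity whose
// root ancestor is a User entity (the kinds and identifiers are made up):
//
//	path := []*Key_PathElement{
//		{Kind: "User", IdType: &Key_PathElement_Name{Name: "alice"}},
//		{Kind: "Order", IdType: &Key_PathElement_Id{Id: 42}},
//	}
//	_ = &Key{Path: path}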
The first + // element identifies a _root entity_, the second element identifies + // a _child_ of the root entity, the third element identifies a child of the + // second entity, and so forth. The entities identified by all prefixes of + // the path are called the element's _ancestors_. + // + // An entity path is always fully complete: *all* of the entity's ancestors + // are required to be in the path along with the entity identifier itself. + // The only exception is that in some documented cases, the identifier in the + // last path element (for the entity) itself may be omitted. For example, + // the last path element of the key of `Mutation.insert` may have no + // identifier. + // + // A path can never be empty, and a path can have at most 100 elements. + Path []*Key_PathElement `protobuf:"bytes,2,rep,name=path,proto3" json:"path,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *Key) Reset() { *m = Key{} } +func (m *Key) String() string { return proto.CompactTextString(m) } +func (*Key) ProtoMessage() {} +func (*Key) Descriptor() ([]byte, []int) { + return fileDescriptor_entity_096a297364b049a5, []int{1} +} +func (m *Key) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_Key.Unmarshal(m, b) +} +func (m *Key) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_Key.Marshal(b, m, deterministic) +} +func (dst *Key) XXX_Merge(src proto.Message) { + xxx_messageInfo_Key.Merge(dst, src) +} +func (m *Key) XXX_Size() int { + return xxx_messageInfo_Key.Size(m) +} +func (m *Key) XXX_DiscardUnknown() { + xxx_messageInfo_Key.DiscardUnknown(m) +} + +// A (kind, ID/name) pair used to construct a key path. +// +// If either name or ID is set, the element is complete. +// If neither is set, the element is incomplete. +type Key_PathElement struct { + // The kind of the entity. + // A kind matching regex `__.*__` is reserved/read-only. + // A kind must not contain more than 1500 bytes when UTF-8 encoded. + // Cannot be `""`. + Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` + // The type of ID. 
+ // + // Types that are valid to be assigned to IdType: + // *Key_PathElement_Id + // *Key_PathElement_Name + IdType isKey_PathElement_IdType `protobuf_oneof:"id_type"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *Key_PathElement) Reset() { *m = Key_PathElement{} } +func (m *Key_PathElement) String() string { return proto.CompactTextString(m) } +func (*Key_PathElement) ProtoMessage() {} +func (*Key_PathElement) Descriptor() ([]byte, []int) { + return fileDescriptor_entity_096a297364b049a5, []int{1, 0} +} +func (m *Key_PathElement) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_Key_PathElement.Unmarshal(m, b) +} +func (m *Key_PathElement) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_Key_PathElement.Marshal(b, m, deterministic) +} +func (dst *Key_PathElement) XXX_Merge(src proto.Message) { + xxx_messageInfo_Key_PathElement.Merge(dst, src) +} +func (m *Key_PathElement) XXX_Size() int { + return xxx_messageInfo_Key_PathElement.Size(m) +} +func (m *Key_PathElement) XXX_DiscardUnknown() { + xxx_messageInfo_Key_PathElement.DiscardUnknown(m) +} + +var xxx_messageInfo_Key_PathElement proto.InternalMessageInfo + +func (m *Key_PathElement) GetKind() string { + if m != nil { + return m.Kind + } + return "" +} + +type isKey_PathElement_IdType interface { + isKey_PathElement_IdType() +} + +type Key_PathElement_Id struct { + Id int64 `protobuf:"varint,2,opt,name=id,proto3,oneof"` +} + +type Key_PathElement_Name struct { + Name string `protobuf:"bytes,3,opt,name=name,proto3,oneof"` +} + +func (*Key_PathElement_Id) isKey_PathElement_IdType() {} + +func (*Key_PathElement_Name) isKey_PathElement_IdType() {} + +func (m *Key_PathElement) GetIdType() isKey_PathElement_IdType { + if m != nil { + return m.IdType + } + return nil +} + +func (m *Key_PathElement) GetId() int64 { + if x, ok := m.GetIdType().(*Key_PathElement_Id); ok { + return x.Id + } + return 0 +} + +func (m *Key_PathElement) GetName() string { + if x, ok := m.GetIdType().(*Key_PathElement_Name); ok { + return x.Name + } + return "" +} + +// XXX_OneofFuncs is for the internal use of the proto package. 
+func (*Key_PathElement) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) { + return _Key_PathElement_OneofMarshaler, _Key_PathElement_OneofUnmarshaler, _Key_PathElement_OneofSizer, []interface{}{ + (*Key_PathElement_Id)(nil), + (*Key_PathElement_Name)(nil), + } +} + +func _Key_PathElement_OneofMarshaler(msg proto.Message, b *proto.Buffer) error { + m := msg.(*Key_PathElement) + // id_type + switch x := m.IdType.(type) { + case *Key_PathElement_Id: + b.EncodeVarint(2<<3 | proto.WireVarint) + b.EncodeVarint(uint64(x.Id)) + case *Key_PathElement_Name: + b.EncodeVarint(3<<3 | proto.WireBytes) + b.EncodeStringBytes(x.Name) + case nil: + default: + return fmt.Errorf("Key_PathElement.IdType has unexpected type %T", x) + } + return nil +} + +func _Key_PathElement_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) { + m := msg.(*Key_PathElement) + switch tag { + case 2: // id_type.id + if wire != proto.WireVarint { + return true, proto.ErrInternalBadWireType + } + x, err := b.DecodeVarint() + m.IdType = &Key_PathElement_Id{int64(x)} + return true, err + case 3: // id_type.name + if wire != proto.WireBytes { + return true, proto.ErrInternalBadWireType + } + x, err := b.DecodeStringBytes() + m.IdType = &Key_PathElement_Name{x} + return true, err + default: + return false, nil + } +} + +func _Key_PathElement_OneofSizer(msg proto.Message) (n int) { + m := msg.(*Key_PathElement) + // id_type + switch x := m.IdType.(type) { + case *Key_PathElement_Id: + n += 1 // tag and wire + n += proto.SizeVarint(uint64(x.Id)) + case *Key_PathElement_Name: + n += 1 // tag and wire + n += proto.SizeVarint(uint64(len(x.Name))) + n += len(x.Name) + case nil: + default: + panic(fmt.Sprintf("proto: unexpected type %T in oneof", x)) + } + return n +} + +var fileDescriptor_entity_096a297364b049a5 = []byte{ + // 780 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x94, 0xff, 0x6e, 0xdc, 0x44, + 0x10, 0xc7, 0xed, 0xbb, 0x5c, 0x1a, 0x8f, 0xdd, 0xa4, 0x6c, 0x2a, 0x61, 0x02, 0x28, 0x26, 0x80, + 0x74, 0x02, 0xc9, 0x6e, 0xc2, 0x1f, 0x54, 0x14, 0xa4, 0x72, 0x25, 0xe0, 0x28, 0x15, 0x9c, 0x56, + 0x55, 0x24, 0x50, 0xa4, 0xd3, 0xde, 0x79, 0xeb, 0x2e, 0x67, 0xef, 0x5a, 0xf6, 0x3a, 0xaa, 0xdf, + 0x05, 0xf1, 0x00, 0x3c, 0x0a, 0x8f, 0x80, 0x78, 0x18, 0xb4, 0x3f, 0xec, 0x0b, 0xed, 0x35, 0xff, + 0x79, 0x67, 0x3e, 0xdf, 0xd9, 0xef, 0xec, 0xce, 0x1a, 0xa2, 0x5c, 0x88, 0xbc, 0xa0, 0x49, 0x46, + 0x24, 0x69, 0xa4, 0xa8, 0x69, 0x72, 0x73, 0x9a, 0x50, 0x2e, 0x99, 0xec, 0xe2, 0xaa, 0x16, 0x52, + 0xa0, 0x43, 0x43, 0xc4, 0x03, 0x11, 0xdf, 0x9c, 0x1e, 0x7d, 0x64, 0x65, 0xa4, 0x62, 0x09, 0xe1, + 0x5c, 0x48, 0x22, 0x99, 0xe0, 0x8d, 0x91, 0x0c, 0x59, 0xbd, 0x5a, 0xb6, 0x2f, 0x93, 0x46, 0xd6, + 0xed, 0x4a, 0xda, 0xec, 0xf1, 0x9b, 0x59, 0xc9, 0x4a, 0xda, 0x48, 0x52, 0x56, 0x16, 0x08, 0x2d, + 0x20, 0xbb, 0x8a, 0x26, 0x05, 0x91, 0x05, 0xcf, 0x4d, 0xe6, 0xe4, 0x17, 0xf0, 0xe7, 0xa4, 0x96, + 0x4c, 0x6d, 0x76, 0x91, 0xa1, 0x8f, 0x01, 0xaa, 0x5a, 0xfc, 0x4e, 0x57, 0x72, 0xc1, 0xb2, 0x70, + 0x14, 0xb9, 0x53, 0x0f, 0x7b, 0x36, 0x72, 0x91, 0xa1, 0x4f, 0x20, 0xe0, 0xa4, 0xa4, 0x4d, 0x45, + 0x56, 0x54, 0x01, 0x3b, 0x1a, 0xf0, 0x87, 0xd8, 0x45, 0x76, 0xf2, 0x8f, 0x0b, 0xe3, 0x4b, 0xda, + 0xa1, 0x67, 0x10, 0x54, 0x7d, 0x61, 0x85, 0xba, 0x91, 0x3b, 0xf5, 0xcf, 0xa2, 0x78, 0x4b, 0xef, + 0xf1, 0x2d, 0x07, 0xd8, 0xaf, 0x6e, 0xd9, 0x79, 0x0c, 0x3b, 
0x15, 0x91, 0xaf, 0xc2, 0x51, 0x34, + 0x9e, 0xfa, 0x67, 0x9f, 0x6d, 0x15, 0x5f, 0xd2, 0x2e, 0x9e, 0x13, 0xf9, 0xea, 0xbc, 0xa0, 0x25, + 0xe5, 0x12, 0x6b, 0xc5, 0xd1, 0x0b, 0xd5, 0xd7, 0x10, 0x44, 0x08, 0x76, 0xd6, 0x8c, 0x1b, 0x17, + 0x1e, 0xd6, 0xdf, 0xe8, 0x01, 0x8c, 0x6c, 0x8f, 0xe3, 0xd4, 0xc1, 0x23, 0x96, 0xa1, 0x87, 0xb0, + 0xa3, 0x5a, 0x09, 0xc7, 0x8a, 0x4a, 0x1d, 0xac, 0x57, 0x33, 0x0f, 0xee, 0xb1, 0x6c, 0xa1, 0x8e, + 0xee, 0xe4, 0x29, 0xc0, 0xf7, 0x75, 0x4d, 0xba, 0x2b, 0x52, 0xb4, 0x14, 0x9d, 0xc1, 0xee, 0x8d, + 0xfa, 0x68, 0x42, 0x57, 0xfb, 0x3b, 0xda, 0xea, 0x4f, 0xb3, 0xd8, 0x92, 0x27, 0x7f, 0x4c, 0x60, + 0x62, 0xd4, 0x4f, 0x00, 0x78, 0x5b, 0x14, 0x0b, 0x9d, 0x08, 0xfd, 0xc8, 0x9d, 0xee, 0x6f, 0x2a, + 0xf4, 0x37, 0x19, 0xff, 0xdc, 0x16, 0x85, 0xe6, 0x53, 0x07, 0x7b, 0xbc, 0x5f, 0xa0, 0xcf, 0xe1, + 0xfe, 0x52, 0x88, 0x82, 0x12, 0x6e, 0xf5, 0xaa, 0xb1, 0xbd, 0xd4, 0xc1, 0x81, 0x0d, 0x0f, 0x18, + 0xe3, 0x92, 0xe6, 0xb4, 0xb6, 0x58, 0xdf, 0x6d, 0x60, 0xc3, 0x06, 0xfb, 0x14, 0x82, 0x4c, 0xb4, + 0xcb, 0x82, 0x5a, 0x4a, 0xf5, 0xef, 0xa6, 0x0e, 0xf6, 0x4d, 0xd4, 0x40, 0xe7, 0x70, 0x30, 0x8c, + 0x95, 0xe5, 0x40, 0xdf, 0xe9, 0xdb, 0xa6, 0x5f, 0xf4, 0x5c, 0xea, 0xe0, 0xfd, 0x41, 0x64, 0xca, + 0x7c, 0x0d, 0xde, 0x9a, 0x76, 0xb6, 0xc0, 0x44, 0x17, 0x08, 0xdf, 0x75, 0xaf, 0xa9, 0x83, 0xf7, + 0xd6, 0xb4, 0x1b, 0x4c, 0x36, 0xb2, 0x66, 0x3c, 0xb7, 0xda, 0xf7, 0xec, 0x25, 0xf9, 0x26, 0x6a, + 0xa0, 0x63, 0x80, 0x65, 0x21, 0x96, 0x16, 0x41, 0x91, 0x3b, 0x0d, 0xd4, 0xc1, 0xa9, 0x98, 0x01, + 0xbe, 0x83, 0x83, 0x9c, 0x8a, 0x45, 0x25, 0x18, 0x97, 0x96, 0xda, 0xd3, 0x26, 0x0e, 0x7b, 0x13, + 0xea, 0xa2, 0xe3, 0xe7, 0x44, 0x3e, 0xe7, 0x79, 0xea, 0xe0, 0xfb, 0x39, 0x15, 0x73, 0x05, 0x1b, + 0xf9, 0x53, 0x08, 0xcc, 0x53, 0xb6, 0xda, 0x5d, 0xad, 0xfd, 0x70, 0x6b, 0x03, 0xe7, 0x1a, 0x54, + 0x0e, 0x8d, 0xc4, 0x54, 0x98, 0x81, 0x4f, 0xd4, 0x08, 0xd9, 0x02, 0x9e, 0x2e, 0x70, 0xbc, 0xb5, + 0xc0, 0x66, 0xd4, 0x52, 0x07, 0x03, 0xd9, 0x0c, 0x5e, 0x08, 0xf7, 0x4a, 0x4a, 0x38, 0xe3, 0x79, + 0xb8, 0x1f, 0xb9, 0xd3, 0x09, 0xee, 0x97, 0xe8, 0x11, 0x3c, 0xa4, 0xaf, 0x57, 0x45, 0x9b, 0xd1, + 0xc5, 0xcb, 0x5a, 0x94, 0x0b, 0xc6, 0x33, 0xfa, 0x9a, 0x36, 0xe1, 0xa1, 0x1a, 0x0f, 0x8c, 0x6c, + 0xee, 0xc7, 0x5a, 0x94, 0x17, 0x26, 0x33, 0x0b, 0x00, 0xb4, 0x13, 0x33, 0xe0, 0xff, 0xba, 0xb0, + 0x6b, 0x7c, 0xa3, 0x2f, 0x60, 0xbc, 0xa6, 0x9d, 0x7d, 0xb7, 0xef, 0xbc, 0x22, 0xac, 0x20, 0x74, + 0xa9, 0x7f, 0x1b, 0x15, 0xad, 0x25, 0xa3, 0x4d, 0x38, 0xd6, 0xaf, 0xe1, 0xcb, 0x3b, 0x0e, 0x25, + 0x9e, 0x0f, 0xf4, 0x39, 0x97, 0x75, 0x87, 0x6f, 0xc9, 0x8f, 0x7e, 0x85, 0x83, 0x37, 0xd2, 0xe8, + 0xc1, 0xc6, 0x8b, 0x67, 0x76, 0x7c, 0x04, 0x93, 0xcd, 0x44, 0xdf, 0xfd, 0xf4, 0x0c, 0xf8, 0xcd, + 0xe8, 0xb1, 0x3b, 0xfb, 0xd3, 0x85, 0xf7, 0x57, 0xa2, 0xdc, 0x06, 0xcf, 0x7c, 0x63, 0x6d, 0xae, + 0x86, 0x78, 0xee, 0xfe, 0xf6, 0xad, 0x65, 0x72, 0x51, 0x10, 0x9e, 0xc7, 0xa2, 0xce, 0x93, 0x9c, + 0x72, 0x3d, 0xe2, 0x89, 0x49, 0x91, 0x8a, 0x35, 0xff, 0xfb, 0xcb, 0x3f, 0x19, 0x16, 0x7f, 0x8d, + 0x3e, 0xf8, 0xc9, 0xc8, 0x9f, 0x15, 0xa2, 0xcd, 0xe2, 0x1f, 0x86, 0x8d, 0xae, 0x4e, 0xff, 0xee, + 0x73, 0xd7, 0x3a, 0x77, 0x3d, 0xe4, 0xae, 0xaf, 0x4e, 0x97, 0xbb, 0x7a, 0x83, 0xaf, 0xfe, 0x0b, + 0x00, 0x00, 0xff, 0xff, 0xf3, 0xdd, 0x11, 0x96, 0x45, 0x06, 0x00, 0x00, +} + +var xxx_messageInfo_Key proto.InternalMessageInfo diff --git a/vendor/google.golang.org/appengine/datastore/key.go b/vendor/google.golang.org/appengine/datastore/key.go new file mode 100644 index 000000000000..fd598dc9657f --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/key.go @@ -0,0 
+1,400 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "bytes" + "encoding/base64" + "encoding/gob" + "errors" + "fmt" + "strconv" + "strings" + + "github.com/golang/protobuf/proto" + "golang.org/x/net/context" + + "google.golang.org/appengine/internal" + pb "google.golang.org/appengine/internal/datastore" +) + +type KeyRangeCollisionError struct { + start int64 + end int64 +} + +func (e *KeyRangeCollisionError) Error() string { + return fmt.Sprintf("datastore: Collision when attempting to allocate range [%d, %d]", + e.start, e.end) +} + +type KeyRangeContentionError struct { + start int64 + end int64 +} + +func (e *KeyRangeContentionError) Error() string { + return fmt.Sprintf("datastore: Contention when attempting to allocate range [%d, %d]", + e.start, e.end) +} + +// Key represents the datastore key for a stored entity, and is immutable. +type Key struct { + kind string + stringID string + intID int64 + parent *Key + appID string + namespace string +} + +// Kind returns the key's kind (also known as entity type). +func (k *Key) Kind() string { + return k.kind +} + +// StringID returns the key's string ID (also known as an entity name or key +// name), which may be "". +func (k *Key) StringID() string { + return k.stringID +} + +// IntID returns the key's integer ID, which may be 0. +func (k *Key) IntID() int64 { + return k.intID +} + +// Parent returns the key's parent key, which may be nil. +func (k *Key) Parent() *Key { + return k.parent +} + +// AppID returns the key's application ID. +func (k *Key) AppID() string { + return k.appID +} + +// Namespace returns the key's namespace. +func (k *Key) Namespace() string { + return k.namespace +} + +// Incomplete returns whether the key does not refer to a stored entity. +// In particular, whether the key has a zero StringID and a zero IntID. +func (k *Key) Incomplete() bool { + return k.stringID == "" && k.intID == 0 +} + +// valid returns whether the key is valid. +func (k *Key) valid() bool { + if k == nil { + return false + } + for ; k != nil; k = k.parent { + if k.kind == "" || k.appID == "" { + return false + } + if k.stringID != "" && k.intID != 0 { + return false + } + if k.parent != nil { + if k.parent.Incomplete() { + return false + } + if k.parent.appID != k.appID || k.parent.namespace != k.namespace { + return false + } + } + } + return true +} + +// Equal returns whether two keys are equal. +func (k *Key) Equal(o *Key) bool { + for k != nil && o != nil { + if k.kind != o.kind || k.stringID != o.stringID || k.intID != o.intID || k.appID != o.appID || k.namespace != o.namespace { + return false + } + k, o = k.parent, o.parent + } + return k == o +} + +// root returns the furthest ancestor of a key, which may be itself. +func (k *Key) root() *Key { + for k.parent != nil { + k = k.parent + } + return k +} + +// marshal marshals the key's string representation to the buffer. +func (k *Key) marshal(b *bytes.Buffer) { + if k.parent != nil { + k.parent.marshal(b) + } + b.WriteByte('/') + b.WriteString(k.kind) + b.WriteByte(',') + if k.stringID != "" { + b.WriteString(k.stringID) + } else { + b.WriteString(strconv.FormatInt(k.intID, 10)) + } +} + +// String returns a string representation of the key. 
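// For example (an illustrative sketch; the kinds and identifiers are
// hypothetical and ctx is assumed to be an App Engine context):
//
//	parent := NewKey(ctx, "User", "alice", 0, nil)
//	k := NewKey(ctx, "Order", "", 42, parent)
//	fmt.Println(k) // "/User,alice/Order,42"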
+func (k *Key) String() string { + if k == nil { + return "" + } + b := bytes.NewBuffer(make([]byte, 0, 512)) + k.marshal(b) + return b.String() +} + +type gobKey struct { + Kind string + StringID string + IntID int64 + Parent *gobKey + AppID string + Namespace string +} + +func keyToGobKey(k *Key) *gobKey { + if k == nil { + return nil + } + return &gobKey{ + Kind: k.kind, + StringID: k.stringID, + IntID: k.intID, + Parent: keyToGobKey(k.parent), + AppID: k.appID, + Namespace: k.namespace, + } +} + +func gobKeyToKey(gk *gobKey) *Key { + if gk == nil { + return nil + } + return &Key{ + kind: gk.Kind, + stringID: gk.StringID, + intID: gk.IntID, + parent: gobKeyToKey(gk.Parent), + appID: gk.AppID, + namespace: gk.Namespace, + } +} + +func (k *Key) GobEncode() ([]byte, error) { + buf := new(bytes.Buffer) + if err := gob.NewEncoder(buf).Encode(keyToGobKey(k)); err != nil { + return nil, err + } + return buf.Bytes(), nil +} + +func (k *Key) GobDecode(buf []byte) error { + gk := new(gobKey) + if err := gob.NewDecoder(bytes.NewBuffer(buf)).Decode(gk); err != nil { + return err + } + *k = *gobKeyToKey(gk) + return nil +} + +func (k *Key) MarshalJSON() ([]byte, error) { + return []byte(`"` + k.Encode() + `"`), nil +} + +func (k *Key) UnmarshalJSON(buf []byte) error { + if len(buf) < 2 || buf[0] != '"' || buf[len(buf)-1] != '"' { + return errors.New("datastore: bad JSON key") + } + k2, err := DecodeKey(string(buf[1 : len(buf)-1])) + if err != nil { + return err + } + *k = *k2 + return nil +} + +// Encode returns an opaque representation of the key +// suitable for use in HTML and URLs. +// This is compatible with the Python and Java runtimes. +func (k *Key) Encode() string { + ref := keyToProto("", k) + + b, err := proto.Marshal(ref) + if err != nil { + panic(err) + } + + // Trailing padding is stripped. + return strings.TrimRight(base64.URLEncoding.EncodeToString(b), "=") +} + +// DecodeKey decodes a key from the opaque representation returned by Encode. +func DecodeKey(encoded string) (*Key, error) { + // Re-add padding. + if m := len(encoded) % 4; m != 0 { + encoded += strings.Repeat("=", 4-m) + } + + b, err := base64.URLEncoding.DecodeString(encoded) + if err != nil { + return nil, err + } + + ref := new(pb.Reference) + if err := proto.Unmarshal(b, ref); err != nil { + // Couldn't decode it as an App Engine key, try decoding it as a key encoded by cloud.google.com/go/datastore. + if k := decodeCloudKey(encoded); k != nil { + return k, nil + } + return nil, err + } + + return protoToKey(ref) +} + +// NewIncompleteKey creates a new incomplete key. +// kind cannot be empty. +func NewIncompleteKey(c context.Context, kind string, parent *Key) *Key { + return NewKey(c, kind, "", 0, parent) +} + +// NewKey creates a new key. +// kind cannot be empty. +// Either one or both of stringID and intID must be zero. If both are zero, +// the key returned is incomplete. +// parent must either be a complete key or nil. +func NewKey(c context.Context, kind, stringID string, intID int64, parent *Key) *Key { + // If there's a parent key, use its namespace. + // Otherwise, use any namespace attached to the context. + var namespace string + if parent != nil { + namespace = parent.namespace + } else { + namespace = internal.NamespaceFromContext(c) + } + + return &Key{ + kind: kind, + stringID: stringID, + intID: intID, + parent: parent, + appID: internal.FullyQualifiedAppID(c), + namespace: namespace, + } +} + +// AllocateIDs returns a range of n integer IDs with the given kind and parent +// combination. 
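// A minimal usage sketch (ctx is assumed to be an App Engine context and the
// "Widget" kind is hypothetical):
//
//	low, high, err := AllocateIDs(ctx, "Widget", nil, 10)
//	if err != nil {
//		// handle err
//	}
//	keys := make([]*Key, 0, high-low)
//	for id := low; id < high; id++ {
//		keys = append(keys, NewKey(ctx, "Widget", "", id, nil))
//	}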
kind cannot be empty; parent may be nil. The IDs in the range +// returned will not be used by the datastore's automatic ID sequence generator +// and may be used with NewKey without conflict. +// +// The range is inclusive at the low end and exclusive at the high end. In +// other words, valid intIDs x satisfy low <= x && x < high. +// +// If no error is returned, low + n == high. +func AllocateIDs(c context.Context, kind string, parent *Key, n int) (low, high int64, err error) { + if kind == "" { + return 0, 0, errors.New("datastore: AllocateIDs given an empty kind") + } + if n < 0 { + return 0, 0, fmt.Errorf("datastore: AllocateIDs given a negative count: %d", n) + } + if n == 0 { + return 0, 0, nil + } + req := &pb.AllocateIdsRequest{ + ModelKey: keyToProto("", NewIncompleteKey(c, kind, parent)), + Size: proto.Int64(int64(n)), + } + res := &pb.AllocateIdsResponse{} + if err := internal.Call(c, "datastore_v3", "AllocateIds", req, res); err != nil { + return 0, 0, err + } + // The protobuf is inclusive at both ends. Idiomatic Go (e.g. slices, for loops) + // is inclusive at the low end and exclusive at the high end, so we add 1. + low = res.GetStart() + high = res.GetEnd() + 1 + if low+int64(n) != high { + return 0, 0, fmt.Errorf("datastore: internal error: could not allocate %d IDs", n) + } + return low, high, nil +} + +// AllocateIDRange allocates a range of IDs with specific endpoints. +// The range is inclusive at both the low and high end. Once these IDs have been +// allocated, you can manually assign them to newly created entities. +// +// The Datastore's automatic ID allocator never assigns a key that has already +// been allocated (either through automatic ID allocation or through an explicit +// AllocateIDs call). As a result, entities written to the given key range will +// never be overwritten. However, writing entities with manually assigned keys in +// this range may overwrite existing entities (or new entities written by a separate +// request), depending on the error returned. +// +// Use this only if you have an existing numeric ID range that you want to reserve +// (for example, bulk loading entities that already have IDs). If you don't care +// about which IDs you receive, use AllocateIDs instead. +// +// AllocateIDRange returns nil if the range is successfully allocated. If one or more +// entities with an ID in the given range already exist, it returns a KeyRangeCollisionError. +// If the Datastore has already cached IDs in this range (e.g. from a previous call to +// AllocateIDRange), it returns a KeyRangeContentionError. Errors of other types indicate +// problems with arguments or an error returned directly from the Datastore. +func AllocateIDRange(c context.Context, kind string, parent *Key, start, end int64) (err error) { + if kind == "" { + return errors.New("datastore: AllocateIDRange given an empty kind") + } + + if start < 1 || end < 1 { + return errors.New("datastore: AllocateIDRange start and end must both be greater than 0") + } + + if start > end { + return errors.New("datastore: AllocateIDRange start must be before end") + } + + req := &pb.AllocateIdsRequest{ + ModelKey: keyToProto("", NewIncompleteKey(c, kind, parent)), + Max: proto.Int64(end), + } + res := &pb.AllocateIdsResponse{} + if err := internal.Call(c, "datastore_v3", "AllocateIds", req, res); err != nil { + return err + } + + // Check for collisions, i.e. existing entities with IDs in this range. 
+ // We could do this before the allocation, but we'd still have to do it + // afterward as well to catch the race condition where an entity is inserted + // after that initial check but before the allocation. Skip the up-front check + // and just do it once. + q := NewQuery(kind).Filter("__key__ >=", NewKey(c, kind, "", start, parent)). + Filter("__key__ <=", NewKey(c, kind, "", end, parent)).KeysOnly().Limit(1) + + keys, err := q.GetAll(c, nil) + if err != nil { + return err + } + if len(keys) != 0 { + return &KeyRangeCollisionError{start: start, end: end} + } + + // Check for a race condition, i.e. cases where the datastore may have + // cached ID batches that contain IDs in this range. + if start < res.GetStart() { + return &KeyRangeContentionError{start: start, end: end} + } + + return nil +} diff --git a/vendor/google.golang.org/appengine/datastore/keycompat.go b/vendor/google.golang.org/appengine/datastore/keycompat.go new file mode 100644 index 000000000000..371a64eeefe8 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/keycompat.go @@ -0,0 +1,89 @@ +// Copyright 2019 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "sync" + + "golang.org/x/net/context" + + "google.golang.org/appengine/datastore/internal/cloudkey" + "google.golang.org/appengine/internal" +) + +var keyConversion struct { + mu sync.RWMutex + appID string // read using getKeyConversionAppID +} + +// EnableKeyConversion enables encoded key compatibility with the Cloud +// Datastore client library (cloud.google.com/go/datastore). Encoded keys +// generated by the Cloud Datastore client library will be decoded into App +// Engine datastore keys. +// +// The context provided must be an App Engine context if running in App Engine +// first generation runtime. This can be called in the /_ah/start handler. It is +// safe to call multiple times, and is cheap to call, so can also be inserted as +// middleware. +// +// Enabling key compatibility does not affect the encoding format used by +// Key.Encode, it only expands the type of keys that are able to be decoded with +// DecodeKey. +func EnableKeyConversion(ctx context.Context) { + // Only attempt to set appID if it's unset. + // If already set, ignore. + if getKeyConversionAppID() != "" { + return + } + + keyConversion.mu.Lock() + // Check again to avoid race where another goroutine set appID between the call + // to getKeyConversionAppID above and taking the write lock. + if keyConversion.appID == "" { + keyConversion.appID = internal.FullyQualifiedAppID(ctx) + } + keyConversion.mu.Unlock() +} + +func getKeyConversionAppID() string { + keyConversion.mu.RLock() + appID := keyConversion.appID + keyConversion.mu.RUnlock() + return appID +} + +// decodeCloudKey attempts to decode the given encoded key generated by the +// Cloud Datastore client library (cloud.google.com/go/datastore), returning nil +// if the key couldn't be decoded. +func decodeCloudKey(encoded string) *Key { + appID := getKeyConversionAppID() + if appID == "" { + return nil + } + + k, err := cloudkey.DecodeKey(encoded) + if err != nil { + return nil + } + return convertCloudKey(k, appID) +} + +// convertCloudKey converts a Cloud Datastore key and converts it to an App +// Engine Datastore key. Cloud Datastore keys don't include the project/app ID, +// so we must add it back in. 
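// A minimal sketch of the caller-side flow this file enables (the handler
// wiring and the cloudEncoded value are hypothetical):
//
//	func start(w http.ResponseWriter, r *http.Request) {
//		// e.g. registered for /_ah/start on a first-generation runtime
//		datastore.EnableKeyConversion(appengine.NewContext(r))
//	}
//
//	// Afterwards DecodeKey also accepts keys encoded by cloud.google.com/go/datastore:
//	k, err := datastore.DecodeKey(cloudEncoded)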
+func convertCloudKey(key *cloudkey.Key, appID string) *Key { + if key == nil { + return nil + } + k := &Key{ + intID: key.ID, + kind: key.Kind, + namespace: key.Namespace, + parent: convertCloudKey(key.Parent, appID), + stringID: key.Name, + appID: appID, + } + return k +} diff --git a/vendor/google.golang.org/appengine/datastore/load.go b/vendor/google.golang.org/appengine/datastore/load.go new file mode 100644 index 000000000000..38a63653979a --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/load.go @@ -0,0 +1,429 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "fmt" + "reflect" + "strings" + "time" + + "github.com/golang/protobuf/proto" + "google.golang.org/appengine" + pb "google.golang.org/appengine/internal/datastore" +) + +var ( + typeOfBlobKey = reflect.TypeOf(appengine.BlobKey("")) + typeOfByteSlice = reflect.TypeOf([]byte(nil)) + typeOfByteString = reflect.TypeOf(ByteString(nil)) + typeOfGeoPoint = reflect.TypeOf(appengine.GeoPoint{}) + typeOfTime = reflect.TypeOf(time.Time{}) + typeOfKeyPtr = reflect.TypeOf(&Key{}) + typeOfEntityPtr = reflect.TypeOf(&Entity{}) +) + +// typeMismatchReason returns a string explaining why the property p could not +// be stored in an entity field of type v.Type(). +func typeMismatchReason(pValue interface{}, v reflect.Value) string { + entityType := "empty" + switch pValue.(type) { + case int64: + entityType = "int" + case bool: + entityType = "bool" + case string: + entityType = "string" + case float64: + entityType = "float" + case *Key: + entityType = "*datastore.Key" + case time.Time: + entityType = "time.Time" + case appengine.BlobKey: + entityType = "appengine.BlobKey" + case appengine.GeoPoint: + entityType = "appengine.GeoPoint" + case ByteString: + entityType = "datastore.ByteString" + case []byte: + entityType = "[]byte" + } + return fmt.Sprintf("type mismatch: %s versus %v", entityType, v.Type()) +} + +type propertyLoader struct { + // m holds the number of times a substruct field like "Foo.Bar.Baz" has + // been seen so far. The map is constructed lazily. + m map[string]int +} + +func (l *propertyLoader) load(codec *structCodec, structValue reflect.Value, p Property, requireSlice bool) string { + var v reflect.Value + var sliceIndex int + + name := p.Name + + // If name ends with a '.', the last field is anonymous. + // In this case, strings.Split will give us "" as the + // last element of our fields slice, which will match the "" + // field name in the substruct codec. + fields := strings.Split(name, ".") + + for len(fields) > 0 { + var decoder fieldCodec + var ok bool + + // Cut off the last field (delimited by ".") and find its parent + // in the codec. + // eg. for name "A.B.C.D", split off "A.B.C" and try to + // find a field in the codec with this name. + // Loop again with "A.B", etc. + for i := len(fields); i > 0; i-- { + parent := strings.Join(fields[:i], ".") + decoder, ok = codec.fields[parent] + if ok { + fields = fields[i:] + break + } + } + + // If we never found a matching field in the codec, return + // error message. 
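		// For example, a property named "Address.City.Name" is resolved
		// longest prefix first: "Address.City.Name", then "Address.City",
		// then "Address". When the matched prefix is a struct-typed field,
		// the remaining suffix is resolved against that field's own
		// structCodec on the next pass of the outer loop.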
+ if !ok { + return "no such struct field" + } + + v = initField(structValue, decoder.path) + if !v.IsValid() { + return "no such struct field" + } + if !v.CanSet() { + return "cannot set struct field" + } + + if decoder.structCodec != nil { + codec = decoder.structCodec + structValue = v + } + + if v.Kind() == reflect.Slice && v.Type() != typeOfByteSlice { + if l.m == nil { + l.m = make(map[string]int) + } + sliceIndex = l.m[p.Name] + l.m[p.Name] = sliceIndex + 1 + for v.Len() <= sliceIndex { + v.Set(reflect.Append(v, reflect.New(v.Type().Elem()).Elem())) + } + structValue = v.Index(sliceIndex) + requireSlice = false + } + } + + var slice reflect.Value + if v.Kind() == reflect.Slice && v.Type().Elem().Kind() != reflect.Uint8 { + slice = v + v = reflect.New(v.Type().Elem()).Elem() + } else if requireSlice { + return "multiple-valued property requires a slice field type" + } + + // Convert indexValues to a Go value with a meaning derived from the + // destination type. + pValue := p.Value + if iv, ok := pValue.(indexValue); ok { + meaning := pb.Property_NO_MEANING + switch v.Type() { + case typeOfBlobKey: + meaning = pb.Property_BLOBKEY + case typeOfByteSlice: + meaning = pb.Property_BLOB + case typeOfByteString: + meaning = pb.Property_BYTESTRING + case typeOfGeoPoint: + meaning = pb.Property_GEORSS_POINT + case typeOfTime: + meaning = pb.Property_GD_WHEN + case typeOfEntityPtr: + meaning = pb.Property_ENTITY_PROTO + } + var err error + pValue, err = propValue(iv.value, meaning) + if err != nil { + return err.Error() + } + } + + if errReason := setVal(v, pValue); errReason != "" { + // Set the slice back to its zero value. + if slice.IsValid() { + slice.Set(reflect.Zero(slice.Type())) + } + return errReason + } + + if slice.IsValid() { + slice.Index(sliceIndex).Set(v) + } + + return "" +} + +// setVal sets v to the value pValue. 
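// As a small illustration of the slice rule enforced above (the struct and
// property names are hypothetical): a property that appears multiple times on
// an entity must map to a slice-typed field.
//
//	type Post struct {
//		Title string   // single-valued property
//		Tags  []string // receives every "Tags" value (p.Multiple is true)
//	}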
+func setVal(v reflect.Value, pValue interface{}) string { + switch v.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + x, ok := pValue.(int64) + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + if v.OverflowInt(x) { + return fmt.Sprintf("value %v overflows struct field of type %v", x, v.Type()) + } + v.SetInt(x) + case reflect.Bool: + x, ok := pValue.(bool) + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + v.SetBool(x) + case reflect.String: + switch x := pValue.(type) { + case appengine.BlobKey: + v.SetString(string(x)) + case ByteString: + v.SetString(string(x)) + case string: + v.SetString(x) + default: + if pValue != nil { + return typeMismatchReason(pValue, v) + } + } + case reflect.Float32, reflect.Float64: + x, ok := pValue.(float64) + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + if v.OverflowFloat(x) { + return fmt.Sprintf("value %v overflows struct field of type %v", x, v.Type()) + } + v.SetFloat(x) + case reflect.Ptr: + x, ok := pValue.(*Key) + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + if _, ok := v.Interface().(*Key); !ok { + return typeMismatchReason(pValue, v) + } + v.Set(reflect.ValueOf(x)) + case reflect.Struct: + switch v.Type() { + case typeOfTime: + x, ok := pValue.(time.Time) + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + v.Set(reflect.ValueOf(x)) + case typeOfGeoPoint: + x, ok := pValue.(appengine.GeoPoint) + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + v.Set(reflect.ValueOf(x)) + default: + ent, ok := pValue.(*Entity) + if !ok { + return typeMismatchReason(pValue, v) + } + + // Recursively load nested struct + pls, err := newStructPLS(v.Addr().Interface()) + if err != nil { + return err.Error() + } + + // if ent has a Key value and our struct has a Key field, + // load the Entity's Key value into the Key field on the struct. + if ent.Key != nil && pls.codec.keyField != -1 { + + pls.v.Field(pls.codec.keyField).Set(reflect.ValueOf(ent.Key)) + } + + err = pls.Load(ent.Properties) + if err != nil { + return err.Error() + } + } + case reflect.Slice: + x, ok := pValue.([]byte) + if !ok { + if y, yok := pValue.(ByteString); yok { + x, ok = []byte(y), true + } + } + if !ok && pValue != nil { + return typeMismatchReason(pValue, v) + } + if v.Type().Elem().Kind() != reflect.Uint8 { + return typeMismatchReason(pValue, v) + } + v.SetBytes(x) + default: + return typeMismatchReason(pValue, v) + } + return "" +} + +// initField is similar to reflect's Value.FieldByIndex, in that it +// returns the nested struct field corresponding to index, but it +// initialises any nil pointers encountered when traversing the structure. +func initField(val reflect.Value, index []int) reflect.Value { + for _, i := range index[:len(index)-1] { + val = val.Field(i) + if val.Kind() == reflect.Ptr { + if val.IsNil() { + val.Set(reflect.New(val.Type().Elem())) + } + val = val.Elem() + } + } + return val.Field(index[len(index)-1]) +} + +// loadEntity loads an EntityProto into PropertyLoadSaver or struct pointer. 
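// As an illustration of the nested-entity handling above (the type and field
// names are hypothetical), a nested struct may declare a *Key field tagged
// __key__ to receive the nested entity's key on load:
//
//	type Address struct {
//		Key  *Key `datastore:"__key__"`
//		City string
//	}
//	type Customer struct {
//		Name    string
//		Address Address
//	}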
+func loadEntity(dst interface{}, src *pb.EntityProto) (err error) { + ent, err := protoToEntity(src) + if err != nil { + return err + } + if e, ok := dst.(PropertyLoadSaver); ok { + return e.Load(ent.Properties) + } + return LoadStruct(dst, ent.Properties) +} + +func (s structPLS) Load(props []Property) error { + var fieldName, reason string + var l propertyLoader + for _, p := range props { + if errStr := l.load(s.codec, s.v, p, p.Multiple); errStr != "" { + // We don't return early, as we try to load as many properties as possible. + // It is valid to load an entity into a struct that cannot fully represent it. + // That case returns an error, but the caller is free to ignore it. + fieldName, reason = p.Name, errStr + } + } + if reason != "" { + return &ErrFieldMismatch{ + StructType: s.v.Type(), + FieldName: fieldName, + Reason: reason, + } + } + return nil +} + +func protoToEntity(src *pb.EntityProto) (*Entity, error) { + props, rawProps := src.Property, src.RawProperty + outProps := make([]Property, 0, len(props)+len(rawProps)) + for { + var ( + x *pb.Property + noIndex bool + ) + if len(props) > 0 { + x, props = props[0], props[1:] + } else if len(rawProps) > 0 { + x, rawProps = rawProps[0], rawProps[1:] + noIndex = true + } else { + break + } + + var value interface{} + if x.Meaning != nil && *x.Meaning == pb.Property_INDEX_VALUE { + value = indexValue{x.Value} + } else { + var err error + value, err = propValue(x.Value, x.GetMeaning()) + if err != nil { + return nil, err + } + } + outProps = append(outProps, Property{ + Name: x.GetName(), + Value: value, + NoIndex: noIndex, + Multiple: x.GetMultiple(), + }) + } + + var key *Key + if src.Key != nil { + // Ignore any error, since nested entity values + // are allowed to have an invalid key. + key, _ = protoToKey(src.Key) + } + return &Entity{key, outProps}, nil +} + +// propValue returns a Go value that combines the raw PropertyValue with a +// meaning. For example, an Int64Value with GD_WHEN becomes a time.Time. +func propValue(v *pb.PropertyValue, m pb.Property_Meaning) (interface{}, error) { + switch { + case v.Int64Value != nil: + if m == pb.Property_GD_WHEN { + return fromUnixMicro(*v.Int64Value), nil + } else { + return *v.Int64Value, nil + } + case v.BooleanValue != nil: + return *v.BooleanValue, nil + case v.StringValue != nil: + if m == pb.Property_BLOB { + return []byte(*v.StringValue), nil + } else if m == pb.Property_BLOBKEY { + return appengine.BlobKey(*v.StringValue), nil + } else if m == pb.Property_BYTESTRING { + return ByteString(*v.StringValue), nil + } else if m == pb.Property_ENTITY_PROTO { + var ent pb.EntityProto + err := proto.Unmarshal([]byte(*v.StringValue), &ent) + if err != nil { + return nil, err + } + return protoToEntity(&ent) + } else { + return *v.StringValue, nil + } + case v.DoubleValue != nil: + return *v.DoubleValue, nil + case v.Referencevalue != nil: + key, err := referenceValueToKey(v.Referencevalue) + if err != nil { + return nil, err + } + return key, nil + case v.Pointvalue != nil: + // NOTE: Strangely, latitude maps to X, longitude to Y. + return appengine.GeoPoint{Lat: v.Pointvalue.GetX(), Lng: v.Pointvalue.GetY()}, nil + } + return nil, nil +} + +// indexValue is a Property value that is created when entities are loaded from +// an index, such as from a projection query. +// +// Such Property values do not contain all of the metadata required to be +// faithfully represented as a Go value, and are instead represented as an +// opaque indexValue. 
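// As an illustration of the Load behaviour above (the Widget type and key are
// hypothetical, and this assumes Get surfaces the *ErrFieldMismatch returned
// by Load):
//
//	var w Widget
//	if err := Get(ctx, key, &w); err != nil {
//		if _, ok := err.(*ErrFieldMismatch); !ok {
//			return err // a real failure, not just schema drift
//		}
//	}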
Load the properties into a concrete struct type (e.g. by +// passing a struct pointer to Iterator.Next) to reconstruct actual Go values +// of type int, string, time.Time, etc. +type indexValue struct { + value *pb.PropertyValue +} diff --git a/vendor/google.golang.org/appengine/datastore/metadata.go b/vendor/google.golang.org/appengine/datastore/metadata.go new file mode 100644 index 000000000000..6acacc3db9aa --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/metadata.go @@ -0,0 +1,78 @@ +// Copyright 2016 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import "golang.org/x/net/context" + +// Datastore kinds for the metadata entities. +const ( + namespaceKind = "__namespace__" + kindKind = "__kind__" + propertyKind = "__property__" +) + +// Namespaces returns all the datastore namespaces. +func Namespaces(ctx context.Context) ([]string, error) { + // TODO(djd): Support range queries. + q := NewQuery(namespaceKind).KeysOnly() + keys, err := q.GetAll(ctx, nil) + if err != nil { + return nil, err + } + // The empty namespace key uses a numeric ID (==1), but luckily + // the string ID defaults to "" for numeric IDs anyway. + return keyNames(keys), nil +} + +// Kinds returns the names of all the kinds in the current namespace. +func Kinds(ctx context.Context) ([]string, error) { + // TODO(djd): Support range queries. + q := NewQuery(kindKind).KeysOnly() + keys, err := q.GetAll(ctx, nil) + if err != nil { + return nil, err + } + return keyNames(keys), nil +} + +// keyNames returns a slice of the provided keys' names (string IDs). +func keyNames(keys []*Key) []string { + n := make([]string, 0, len(keys)) + for _, k := range keys { + n = append(n, k.StringID()) + } + return n +} + +// KindProperties returns all the indexed properties for the given kind. +// The properties are returned as a map of property names to a slice of the +// representation types. The representation types for the supported Go property +// types are: +// "INT64": signed integers and time.Time +// "DOUBLE": float32 and float64 +// "BOOLEAN": bool +// "STRING": string, []byte and ByteString +// "POINT": appengine.GeoPoint +// "REFERENCE": *Key +// "USER": (not used in the Go runtime) +func KindProperties(ctx context.Context, kind string) (map[string][]string, error) { + // TODO(djd): Support range queries. + kindKey := NewKey(ctx, kindKind, kind, 0, nil) + q := NewQuery(propertyKind).Ancestor(kindKey) + + propMap := map[string][]string{} + props := []struct { + Repr []string `datastore:"property_representation"` + }{} + + keys, err := q.GetAll(ctx, &props) + if err != nil { + return nil, err + } + for i, p := range props { + propMap[keys[i].StringID()] = p.Repr + } + return propMap, nil +} diff --git a/vendor/google.golang.org/appengine/datastore/prop.go b/vendor/google.golang.org/appengine/datastore/prop.go new file mode 100644 index 000000000000..5cb2079d8874 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/prop.go @@ -0,0 +1,330 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "fmt" + "reflect" + "strings" + "sync" + "unicode" +) + +// Entities with more than this many indexed properties will not be saved. +const maxIndexedProperties = 20000 + +// []byte fields more than 1 megabyte long will not be loaded or saved. 
+const maxBlobLen = 1 << 20 + +// Property is a name/value pair plus some metadata. A datastore entity's +// contents are loaded and saved as a sequence of Properties. An entity can +// have multiple Properties with the same name, provided that p.Multiple is +// true on all of that entity's Properties with that name. +type Property struct { + // Name is the property name. + Name string + // Value is the property value. The valid types are: + // - int64 + // - bool + // - string + // - float64 + // - ByteString + // - *Key + // - time.Time + // - appengine.BlobKey + // - appengine.GeoPoint + // - []byte (up to 1 megabyte in length) + // - *Entity (representing a nested struct) + // This set is smaller than the set of valid struct field types that the + // datastore can load and save. A Property Value cannot be a slice (apart + // from []byte); use multiple Properties instead. Also, a Value's type + // must be explicitly on the list above; it is not sufficient for the + // underlying type to be on that list. For example, a Value of "type + // myInt64 int64" is invalid. Smaller-width integers and floats are also + // invalid. Again, this is more restrictive than the set of valid struct + // field types. + // + // A Value will have an opaque type when loading entities from an index, + // such as via a projection query. Load entities into a struct instead + // of a PropertyLoadSaver when using a projection query. + // + // A Value may also be the nil interface value; this is equivalent to + // Python's None but not directly representable by a Go struct. Loading + // a nil-valued property into a struct will set that field to the zero + // value. + Value interface{} + // NoIndex is whether the datastore cannot index this property. + NoIndex bool + // Multiple is whether the entity can have multiple properties with + // the same name. Even if a particular instance only has one property with + // a certain name, Multiple should be true if a struct would best represent + // it as a field of type []T instead of type T. + Multiple bool +} + +// An Entity is the value type for a nested struct. +// This type is only used for a Property's Value. +type Entity struct { + Key *Key + Properties []Property +} + +// ByteString is a short byte slice (up to 1500 bytes) that can be indexed. +type ByteString []byte + +// PropertyLoadSaver can be converted from and to a slice of Properties. +type PropertyLoadSaver interface { + Load([]Property) error + Save() ([]Property, error) +} + +// PropertyList converts a []Property to implement PropertyLoadSaver. +type PropertyList []Property + +var ( + typeOfPropertyLoadSaver = reflect.TypeOf((*PropertyLoadSaver)(nil)).Elem() + typeOfPropertyList = reflect.TypeOf(PropertyList(nil)) +) + +// Load loads all of the provided properties into l. +// It does not first reset *l to an empty slice. +func (l *PropertyList) Load(p []Property) error { + *l = append(*l, p...) + return nil +} + +// Save saves all of l's properties as a slice or Properties. +func (l *PropertyList) Save() ([]Property, error) { + return *l, nil +} + +// validPropertyName returns whether name consists of one or more valid Go +// identifiers joined by ".". 
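// As an illustration of the PropertyLoadSaver interface defined above (the
// Sum type and its fields are hypothetical), a type can post-process what it
// loads and control exactly what it saves:
//
//	type Sum struct {
//		A, B, Total int64
//	}
//
//	func (s *Sum) Load(props []Property) error {
//		if err := LoadStruct(s, props); err != nil {
//			return err
//		}
//		s.Total = s.A + s.B // derived, never stored
//		return nil
//	}
//
//	func (s *Sum) Save() ([]Property, error) {
//		return []Property{
//			{Name: "A", Value: s.A},
//			{Name: "B", Value: s.B},
//		}, nil
//	}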
+func validPropertyName(name string) bool { + if name == "" { + return false + } + for _, s := range strings.Split(name, ".") { + if s == "" { + return false + } + first := true + for _, c := range s { + if first { + first = false + if c != '_' && !unicode.IsLetter(c) { + return false + } + } else { + if c != '_' && !unicode.IsLetter(c) && !unicode.IsDigit(c) { + return false + } + } + } + } + return true +} + +// structCodec describes how to convert a struct to and from a sequence of +// properties. +type structCodec struct { + // fields gives the field codec for the structTag with the given name. + fields map[string]fieldCodec + // hasSlice is whether a struct or any of its nested or embedded structs + // has a slice-typed field (other than []byte). + hasSlice bool + // keyField is the index of a *Key field with structTag __key__. + // This field is not relevant for the top level struct, only for + // nested structs. + keyField int + // complete is whether the structCodec is complete. An incomplete + // structCodec may be encountered when walking a recursive struct. + complete bool +} + +// fieldCodec is a struct field's index and, if that struct field's type is +// itself a struct, that substruct's structCodec. +type fieldCodec struct { + // path is the index path to the field + path []int + noIndex bool + // omitEmpty indicates that the field should be omitted on save + // if empty. + omitEmpty bool + // structCodec is the codec fot the struct field at index 'path', + // or nil if the field is not a struct. + structCodec *structCodec +} + +// structCodecs collects the structCodecs that have already been calculated. +var ( + structCodecsMutex sync.Mutex + structCodecs = make(map[reflect.Type]*structCodec) +) + +// getStructCodec returns the structCodec for the given struct type. +func getStructCodec(t reflect.Type) (*structCodec, error) { + structCodecsMutex.Lock() + defer structCodecsMutex.Unlock() + return getStructCodecLocked(t) +} + +// getStructCodecLocked implements getStructCodec. The structCodecsMutex must +// be held when calling this function. +func getStructCodecLocked(t reflect.Type) (ret *structCodec, retErr error) { + c, ok := structCodecs[t] + if ok { + return c, nil + } + c = &structCodec{ + fields: make(map[string]fieldCodec), + // We initialize keyField to -1 so that the zero-value is not + // misinterpreted as index 0. + keyField: -1, + } + + // Add c to the structCodecs map before we are sure it is good. If t is + // a recursive type, it needs to find the incomplete entry for itself in + // the map. + structCodecs[t] = c + defer func() { + if retErr != nil { + delete(structCodecs, t) + } + }() + + for i := 0; i < t.NumField(); i++ { + f := t.Field(i) + // Skip unexported fields. + // Note that if f is an anonymous, unexported struct field, + // we will promote its fields. 
+ if f.PkgPath != "" && !f.Anonymous { + continue + } + + tags := strings.Split(f.Tag.Get("datastore"), ",") + name := tags[0] + opts := make(map[string]bool) + for _, t := range tags[1:] { + opts[t] = true + } + switch { + case name == "": + if !f.Anonymous { + name = f.Name + } + case name == "-": + continue + case name == "__key__": + if f.Type != typeOfKeyPtr { + return nil, fmt.Errorf("datastore: __key__ field on struct %v is not a *datastore.Key", t) + } + c.keyField = i + case !validPropertyName(name): + return nil, fmt.Errorf("datastore: struct tag has invalid property name: %q", name) + } + + substructType, fIsSlice := reflect.Type(nil), false + switch f.Type.Kind() { + case reflect.Struct: + substructType = f.Type + case reflect.Slice: + if f.Type.Elem().Kind() == reflect.Struct { + substructType = f.Type.Elem() + } + fIsSlice = f.Type != typeOfByteSlice + c.hasSlice = c.hasSlice || fIsSlice + } + + var sub *structCodec + if substructType != nil && substructType != typeOfTime && substructType != typeOfGeoPoint { + var err error + sub, err = getStructCodecLocked(substructType) + if err != nil { + return nil, err + } + if !sub.complete { + return nil, fmt.Errorf("datastore: recursive struct: field %q", f.Name) + } + if fIsSlice && sub.hasSlice { + return nil, fmt.Errorf( + "datastore: flattening nested structs leads to a slice of slices: field %q", f.Name) + } + c.hasSlice = c.hasSlice || sub.hasSlice + // If f is an anonymous struct field, we promote the substruct's fields up to this level + // in the linked list of struct codecs. + if f.Anonymous { + for subname, subfield := range sub.fields { + if name != "" { + subname = name + "." + subname + } + if _, ok := c.fields[subname]; ok { + return nil, fmt.Errorf("datastore: struct tag has repeated property name: %q", subname) + } + c.fields[subname] = fieldCodec{ + path: append([]int{i}, subfield.path...), + noIndex: subfield.noIndex || opts["noindex"], + omitEmpty: subfield.omitEmpty, + structCodec: subfield.structCodec, + } + } + continue + } + } + + if _, ok := c.fields[name]; ok { + return nil, fmt.Errorf("datastore: struct tag has repeated property name: %q", name) + } + c.fields[name] = fieldCodec{ + path: []int{i}, + noIndex: opts["noindex"], + omitEmpty: opts["omitempty"], + structCodec: sub, + } + } + c.complete = true + return c, nil +} + +// structPLS adapts a struct to be a PropertyLoadSaver. +type structPLS struct { + v reflect.Value + codec *structCodec +} + +// newStructPLS returns a structPLS, which implements the +// PropertyLoadSaver interface, for the struct pointer p. +func newStructPLS(p interface{}) (*structPLS, error) { + v := reflect.ValueOf(p) + if v.Kind() != reflect.Ptr || v.Elem().Kind() != reflect.Struct { + return nil, ErrInvalidEntityType + } + v = v.Elem() + codec, err := getStructCodec(v.Type()) + if err != nil { + return nil, err + } + return &structPLS{v, codec}, nil +} + +// LoadStruct loads the properties from p to dst. +// dst must be a struct pointer. +func LoadStruct(dst interface{}, p []Property) error { + x, err := newStructPLS(dst) + if err != nil { + return err + } + return x.Load(p) +} + +// SaveStruct returns the properties from src as a slice of Properties. +// src must be a struct pointer. 
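// As an illustration of the struct tags handled above (the Article type is
// hypothetical):
//
//	type Article struct {
//		Title   string `datastore:"title"`        // stored as property "title"
//		Body    string `datastore:"body,noindex"` // stored unindexed
//		Draft   bool   `datastore:",omitempty"`   // omitted on save when empty
//		Scratch string `datastore:"-"`            // never stored
//	}
//
//	props, err := SaveStruct(&Article{Title: "hello"})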
+func SaveStruct(src interface{}) ([]Property, error) { + x, err := newStructPLS(src) + if err != nil { + return nil, err + } + return x.Save() +} diff --git a/vendor/google.golang.org/appengine/datastore/query.go b/vendor/google.golang.org/appengine/datastore/query.go new file mode 100644 index 000000000000..4124534b22f3 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/query.go @@ -0,0 +1,774 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "encoding/base64" + "errors" + "fmt" + "math" + "reflect" + "strings" + + "github.com/golang/protobuf/proto" + "golang.org/x/net/context" + + "google.golang.org/appengine/internal" + pb "google.golang.org/appengine/internal/datastore" +) + +type operator int + +const ( + lessThan operator = iota + lessEq + equal + greaterEq + greaterThan +) + +var operatorToProto = map[operator]*pb.Query_Filter_Operator{ + lessThan: pb.Query_Filter_LESS_THAN.Enum(), + lessEq: pb.Query_Filter_LESS_THAN_OR_EQUAL.Enum(), + equal: pb.Query_Filter_EQUAL.Enum(), + greaterEq: pb.Query_Filter_GREATER_THAN_OR_EQUAL.Enum(), + greaterThan: pb.Query_Filter_GREATER_THAN.Enum(), +} + +// filter is a conditional filter on query results. +type filter struct { + FieldName string + Op operator + Value interface{} +} + +type sortDirection int + +const ( + ascending sortDirection = iota + descending +) + +var sortDirectionToProto = map[sortDirection]*pb.Query_Order_Direction{ + ascending: pb.Query_Order_ASCENDING.Enum(), + descending: pb.Query_Order_DESCENDING.Enum(), +} + +// order is a sort order on query results. +type order struct { + FieldName string + Direction sortDirection +} + +// NewQuery creates a new Query for a specific entity kind. +// +// An empty kind means to return all entities, including entities created and +// managed by other App Engine features, and is called a kindless query. +// Kindless queries cannot include filters or sort orders on property values. +func NewQuery(kind string) *Query { + return &Query{ + kind: kind, + limit: -1, + } +} + +// Query represents a datastore query. +type Query struct { + kind string + ancestor *Key + filter []filter + order []order + projection []string + + distinct bool + distinctOn []string + keysOnly bool + eventual bool + limit int32 + offset int32 + count int32 + start *pb.CompiledCursor + end *pb.CompiledCursor + + err error +} + +func (q *Query) clone() *Query { + x := *q + // Copy the contents of the slice-typed fields to a new backing store. + if len(q.filter) > 0 { + x.filter = make([]filter, len(q.filter)) + copy(x.filter, q.filter) + } + if len(q.order) > 0 { + x.order = make([]order, len(q.order)) + copy(x.order, q.order) + } + return &x +} + +// Ancestor returns a derivative query with an ancestor filter. +// The ancestor should not be nil. +func (q *Query) Ancestor(ancestor *Key) *Query { + q = q.clone() + if ancestor == nil { + q.err = errors.New("datastore: nil query ancestor") + return q + } + q.ancestor = ancestor + return q +} + +// EventualConsistency returns a derivative query that returns eventually +// consistent results. +// It only has an effect on ancestor queries. +func (q *Query) EventualConsistency() *Query { + q = q.clone() + q.eventual = true + return q +} + +// Filter returns a derivative query with a field-based filter. 
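// As an illustration of the Ancestor and EventualConsistency options above
// (the kind, field name and parentKey are hypothetical):
//
//	q := NewQuery("Order").
//		Ancestor(parentKey).
//		EventualConsistency().
//		Order("-Placed")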
+// The filterStr argument must be a field name followed by optional space, +// followed by an operator, one of ">", "<", ">=", "<=", or "=". +// Fields are compared against the provided value using the operator. +// Multiple filters are AND'ed together. +func (q *Query) Filter(filterStr string, value interface{}) *Query { + q = q.clone() + filterStr = strings.TrimSpace(filterStr) + if len(filterStr) < 1 { + q.err = errors.New("datastore: invalid filter: " + filterStr) + return q + } + f := filter{ + FieldName: strings.TrimRight(filterStr, " ><=!"), + Value: value, + } + switch op := strings.TrimSpace(filterStr[len(f.FieldName):]); op { + case "<=": + f.Op = lessEq + case ">=": + f.Op = greaterEq + case "<": + f.Op = lessThan + case ">": + f.Op = greaterThan + case "=": + f.Op = equal + default: + q.err = fmt.Errorf("datastore: invalid operator %q in filter %q", op, filterStr) + return q + } + q.filter = append(q.filter, f) + return q +} + +// Order returns a derivative query with a field-based sort order. Orders are +// applied in the order they are added. The default order is ascending; to sort +// in descending order prefix the fieldName with a minus sign (-). +func (q *Query) Order(fieldName string) *Query { + q = q.clone() + fieldName = strings.TrimSpace(fieldName) + o := order{ + Direction: ascending, + FieldName: fieldName, + } + if strings.HasPrefix(fieldName, "-") { + o.Direction = descending + o.FieldName = strings.TrimSpace(fieldName[1:]) + } else if strings.HasPrefix(fieldName, "+") { + q.err = fmt.Errorf("datastore: invalid order: %q", fieldName) + return q + } + if len(o.FieldName) == 0 { + q.err = errors.New("datastore: empty order") + return q + } + q.order = append(q.order, o) + return q +} + +// Project returns a derivative query that yields only the given fields. It +// cannot be used with KeysOnly. +func (q *Query) Project(fieldNames ...string) *Query { + q = q.clone() + q.projection = append([]string(nil), fieldNames...) + return q +} + +// Distinct returns a derivative query that yields de-duplicated entities with +// respect to the set of projected fields. It is only used for projection +// queries. Distinct cannot be used with DistinctOn. +func (q *Query) Distinct() *Query { + q = q.clone() + q.distinct = true + return q +} + +// DistinctOn returns a derivative query that yields de-duplicated entities with +// respect to the set of the specified fields. It is only used for projection +// queries. The field list should be a subset of the projected field list. +// DistinctOn cannot be used with Distinct. +func (q *Query) DistinctOn(fieldNames ...string) *Query { + q = q.clone() + q.distinctOn = fieldNames + return q +} + +// KeysOnly returns a derivative query that yields only keys, not keys and +// entities. It cannot be used with projection queries. +func (q *Query) KeysOnly() *Query { + q = q.clone() + q.keysOnly = true + return q +} + +// Limit returns a derivative query that has a limit on the number of results +// returned. A negative value means unlimited. +func (q *Query) Limit(limit int) *Query { + q = q.clone() + if limit < math.MinInt32 || limit > math.MaxInt32 { + q.err = errors.New("datastore: query limit overflow") + return q + } + q.limit = int32(limit) + return q +} + +// Offset returns a derivative query that has an offset of how many keys to +// skip over before returning results. A negative value is invalid. 
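// As an illustration of the projection options above (the kind and field
// names are hypothetical):
//
//	q := NewQuery("Widget").
//		Project("Price").
//		Distinct().
//		Order("Price").
//		Limit(20)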
+func (q *Query) Offset(offset int) *Query { + q = q.clone() + if offset < 0 { + q.err = errors.New("datastore: negative query offset") + return q + } + if offset > math.MaxInt32 { + q.err = errors.New("datastore: query offset overflow") + return q + } + q.offset = int32(offset) + return q +} + +// BatchSize returns a derivative query to fetch the supplied number of results +// at once. This value should be greater than zero, and equal to or less than +// the Limit. +func (q *Query) BatchSize(size int) *Query { + q = q.clone() + if size <= 0 || size > math.MaxInt32 { + q.err = errors.New("datastore: query batch size overflow") + return q + } + q.count = int32(size) + return q +} + +// Start returns a derivative query with the given start point. +func (q *Query) Start(c Cursor) *Query { + q = q.clone() + if c.cc == nil { + q.err = errors.New("datastore: invalid cursor") + return q + } + q.start = c.cc + return q +} + +// End returns a derivative query with the given end point. +func (q *Query) End(c Cursor) *Query { + q = q.clone() + if c.cc == nil { + q.err = errors.New("datastore: invalid cursor") + return q + } + q.end = c.cc + return q +} + +// toProto converts the query to a protocol buffer. +func (q *Query) toProto(dst *pb.Query, appID string) error { + if len(q.projection) != 0 && q.keysOnly { + return errors.New("datastore: query cannot both project and be keys-only") + } + if len(q.distinctOn) != 0 && q.distinct { + return errors.New("datastore: query cannot be both distinct and distinct-on") + } + dst.Reset() + dst.App = proto.String(appID) + if q.kind != "" { + dst.Kind = proto.String(q.kind) + } + if q.ancestor != nil { + dst.Ancestor = keyToProto(appID, q.ancestor) + if q.eventual { + dst.Strong = proto.Bool(false) + } + } + if q.projection != nil { + dst.PropertyName = q.projection + if len(q.distinctOn) != 0 { + dst.GroupByPropertyName = q.distinctOn + } + if q.distinct { + dst.GroupByPropertyName = q.projection + } + } + if q.keysOnly { + dst.KeysOnly = proto.Bool(true) + dst.RequirePerfectPlan = proto.Bool(true) + } + for _, qf := range q.filter { + if qf.FieldName == "" { + return errors.New("datastore: empty query filter field name") + } + p, errStr := valueToProto(appID, qf.FieldName, reflect.ValueOf(qf.Value), false) + if errStr != "" { + return errors.New("datastore: bad query filter value type: " + errStr) + } + xf := &pb.Query_Filter{ + Op: operatorToProto[qf.Op], + Property: []*pb.Property{p}, + } + if xf.Op == nil { + return errors.New("datastore: unknown query filter operator") + } + dst.Filter = append(dst.Filter, xf) + } + for _, qo := range q.order { + if qo.FieldName == "" { + return errors.New("datastore: empty query order field name") + } + xo := &pb.Query_Order{ + Property: proto.String(qo.FieldName), + Direction: sortDirectionToProto[qo.Direction], + } + if xo.Direction == nil { + return errors.New("datastore: unknown query order direction") + } + dst.Order = append(dst.Order, xo) + } + if q.limit >= 0 { + dst.Limit = proto.Int32(q.limit) + } + if q.offset != 0 { + dst.Offset = proto.Int32(q.offset) + } + if q.count != 0 { + dst.Count = proto.Int32(q.count) + } + dst.CompiledCursor = q.start + dst.EndCompiledCursor = q.end + dst.Compile = proto.Bool(true) + return nil +} + +// Count returns the number of results for the query. +// +// The running time and number of API calls made by Count scale linearly with +// the sum of the query's offset and limit. 
Unless the result count is +// expected to be small, it is best to specify a limit; otherwise Count will +// continue until it finishes counting or the provided context expires. +func (q *Query) Count(c context.Context) (int, error) { + // Check that the query is well-formed. + if q.err != nil { + return 0, q.err + } + + // Run a copy of the query, with keysOnly true (if we're not a projection, + // since the two are incompatible), and an adjusted offset. We also set the + // limit to zero, as we don't want any actual entity data, just the number + // of skipped results. + newQ := q.clone() + newQ.keysOnly = len(newQ.projection) == 0 + newQ.limit = 0 + if q.limit < 0 { + // If the original query was unlimited, set the new query's offset to maximum. + newQ.offset = math.MaxInt32 + } else { + newQ.offset = q.offset + q.limit + if newQ.offset < 0 { + // Do the best we can, in the presence of overflow. + newQ.offset = math.MaxInt32 + } + } + req := &pb.Query{} + if err := newQ.toProto(req, internal.FullyQualifiedAppID(c)); err != nil { + return 0, err + } + res := &pb.QueryResult{} + if err := internal.Call(c, "datastore_v3", "RunQuery", req, res); err != nil { + return 0, err + } + + // n is the count we will return. For example, suppose that our original + // query had an offset of 4 and a limit of 2008: the count will be 2008, + // provided that there are at least 2012 matching entities. However, the + // RPCs will only skip 1000 results at a time. The RPC sequence is: + // call RunQuery with (offset, limit) = (2012, 0) // 2012 == newQ.offset + // response has (skippedResults, moreResults) = (1000, true) + // n += 1000 // n == 1000 + // call Next with (offset, limit) = (1012, 0) // 1012 == newQ.offset - n + // response has (skippedResults, moreResults) = (1000, true) + // n += 1000 // n == 2000 + // call Next with (offset, limit) = (12, 0) // 12 == newQ.offset - n + // response has (skippedResults, moreResults) = (12, false) + // n += 12 // n == 2012 + // // exit the loop + // n -= 4 // n == 2008 + var n int32 + for { + // The QueryResult should have no actual entity data, just skipped results. + if len(res.Result) != 0 { + return 0, errors.New("datastore: internal error: Count request returned too much data") + } + n += res.GetSkippedResults() + if !res.GetMoreResults() { + break + } + if err := callNext(c, res, newQ.offset-n, q.count); err != nil { + return 0, err + } + } + n -= q.offset + if n < 0 { + // If the offset was greater than the number of matching entities, + // return 0 instead of negative. + n = 0 + } + return int(n), nil +} + +// callNext issues a datastore_v3/Next RPC to advance a cursor, such as that +// returned by a query with more results. +func callNext(c context.Context, res *pb.QueryResult, offset, count int32) error { + if res.Cursor == nil { + return errors.New("datastore: internal error: server did not return a cursor") + } + req := &pb.NextRequest{ + Cursor: res.Cursor, + } + if count >= 0 { + req.Count = proto.Int32(count) + } + if offset != 0 { + req.Offset = proto.Int32(offset) + } + if res.CompiledCursor != nil { + req.Compile = proto.Bool(true) + } + res.Reset() + return internal.Call(c, "datastore_v3", "Next", req, res) +} + +// GetAll runs the query in the given context and returns all keys that match +// that query, as well as appending the values to dst. +// +// dst must have type *[]S or *[]*S or *[]P, for some struct type S or some non- +// interface, non-pointer type P such that P or *P implements PropertyLoadSaver. 
+// +// As a special case, *PropertyList is an invalid type for dst, even though a +// PropertyList is a slice of structs. It is treated as invalid to avoid being +// mistakenly passed when *[]PropertyList was intended. +// +// The keys returned by GetAll will be in a 1-1 correspondence with the entities +// added to dst. +// +// If q is a ``keys-only'' query, GetAll ignores dst and only returns the keys. +// +// The running time and number of API calls made by GetAll scale linearly with +// the sum of the query's offset and limit. Unless the result count is +// expected to be small, it is best to specify a limit; otherwise GetAll will +// continue until it finishes collecting results or the provided context +// expires. +func (q *Query) GetAll(c context.Context, dst interface{}) ([]*Key, error) { + var ( + dv reflect.Value + mat multiArgType + elemType reflect.Type + errFieldMismatch error + ) + if !q.keysOnly { + dv = reflect.ValueOf(dst) + if dv.Kind() != reflect.Ptr || dv.IsNil() { + return nil, ErrInvalidEntityType + } + dv = dv.Elem() + mat, elemType = checkMultiArg(dv) + if mat == multiArgTypeInvalid || mat == multiArgTypeInterface { + return nil, ErrInvalidEntityType + } + } + + var keys []*Key + for t := q.Run(c); ; { + k, e, err := t.next() + if err == Done { + break + } + if err != nil { + return keys, err + } + if !q.keysOnly { + ev := reflect.New(elemType) + if elemType.Kind() == reflect.Map { + // This is a special case. The zero values of a map type are + // not immediately useful; they have to be make'd. + // + // Funcs and channels are similar, in that a zero value is not useful, + // but even a freshly make'd channel isn't useful: there's no fixed + // channel buffer size that is always going to be large enough, and + // there's no goroutine to drain the other end. Theoretically, these + // types could be supported, for example by sniffing for a constructor + // method or requiring prior registration, but for now it's not a + // frequent enough concern to be worth it. Programmers can work around + // it by explicitly using Iterator.Next instead of the Query.GetAll + // convenience method. + x := reflect.MakeMap(elemType) + ev.Elem().Set(x) + } + if err = loadEntity(ev.Interface(), e); err != nil { + if _, ok := err.(*ErrFieldMismatch); ok { + // We continue loading entities even in the face of field mismatch errors. + // If we encounter any other error, that other error is returned. Otherwise, + // an ErrFieldMismatch is returned. + errFieldMismatch = err + } else { + return keys, err + } + } + if mat != multiArgTypeStructPtr { + ev = ev.Elem() + } + dv.Set(reflect.Append(dv, ev)) + } + keys = append(keys, k) + } + return keys, errFieldMismatch +} + +// Run runs the query in the given context. 
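+//
+// A typical iteration loop looks roughly like this (sketch only; the Task
+// struct and the App Engine context ctx are assumed):
+//
+//	for t := q.Run(ctx); ; {
+//		var task Task
+//		key, err := t.Next(&task)
+//		if err == datastore.Done {
+//			break
+//		}
+//		if err != nil {
+//			// handle the error and stop iterating
+//			break
+//		}
+//		_ = key // use key and task here
+//	}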
+func (q *Query) Run(c context.Context) *Iterator { + if q.err != nil { + return &Iterator{err: q.err} + } + t := &Iterator{ + c: c, + limit: q.limit, + count: q.count, + q: q, + prevCC: q.start, + } + var req pb.Query + if err := q.toProto(&req, internal.FullyQualifiedAppID(c)); err != nil { + t.err = err + return t + } + if err := internal.Call(c, "datastore_v3", "RunQuery", &req, &t.res); err != nil { + t.err = err + return t + } + offset := q.offset - t.res.GetSkippedResults() + var count int32 + if t.count > 0 && (t.limit < 0 || t.count < t.limit) { + count = t.count + } else { + count = t.limit + } + for offset > 0 && t.res.GetMoreResults() { + t.prevCC = t.res.CompiledCursor + if err := callNext(t.c, &t.res, offset, count); err != nil { + t.err = err + break + } + skip := t.res.GetSkippedResults() + if skip < 0 { + t.err = errors.New("datastore: internal error: negative number of skipped_results") + break + } + offset -= skip + } + if offset < 0 { + t.err = errors.New("datastore: internal error: query offset was overshot") + } + return t +} + +// Iterator is the result of running a query. +type Iterator struct { + c context.Context + err error + // res is the result of the most recent RunQuery or Next API call. + res pb.QueryResult + // i is how many elements of res.Result we have iterated over. + i int + // limit is the limit on the number of results this iterator should return. + // A negative value means unlimited. + limit int32 + // count is the number of results this iterator should fetch at once. This + // should be equal to or greater than zero. + count int32 + // q is the original query which yielded this iterator. + q *Query + // prevCC is the compiled cursor that marks the end of the previous batch + // of results. + prevCC *pb.CompiledCursor +} + +// Done is returned when a query iteration has completed. +var Done = errors.New("datastore: query has no more results") + +// Next returns the key of the next result. When there are no more results, +// Done is returned as the error. +// +// If the query is not keys only and dst is non-nil, it also loads the entity +// stored for that key into the struct pointer or PropertyLoadSaver dst, with +// the same semantics and possible errors as for the Get function. +func (t *Iterator) Next(dst interface{}) (*Key, error) { + k, e, err := t.next() + if err != nil { + return nil, err + } + if dst != nil && !t.q.keysOnly { + err = loadEntity(dst, e) + } + return k, err +} + +func (t *Iterator) next() (*Key, *pb.EntityProto, error) { + if t.err != nil { + return nil, nil, t.err + } + + // Issue datastore_v3/Next RPCs as necessary. + for t.i == len(t.res.Result) { + if !t.res.GetMoreResults() { + t.err = Done + return nil, nil, t.err + } + t.prevCC = t.res.CompiledCursor + var count int32 + if t.count > 0 && (t.limit < 0 || t.count < t.limit) { + count = t.count + } else { + count = t.limit + } + if err := callNext(t.c, &t.res, 0, count); err != nil { + t.err = err + return nil, nil, t.err + } + if t.res.GetSkippedResults() != 0 { + t.err = errors.New("datastore: internal error: iterator has skipped results") + return nil, nil, t.err + } + t.i = 0 + if t.limit >= 0 { + t.limit -= int32(len(t.res.Result)) + if t.limit < 0 { + t.err = errors.New("datastore: internal error: query returned more results than the limit") + return nil, nil, t.err + } + } + } + + // Extract the key from the t.i'th element of t.res.Result. 
+ e := t.res.Result[t.i] + t.i++ + if e.Key == nil { + return nil, nil, errors.New("datastore: internal error: server did not return a key") + } + k, err := protoToKey(e.Key) + if err != nil || k.Incomplete() { + return nil, nil, errors.New("datastore: internal error: server returned an invalid key") + } + return k, e, nil +} + +// Cursor returns a cursor for the iterator's current location. +func (t *Iterator) Cursor() (Cursor, error) { + if t.err != nil && t.err != Done { + return Cursor{}, t.err + } + // If we are at either end of the current batch of results, + // return the compiled cursor at that end. + skipped := t.res.GetSkippedResults() + if t.i == 0 && skipped == 0 { + if t.prevCC == nil { + // A nil pointer (of type *pb.CompiledCursor) means no constraint: + // passing it as the end cursor of a new query means unlimited results + // (glossing over the integer limit parameter for now). + // A non-nil pointer to an empty pb.CompiledCursor means the start: + // passing it as the end cursor of a new query means 0 results. + // If prevCC was nil, then the original query had no start cursor, but + // Iterator.Cursor should return "the start" instead of unlimited. + return Cursor{&zeroCC}, nil + } + return Cursor{t.prevCC}, nil + } + if t.i == len(t.res.Result) { + return Cursor{t.res.CompiledCursor}, nil + } + // Otherwise, re-run the query offset to this iterator's position, starting from + // the most recent compiled cursor. This is done on a best-effort basis, as it + // is racy; if a concurrent process has added or removed entities, then the + // cursor returned may be inconsistent. + q := t.q.clone() + q.start = t.prevCC + q.offset = skipped + int32(t.i) + q.limit = 0 + q.keysOnly = len(q.projection) == 0 + t1 := q.Run(t.c) + _, _, err := t1.next() + if err != Done { + if err == nil { + err = fmt.Errorf("datastore: internal error: zero-limit query did not have zero results") + } + return Cursor{}, err + } + return Cursor{t1.res.CompiledCursor}, nil +} + +var zeroCC pb.CompiledCursor + +// Cursor is an iterator's position. It can be converted to and from an opaque +// string. A cursor can be used from different HTTP requests, but only with a +// query with the same kind, ancestor, filter and order constraints. +type Cursor struct { + cc *pb.CompiledCursor +} + +// String returns a base-64 string representation of a cursor. +func (c Cursor) String() string { + if c.cc == nil { + return "" + } + b, err := proto.Marshal(c.cc) + if err != nil { + // The only way to construct a Cursor with a non-nil cc field is to + // unmarshal from the byte representation. We panic if the unmarshal + // succeeds but the marshaling of the unchanged protobuf value fails. + panic(fmt.Sprintf("datastore: internal error: malformed cursor: %v", err)) + } + return strings.TrimRight(base64.URLEncoding.EncodeToString(b), "=") +} + +// Decode decodes a cursor from its base-64 string representation. 
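+//
+// Cursors are typically round-tripped as strings between requests, e.g.
+// (sketch only; cursorStr is assumed to hold a value produced by
+// Cursor.String on an earlier request):
+//
+//	cursor, err := datastore.DecodeCursor(cursorStr)
+//	if err == nil {
+//		q = q.Start(cursor)
+//	}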
+func DecodeCursor(s string) (Cursor, error) { + if s == "" { + return Cursor{&zeroCC}, nil + } + if n := len(s) % 4; n != 0 { + s += strings.Repeat("=", 4-n) + } + b, err := base64.URLEncoding.DecodeString(s) + if err != nil { + return Cursor{}, err + } + cc := &pb.CompiledCursor{} + if err := proto.Unmarshal(b, cc); err != nil { + return Cursor{}, err + } + return Cursor{cc}, nil +} diff --git a/vendor/google.golang.org/appengine/datastore/save.go b/vendor/google.golang.org/appengine/datastore/save.go new file mode 100644 index 000000000000..7b045a595568 --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/save.go @@ -0,0 +1,333 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "errors" + "fmt" + "math" + "reflect" + "time" + + "github.com/golang/protobuf/proto" + + "google.golang.org/appengine" + pb "google.golang.org/appengine/internal/datastore" +) + +func toUnixMicro(t time.Time) int64 { + // We cannot use t.UnixNano() / 1e3 because we want to handle times more than + // 2^63 nanoseconds (which is about 292 years) away from 1970, and those cannot + // be represented in the numerator of a single int64 divide. + return t.Unix()*1e6 + int64(t.Nanosecond()/1e3) +} + +func fromUnixMicro(t int64) time.Time { + return time.Unix(t/1e6, (t%1e6)*1e3).UTC() +} + +var ( + minTime = time.Unix(int64(math.MinInt64)/1e6, (int64(math.MinInt64)%1e6)*1e3) + maxTime = time.Unix(int64(math.MaxInt64)/1e6, (int64(math.MaxInt64)%1e6)*1e3) +) + +// valueToProto converts a named value to a newly allocated Property. +// The returned error string is empty on success. +func valueToProto(defaultAppID, name string, v reflect.Value, multiple bool) (p *pb.Property, errStr string) { + var ( + pv pb.PropertyValue + unsupported bool + ) + switch v.Kind() { + case reflect.Invalid: + // No-op. + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + pv.Int64Value = proto.Int64(v.Int()) + case reflect.Bool: + pv.BooleanValue = proto.Bool(v.Bool()) + case reflect.String: + pv.StringValue = proto.String(v.String()) + case reflect.Float32, reflect.Float64: + pv.DoubleValue = proto.Float64(v.Float()) + case reflect.Ptr: + if k, ok := v.Interface().(*Key); ok { + if k != nil { + pv.Referencevalue = keyToReferenceValue(defaultAppID, k) + } + } else { + unsupported = true + } + case reflect.Struct: + switch t := v.Interface().(type) { + case time.Time: + if t.Before(minTime) || t.After(maxTime) { + return nil, "time value out of range" + } + pv.Int64Value = proto.Int64(toUnixMicro(t)) + case appengine.GeoPoint: + if !t.Valid() { + return nil, "invalid GeoPoint value" + } + // NOTE: Strangely, latitude maps to X, longitude to Y. + pv.Pointvalue = &pb.PropertyValue_PointValue{X: &t.Lat, Y: &t.Lng} + default: + unsupported = true + } + case reflect.Slice: + if b, ok := v.Interface().([]byte); ok { + pv.StringValue = proto.String(string(b)) + } else { + // nvToProto should already catch slice values. + // If we get here, we have a slice of slice values. 
+ unsupported = true + } + default: + unsupported = true + } + if unsupported { + return nil, "unsupported datastore value type: " + v.Type().String() + } + p = &pb.Property{ + Name: proto.String(name), + Value: &pv, + Multiple: proto.Bool(multiple), + } + if v.IsValid() { + switch v.Interface().(type) { + case []byte: + p.Meaning = pb.Property_BLOB.Enum() + case ByteString: + p.Meaning = pb.Property_BYTESTRING.Enum() + case appengine.BlobKey: + p.Meaning = pb.Property_BLOBKEY.Enum() + case time.Time: + p.Meaning = pb.Property_GD_WHEN.Enum() + case appengine.GeoPoint: + p.Meaning = pb.Property_GEORSS_POINT.Enum() + } + } + return p, "" +} + +type saveOpts struct { + noIndex bool + multiple bool + omitEmpty bool +} + +// saveEntity saves an EntityProto into a PropertyLoadSaver or struct pointer. +func saveEntity(defaultAppID string, key *Key, src interface{}) (*pb.EntityProto, error) { + var err error + var props []Property + if e, ok := src.(PropertyLoadSaver); ok { + props, err = e.Save() + } else { + props, err = SaveStruct(src) + } + if err != nil { + return nil, err + } + return propertiesToProto(defaultAppID, key, props) +} + +func saveStructProperty(props *[]Property, name string, opts saveOpts, v reflect.Value) error { + if opts.omitEmpty && isEmptyValue(v) { + return nil + } + p := Property{ + Name: name, + NoIndex: opts.noIndex, + Multiple: opts.multiple, + } + switch x := v.Interface().(type) { + case *Key: + p.Value = x + case time.Time: + p.Value = x + case appengine.BlobKey: + p.Value = x + case appengine.GeoPoint: + p.Value = x + case ByteString: + p.Value = x + default: + switch v.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + p.Value = v.Int() + case reflect.Bool: + p.Value = v.Bool() + case reflect.String: + p.Value = v.String() + case reflect.Float32, reflect.Float64: + p.Value = v.Float() + case reflect.Slice: + if v.Type().Elem().Kind() == reflect.Uint8 { + p.NoIndex = true + p.Value = v.Bytes() + } + case reflect.Struct: + if !v.CanAddr() { + return fmt.Errorf("datastore: unsupported struct field: value is unaddressable") + } + sub, err := newStructPLS(v.Addr().Interface()) + if err != nil { + return fmt.Errorf("datastore: unsupported struct field: %v", err) + } + return sub.save(props, name+".", opts) + } + } + if p.Value == nil { + return fmt.Errorf("datastore: unsupported struct field type: %v", v.Type()) + } + *props = append(*props, p) + return nil +} + +func (s structPLS) Save() ([]Property, error) { + var props []Property + if err := s.save(&props, "", saveOpts{}); err != nil { + return nil, err + } + return props, nil +} + +func (s structPLS) save(props *[]Property, prefix string, opts saveOpts) error { + for name, f := range s.codec.fields { + name = prefix + name + v := s.v.FieldByIndex(f.path) + if !v.IsValid() || !v.CanSet() { + continue + } + var opts1 saveOpts + opts1.noIndex = opts.noIndex || f.noIndex + opts1.multiple = opts.multiple + opts1.omitEmpty = f.omitEmpty // don't propagate + // For slice fields that aren't []byte, save each element. + if v.Kind() == reflect.Slice && v.Type().Elem().Kind() != reflect.Uint8 { + opts1.multiple = true + for j := 0; j < v.Len(); j++ { + if err := saveStructProperty(props, name, opts1, v.Index(j)); err != nil { + return err + } + } + continue + } + // Otherwise, save the field itself. 
+ if err := saveStructProperty(props, name, opts1, v); err != nil { + return err + } + } + return nil +} + +func propertiesToProto(defaultAppID string, key *Key, props []Property) (*pb.EntityProto, error) { + e := &pb.EntityProto{ + Key: keyToProto(defaultAppID, key), + } + if key.parent == nil { + e.EntityGroup = &pb.Path{} + } else { + e.EntityGroup = keyToProto(defaultAppID, key.root()).Path + } + prevMultiple := make(map[string]bool) + + for _, p := range props { + if pm, ok := prevMultiple[p.Name]; ok { + if !pm || !p.Multiple { + return nil, fmt.Errorf("datastore: multiple Properties with Name %q, but Multiple is false", p.Name) + } + } else { + prevMultiple[p.Name] = p.Multiple + } + + x := &pb.Property{ + Name: proto.String(p.Name), + Value: new(pb.PropertyValue), + Multiple: proto.Bool(p.Multiple), + } + switch v := p.Value.(type) { + case int64: + x.Value.Int64Value = proto.Int64(v) + case bool: + x.Value.BooleanValue = proto.Bool(v) + case string: + x.Value.StringValue = proto.String(v) + if p.NoIndex { + x.Meaning = pb.Property_TEXT.Enum() + } + case float64: + x.Value.DoubleValue = proto.Float64(v) + case *Key: + if v != nil { + x.Value.Referencevalue = keyToReferenceValue(defaultAppID, v) + } + case time.Time: + if v.Before(minTime) || v.After(maxTime) { + return nil, fmt.Errorf("datastore: time value out of range") + } + x.Value.Int64Value = proto.Int64(toUnixMicro(v)) + x.Meaning = pb.Property_GD_WHEN.Enum() + case appengine.BlobKey: + x.Value.StringValue = proto.String(string(v)) + x.Meaning = pb.Property_BLOBKEY.Enum() + case appengine.GeoPoint: + if !v.Valid() { + return nil, fmt.Errorf("datastore: invalid GeoPoint value") + } + // NOTE: Strangely, latitude maps to X, longitude to Y. + x.Value.Pointvalue = &pb.PropertyValue_PointValue{X: &v.Lat, Y: &v.Lng} + x.Meaning = pb.Property_GEORSS_POINT.Enum() + case []byte: + x.Value.StringValue = proto.String(string(v)) + x.Meaning = pb.Property_BLOB.Enum() + if !p.NoIndex { + return nil, fmt.Errorf("datastore: cannot index a []byte valued Property with Name %q", p.Name) + } + case ByteString: + x.Value.StringValue = proto.String(string(v)) + x.Meaning = pb.Property_BYTESTRING.Enum() + default: + if p.Value != nil { + return nil, fmt.Errorf("datastore: invalid Value type for a Property with Name %q", p.Name) + } + } + + if p.NoIndex { + e.RawProperty = append(e.RawProperty, x) + } else { + e.Property = append(e.Property, x) + if len(e.Property) > maxIndexedProperties { + return nil, errors.New("datastore: too many indexed properties") + } + } + } + return e, nil +} + +// isEmptyValue is taken from the encoding/json package in the standard library. 
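+// It backs the omitEmpty option handled in saveStructProperty above: when a
+// field is flagged omitEmpty and its value is empty by these rules, the
+// property is skipped rather than saved.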
+func isEmptyValue(v reflect.Value) bool { + switch v.Kind() { + case reflect.Array, reflect.Map, reflect.Slice, reflect.String: + // TODO(perfomance): Only reflect.String needed, other property types are not supported (copy/paste from json package) + return v.Len() == 0 + case reflect.Bool: + return !v.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + // TODO(perfomance): Uint* are unsupported property types - should be removed (copy/paste from json package) + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Interface, reflect.Ptr: + return v.IsNil() + case reflect.Struct: + switch x := v.Interface().(type) { + case time.Time: + return x.IsZero() + } + } + return false +} diff --git a/vendor/google.golang.org/appengine/datastore/transaction.go b/vendor/google.golang.org/appengine/datastore/transaction.go new file mode 100644 index 000000000000..2ae8428f856a --- /dev/null +++ b/vendor/google.golang.org/appengine/datastore/transaction.go @@ -0,0 +1,96 @@ +// Copyright 2011 Google Inc. All rights reserved. +// Use of this source code is governed by the Apache 2.0 +// license that can be found in the LICENSE file. + +package datastore + +import ( + "errors" + + "golang.org/x/net/context" + + "google.golang.org/appengine/internal" + pb "google.golang.org/appengine/internal/datastore" +) + +func init() { + internal.RegisterTransactionSetter(func(x *pb.Query, t *pb.Transaction) { + x.Transaction = t + }) + internal.RegisterTransactionSetter(func(x *pb.GetRequest, t *pb.Transaction) { + x.Transaction = t + }) + internal.RegisterTransactionSetter(func(x *pb.PutRequest, t *pb.Transaction) { + x.Transaction = t + }) + internal.RegisterTransactionSetter(func(x *pb.DeleteRequest, t *pb.Transaction) { + x.Transaction = t + }) +} + +// ErrConcurrentTransaction is returned when a transaction is rolled back due +// to a conflict with a concurrent transaction. +var ErrConcurrentTransaction = errors.New("datastore: concurrent transaction") + +// RunInTransaction runs f in a transaction. It calls f with a transaction +// context tc that f should use for all App Engine operations. +// +// If f returns nil, RunInTransaction attempts to commit the transaction, +// returning nil if it succeeds. If the commit fails due to a conflicting +// transaction, RunInTransaction retries f, each time with a new transaction +// context. It gives up and returns ErrConcurrentTransaction after three +// failed attempts. The number of attempts can be configured by specifying +// TransactionOptions.Attempts. +// +// If f returns non-nil, then any datastore changes will not be applied and +// RunInTransaction returns that same error. The function f is not retried. +// +// Note that when f returns, the transaction is not yet committed. Calling code +// must be careful not to assume that any of f's changes have been committed +// until RunInTransaction returns nil. +// +// Since f may be called multiple times, f should usually be idempotent. +// datastore.Get is not idempotent when unmarshaling slice fields. +// +// Nested transactions are not supported; c may not be a transaction context. 
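+//
+// A rough usage sketch (the Counter struct and the key k are assumed, not part
+// of this package):
+//
+//	err := datastore.RunInTransaction(c, func(tc context.Context) error {
+//		var x Counter
+//		if err := datastore.Get(tc, k, &x); err != nil && err != datastore.ErrNoSuchEntity {
+//			return err
+//		}
+//		x.Count++
+//		_, err := datastore.Put(tc, k, &x)
+//		return err
+//	}, nil)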
+func RunInTransaction(c context.Context, f func(tc context.Context) error, opts *TransactionOptions) error { + xg := false + if opts != nil { + xg = opts.XG + } + readOnly := false + if opts != nil { + readOnly = opts.ReadOnly + } + attempts := 3 + if opts != nil && opts.Attempts > 0 { + attempts = opts.Attempts + } + var t *pb.Transaction + var err error + for i := 0; i < attempts; i++ { + if t, err = internal.RunTransactionOnce(c, f, xg, readOnly, t); err != internal.ErrConcurrentTransaction { + return err + } + } + return ErrConcurrentTransaction +} + +// TransactionOptions are the options for running a transaction. +type TransactionOptions struct { + // XG is whether the transaction can cross multiple entity groups. In + // comparison, a single group transaction is one where all datastore keys + // used have the same root key. Note that cross group transactions do not + // have the same behavior as single group transactions. In particular, it + // is much more likely to see partially applied transactions in different + // entity groups, in global queries. + // It is valid to set XG to true even if the transaction is within a + // single entity group. + XG bool + // Attempts controls the number of retries to perform when commits fail + // due to a conflicting transaction. If omitted, it defaults to 3. + Attempts int + // ReadOnly controls whether the transaction is a read only transaction. + // Read only transactions are potentially more efficient. + ReadOnly bool +} diff --git a/vendor/gopkg.in/goracle.v2/CHANGELOG.md b/vendor/gopkg.in/goracle.v2/CHANGELOG.md deleted file mode 100644 index 7dd02ae08e3a..000000000000 --- a/vendor/gopkg.in/goracle.v2/CHANGELOG.md +++ /dev/null @@ -1,216 +0,0 @@ -# Changelog -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) -and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). - -## [Unreleased] - -## [2.15.3] - 2019-05-16 -### Changed -- ParseConnString: reorder logic to allow 'sys/... as sysdba' (without @) - -## [2.15.3] - 2019-05-16 -### Changed -- ParseConnString: reorder logic to allow 'sys/... as sysdba' (without @) - -## [2.15.2] - 2019-05-12 -### Changed -- Use time.Local if it equals with DBTIMEZONE (use DST of time.Local). - -## [2.15.1] - 2019-05-09 -### Changed -- Fix heterogenous pools (broken with 2.14.1) - -## [2.15.0] - 2019-05-09 -### Added -- Implement dataGetObject to access custom user types -- Add ObjectScanner and ObjectWriter interfaces to provide a way to load/update values from/to a struct and database object type. - -## [2.14.2] - 2019-05-07 -### Added -- Cache timezone with the pool and in the conn struct, too. - -## [2.14.1] - 2019-05-07 -- Try to get the serve DBTIMEZONE, if fails use time.Local - -## [2.14.0] - 2019-05-07 -### Changed -- Default to time.Local in DATE types when sending to DB, too. - -## [2.13.2] - 2019-05-07 -### Changed -- Default to time.Local timezone for DATE types. - -## [2.13.1] - 2019-05-06 -### Changed -- Fix 'INTERVAL DAY TO SECOND' NULL case. - -## [2.12.8] - 2019-05-02 -### Added -- NewConnector, NewSessionIniter - -## [2.12.7] - 2019-04-24 -### Changed -- ODPI-C v3.1.4 (rowcount for PL/SQL block) - -## [2.12.6] - 2019-04-12 -### Added -- Allow calling with LOB got from DB, and don't copy it - see #135. - -## [2.12.5] - 2019-04-03 -### Added -- Make it compile under Go 1.9. 
- -## [2.12.4] - 2019-03-13 -## Added -- Upgrade to ODPI-C v3.1.3 - -## [2.12.3] - 2019-02-20 -### Changed -- Use ODPI-C v3.1.1 -### Added -- Make goracle.drv implement driver.DriverContext with OpenConnector. - -## [2.12.2] - 2019-02-15 -### Changed -- Use ODPI-C v3.1.1 - -## [2.12.0] - 2019-01-21 -### Changed -- Use ODPI-C v3.1.0 - -## [2.11.2] - 2019-01-15 -### Changed -- ISOLATION LEVEL READ COMMITTED (typo) fix. - -## [2.11.1] - 2018-12-13 -### Changed -- Use C.dpiAuthMode, C.dpiStartupMode, C.dpiShutdownMode instead of C.uint - for #129. - -## [2.11.0] - 2018-12-13 -### Changed -- Do not set empty SID from ORACLE_SID/TWO_TASK environment variables, leave it to ODPI. - -### Added -- Allow PRELIM authentication to allow Startup and Shutdown. - -## [2.10.1] - 2018-11-23 -### Changed -- Don't call SET TRANSACTION if not really needed in BeginTx - if the isolation level hasn't changed. - -## [2.10.0] - 2018-11-18 -### Added -- Implement RowsNextResultSet to return implicit result sets set by DBMS_SQL.return. -- Allow using heterogeneous pools with user set with ContextWithUserPassw. - -## [2.9.1] - 2018-11-14 -### Added -- allow RETURNING with empty result set (such as UPDATE). -- Allow SELECT to return object types. - -### Changed -- fixed Number.MarshalJSON (see #112)' - -## [2.9.0] - 2018-10-12 -### Changed -- The default type for BLOB is []byte and for CLOB is a string - no need for ClobAsString() option. - -## [2.8.2] - 2018-10-01 -### Changed -- Fix the driver.Valuer handling, make it the last resort - -## [2.8.1] - 2018-09-27 -### Added -- CallTimeout option to set a per-statement OCI_ATTR_CALL_TIMEOUT. -- Allow login with " AS SYSASM", as requested in #100. - -### Changed -- Hash the password ("SECRET-sasdas=") in ConnectionParams.String(). - -## [2.8.0] - 2018-09-21 -### Added -- WrapRows wraps a driver.Rows (such as a returned cursor from a stored procedure) as an sql.Rows for easier handling. - -### Changed -- Do not allow events by default, make them opt-in with EnableEvents connection parameter - see #98. - -## [2.7.1] - 2018-09-17 -### Changed -- Inherit parent statement's Options for statements returned as sql.Out. - -## [2.7.0] - 2018-09-14 -### Changed -- Update ODPI-C to v3.0.0. - -## [2.6.0] - 2018-08-31 -### Changed -- convert named types to their underlying scalar values - see #96, using MagicTypeConversion() option. - -## [2.5.11] - 2018-08-30 -### Added -- Allow driver.Valuer as Query argument - see #94. - -## [2.5.10] - 2018-08-26 -### Changed -- use sergeymakinen/oracle-instant-client:12.2 docker for tests -- added ODPI-C and other licenses into LICENSE.md -- fill varInfo.ObjectType for better Object support - -## [2.5.9] - 2018-08-03 -### Added -- add CHANGELOG -- check that `len(dest) == len(rows.columns)` in `rows.Next(dest)` - -### Changed -- after a Break, don't release a stmt, that may fail with SIGSEGV - see #84. - -## [2.5.8] - 2018-07-27 -### Changed -- noConnectionPooling option became standaloneConnection - -## [2.5.7] - 2018-07-25 -### Added -- noConnectionPooling option to force not using a session pool - -## [2.5.6] - 2018-07-18 -### Changed -- use ODPI-C v2.4.2 -- remove all logging/printing of passwords - -## [2.5.5] - 2018-07-03 -### Added -- allow *int with nil value to be used as NULL - -## [2.5.4] - 2018-06-29 -### Added -- allow ReadOnly transactions - -## [2.5.3] - 2018-06-29 -### Changed -- decrease maxArraySize to be compilable on 32-bit architectures. 
- -### Removed -- remove C struct size Printf - -## [2.5.2] - 2018-06-22 -### Changed -- fix liveness check in statement.Close - -## [2.5.1] - 2018-06-15 -### Changed -- sid -> service_name in docs -- travis: 1.10.3 -- less embedding of structs, clearer API docs - -### Added -- support RETURNING from DML -- set timeouts on poolCreateParams - -## [2.5.0] - 2018-05-15 -### Changed -- update ODPI-C to v2.4.0 -- initialize context / load lib only on first Open, to allow import without Oracle Client installed -- use golangci-lint - - diff --git a/vendor/gopkg.in/goracle.v2/contrib/oracle-instant-client/Dockerfile b/vendor/gopkg.in/goracle.v2/contrib/oracle-instant-client/Dockerfile deleted file mode 100644 index dacc43c3afa9..000000000000 --- a/vendor/gopkg.in/goracle.v2/contrib/oracle-instant-client/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -FROM ubuntu:16.04 - -LABEL maintainer="sergey@makinen.ru" - -ENV DEBIAN_FRONTEND noninteractive - -ENV ORACLE_INSTANTCLIENT_MAJOR 12.2 -ENV ORACLE_INSTANTCLIENT_VERSION 12.2.0.1.0 -ENV ORACLE /usr/local/oracle -ENV ORACLE_HOME $ORACLE/lib/oracle/$ORACLE_INSTANTCLIENT_MAJOR/client64 -ENV LD_LIBRARY_PATH $LD_LIBRARY_PATH:$ORACLE_HOME/lib -ENV C_INCLUDE_PATH $C_INCLUDE_PATH:$ORACLE/include/oracle/$ORACLE_INSTANTCLIENT_MAJOR/client64 - -RUN apt-get update && apt-get install -y libaio1 \ - curl rpm2cpio cpio \ - && mkdir $ORACLE && TMP_DIR="$(mktemp -d)" && cd "$TMP_DIR" \ - && curl -L https://github.com/sergeymakinen/docker-oracle-instant-client/raw/assets/oracle-instantclient$ORACLE_INSTANTCLIENT_MAJOR-basic-$ORACLE_INSTANTCLIENT_VERSION-1.x86_64.rpm -o basic.rpm \ - && rpm2cpio basic.rpm | cpio -i -d -v && cp -r usr/* $ORACLE && rm -rf ./* \ - && ln -s libclntsh.so.12.1 $ORACLE/lib/oracle/$ORACLE_INSTANTCLIENT_MAJOR/client64/lib/libclntsh.so.$ORACLE_INSTANTCLIENT_MAJOR \ - && ln -s libocci.so.12.1 $ORACLE/lib/oracle/$ORACLE_INSTANTCLIENT_MAJOR/client64/lib/libocci.so.$ORACLE_INSTANTCLIENT_MAJOR \ - && curl -L https://github.com/sergeymakinen/docker-oracle-instant-client/raw/assets/oracle-instantclient$ORACLE_INSTANTCLIENT_MAJOR-devel-$ORACLE_INSTANTCLIENT_VERSION-1.x86_64.rpm -o devel.rpm \ - && rpm2cpio devel.rpm | cpio -i -d -v && cp -r usr/* $ORACLE && rm -rf "$TMP_DIR" \ - && echo "$ORACLE_HOME/lib" > /etc/ld.so.conf.d/oracle.conf && chmod o+r /etc/ld.so.conf.d/oracle.conf && ldconfig \ - && rm -rf /var/lib/apt/lists/* && apt-get purge -y --auto-remove curl rpm2cpio cpio diff --git a/vendor/gopkg.in/goracle.v2/data.go b/vendor/gopkg.in/goracle.v2/data.go deleted file mode 100644 index 4feb4308970d..000000000000 --- a/vendor/gopkg.in/goracle.v2/data.go +++ /dev/null @@ -1,281 +0,0 @@ -// Copyright 2017 Tamás Gulácsi -// -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package goracle - -/* -#include -#include "dpiImpl.h" -*/ -import "C" -import ( - "database/sql/driver" - "fmt" - "time" - "unsafe" -) - -// Data holds the data to/from Oracle. 
-type Data struct { - ObjectType ObjectType - dpiData *C.dpiData - NativeTypeNum C.dpiNativeTypeNum -} - -// IsNull returns whether the data is null. -func (d *Data) IsNull() bool { - return d == nil || d.dpiData == nil || d.dpiData.isNull == 1 -} - -// GetBool returns the bool data. -func (d *Data) GetBool() bool { - return !d.IsNull() && C.dpiData_getBool(d.dpiData) == 1 -} - -// SetBool sets the data as bool. -func (d *Data) SetBool(b bool) { - var i C.int - if b { - i = 1 - } - C.dpiData_setBool(d.dpiData, i) -} - -// GetBytes returns the []byte from the data. -func (d *Data) GetBytes() []byte { - if d.IsNull() { - return nil - } - b := C.dpiData_getBytes(d.dpiData) - return ((*[32767]byte)(unsafe.Pointer(b.ptr)))[:b.length:b.length] -} - -// SetBytes set the data as []byte. -func (d *Data) SetBytes(b []byte) { - if b == nil { - d.dpiData.isNull = 1 - return - } - C.dpiData_setBytes(d.dpiData, (*C.char)(unsafe.Pointer(&b[0])), C.uint32_t(len(b))) -} - -// GetFloat32 gets float32 from the data. -func (d *Data) GetFloat32() float32 { - if d.IsNull() { - return 0 - } - return float32(C.dpiData_getFloat(d.dpiData)) -} - -// SetFloat32 sets the data as float32. -func (d *Data) SetFloat32(f float32) { - C.dpiData_setFloat(d.dpiData, C.float(f)) -} - -// GetFloat64 gets float64 from the data. -func (d *Data) GetFloat64() float64 { - //fmt.Println("GetFloat64", d.IsNull(), d) - if d.IsNull() { - return 0 - } - return float64(C.dpiData_getDouble(d.dpiData)) -} - -// SetFloat64 sets the data as float64. -func (d *Data) SetFloat64(f float64) { - C.dpiData_setDouble(d.dpiData, C.double(f)) -} - -// GetInt64 gets int64 from the data. -func (d *Data) GetInt64() int64 { - if d.IsNull() { - return 0 - } - return int64(C.dpiData_getInt64(d.dpiData)) -} - -// SetInt64 sets the data as int64. -func (d *Data) SetInt64(i int64) { - C.dpiData_setInt64(d.dpiData, C.int64_t(i)) -} - -// GetIntervalDS gets duration as interval date-seconds from data. -func (d *Data) GetIntervalDS() time.Duration { - if d.IsNull() { - return 0 - } - ds := C.dpiData_getIntervalDS(d.dpiData) - return time.Duration(ds.days)*24*time.Hour + - time.Duration(ds.hours)*time.Hour + - time.Duration(ds.minutes)*time.Minute + - time.Duration(ds.seconds)*time.Second + - time.Duration(ds.fseconds) -} - -// SetIntervalDS sets the duration as interval date-seconds to data. -func (d *Data) SetIntervalDS(dur time.Duration) { - C.dpiData_setIntervalDS(d.dpiData, - C.int32_t(int64(dur.Hours())/24), - C.int32_t(int64(dur.Hours())%24), C.int32_t(dur.Minutes()), C.int32_t(dur.Seconds()), - C.int32_t(dur.Nanoseconds()), - ) -} - -// GetIntervalYM gets IntervalYM from the data. -func (d *Data) GetIntervalYM() IntervalYM { - if d.IsNull() { - return IntervalYM{} - } - ym := C.dpiData_getIntervalYM(d.dpiData) - return IntervalYM{Years: int(ym.years), Months: int(ym.months)} -} - -// SetIntervalYM sets IntervalYM to the data. -func (d *Data) SetIntervalYM(ym IntervalYM) { - C.dpiData_setIntervalYM(d.dpiData, C.int32_t(ym.Years), C.int32_t(ym.Months)) -} - -// GetLob gets data as Lob. -func (d *Data) GetLob() *Lob { - if d.IsNull() { - return nil - } - return &Lob{Reader: &dpiLobReader{dpiLob: C.dpiData_getLOB(d.dpiData)}} -} - -// GetObject gets Object from data. 
-func (d *Data) GetObject() *Object { - if d == nil || d.dpiData == nil { - panic("null") - } - if d.IsNull() { - return nil - } - - o := C.dpiData_getObject(d.dpiData) - if o == nil { - return nil - } - obj := &Object{dpiObject: o, ObjectType: d.ObjectType} - obj.init() - return obj -} - -// SetObject sets Object to data. -func (d *Data) SetObject(o *Object) { - C.dpiData_setObject(d.dpiData, o.dpiObject) -} - -// GetStmt gets Stmt from data. -func (d *Data) GetStmt() driver.Stmt { - if d.IsNull() { - return nil - } - return &statement{dpiStmt: C.dpiData_getStmt(d.dpiData)} -} - -// SetStmt sets Stmt to data. -func (d *Data) SetStmt(s *statement) { - C.dpiData_setStmt(d.dpiData, s.dpiStmt) -} - -// GetTime gets Time from data. -func (d *Data) GetTime() time.Time { - if d.IsNull() { - return time.Time{} - } - ts := C.dpiData_getTimestamp(d.dpiData) - return time.Date( - int(ts.year), time.Month(ts.month), int(ts.day), - int(ts.hour), int(ts.minute), int(ts.second), int(ts.fsecond), - timeZoneFor(ts.tzHourOffset, ts.tzMinuteOffset), - ) - -} - -// SetTime sets Time to data. -func (d *Data) SetTime(t time.Time) { - _, z := t.Zone() - C.dpiData_setTimestamp(d.dpiData, - C.int16_t(t.Year()), C.uint8_t(t.Month()), C.uint8_t(t.Day()), - C.uint8_t(t.Hour()), C.uint8_t(t.Minute()), C.uint8_t(t.Second()), C.uint32_t(t.Nanosecond()), - C.int8_t(z/3600), C.int8_t((z%3600)/60), - ) -} - -// GetUint64 gets data as uint64. -func (d *Data) GetUint64() uint64 { - if d.IsNull() { - return 0 - } - return uint64(C.dpiData_getUint64(d.dpiData)) -} - -// SetUint64 sets data to uint64. -func (d *Data) SetUint64(u uint64) { - C.dpiData_setUint64(d.dpiData, C.uint64_t(u)) -} - -// IntervalYM holds Years and Months as interval. -type IntervalYM struct { - Years, Months int -} - -// Get returns the contents of Data. -func (d *Data) Get() interface{} { - switch d.NativeTypeNum { - case C.DPI_NATIVE_TYPE_BOOLEAN: - return d.GetBool() - case C.DPI_NATIVE_TYPE_BYTES: - return d.GetBytes() - case C.DPI_NATIVE_TYPE_DOUBLE: - return d.GetFloat64() - case C.DPI_NATIVE_TYPE_FLOAT: - return d.GetFloat32() - case C.DPI_NATIVE_TYPE_INT64: - return d.GetInt64() - case C.DPI_NATIVE_TYPE_INTERVAL_DS: - return d.GetIntervalDS() - case C.DPI_NATIVE_TYPE_INTERVAL_YM: - return d.GetIntervalYM() - case C.DPI_NATIVE_TYPE_LOB: - return d.GetLob() - case C.DPI_NATIVE_TYPE_OBJECT: - return d.GetObject() - case C.DPI_NATIVE_TYPE_STMT: - return d.GetStmt() - case C.DPI_NATIVE_TYPE_TIMESTAMP: - return d.GetTime() - case C.DPI_NATIVE_TYPE_UINT64: - return d.GetUint64() - default: - panic(fmt.Sprintf("unknown NativeTypeNum=%d", d.NativeTypeNum)) - } -} - -// IsObject returns whether the data contains an Object or not. -func (d *Data) IsObject() bool { - return d.NativeTypeNum == C.DPI_NATIVE_TYPE_OBJECT -} - -func (d *Data) reset() { - d.NativeTypeNum = 0 - d.ObjectType = ObjectType{} - if d.dpiData == nil { - d.dpiData = &C.dpiData{} - } else { - d.SetBytes(nil) - } -} diff --git a/vendor/gopkg.in/goracle.v2/drv.go b/vendor/gopkg.in/goracle.v2/drv.go deleted file mode 100644 index 89cee453dff9..000000000000 --- a/vendor/gopkg.in/goracle.v2/drv.go +++ /dev/null @@ -1,912 +0,0 @@ -// Copyright 2019 Tamás Gulácsi -// -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Package goracle is a database/sql/driver for Oracle DB. -// -// The connection string for the sql.Open("goracle", connString) call can be -// the simple -// login/password@sid [AS SYSDBA|AS SYSOPER] -// -// type (with sid being the sexp returned by tnsping), -// or in the form of -// ora://login:password@sid/? \ -// sysdba=0& \ -// sysoper=0& \ -// poolMinSessions=1& \ -// poolMaxSessions=1000& \ -// poolIncrement=1& \ -// connectionClass=POOLED& \ -// standaloneConnection=0& \ -// enableEvents=0& \ -// heterogeneousPool=0& \ -// prelim=0 -// -// These are the defaults. Many advocate that a static session pool (min=max, incr=0) -// is better, with 1-10 sessions per CPU thread. -// See http://docs.oracle.com/cd/E82638_01/JJUCP/optimizing-real-world-performance.htm#JJUCP-GUID-BC09F045-5D80-4AF5-93F5-FEF0531E0E1D -// You may also use ConnectionParams to configure a connection. -// -// If you specify connectionClass, that'll reuse the same session pool -// without the connectionClass, but will specify it on each session acquire. -// Thus you can cluster the session pool with classes, or use POOLED for DRCP. -package goracle - -/* -#cgo CFLAGS: -I./odpi/include -I./odpi/src -I./odpi/embed - -#include - -#include "dpi.c" -*/ -import "C" - -import ( - "context" - "database/sql" - "database/sql/driver" - "encoding/base64" - "fmt" - "hash/fnv" - "io" - "math" - "net/url" - "strconv" - "strings" - "sync" - "time" - "unsafe" - - "github.com/pkg/errors" -) - -const ( - // DefaultFetchRowCount is the number of prefetched rows by default (if not changed through FetchRowCount statement option). - DefaultFetchRowCount = 1 << 8 - - // DefaultArraySize is the length of the maximum PL/SQL array by default (if not changed through ArraySize statement option). - DefaultArraySize = 1 << 10 -) - -const ( - // DpiMajorVersion is the wanted major version of the underlying ODPI-C library. - DpiMajorVersion = C.DPI_MAJOR_VERSION - // DpiMinorVersion is the wanted minor version of the underlying ODPI-C library. - DpiMinorVersion = C.DPI_MINOR_VERSION - - // DriverName is set on the connection to be seen in the DB - DriverName = "gopkg.in/goracle.v2 : " + Version - - // DefaultPoolMinSessions specifies the default value for minSessions for pool creation. - DefaultPoolMinSessions = 1 - // DefaultPoolMaxSessions specifies the default value for maxSessions for pool creation. - DefaultPoolMaxSessions = 1000 - // DefaultPoolIncrement specifies the default value for increment for pool creation. - DefaultPoolIncrement = 1 - // DefaultConnectionClass is the default connectionClass - DefaultConnectionClass = "GORACLE" - // NoConnectionPoolingConnectionClass is a special connection class name to indicate no connection pooling. - // It is the same as setting standaloneConnection=1 - NoConnectionPoolingConnectionClass = "NO-CONNECTION-POOLING" -) - -// Number as string -type Number string - -var ( - // Int64 for converting to-from int64. - Int64 = intType{} - // Float64 for converting to-from float64. 
- Float64 = floatType{} - // Num for converting to-from Number (string) - Num = numType{} -) - -type intType struct{} - -func (intType) String() string { return "Int64" } -func (intType) ConvertValue(v interface{}) (driver.Value, error) { - if Log != nil { - Log("ConvertValue", "Int64", "value", v) - } - switch x := v.(type) { - case int8: - return int64(x), nil - case int16: - return int64(x), nil - case int32: - return int64(x), nil - case int64: - return x, nil - case uint16: - return int64(x), nil - case uint32: - return int64(x), nil - case uint64: - return int64(x), nil - case float32: - if _, f := math.Modf(float64(x)); f != 0 { - return int64(x), errors.Errorf("non-zero fractional part: %f", f) - } - return int64(x), nil - case float64: - if _, f := math.Modf(x); f != 0 { - return int64(x), errors.Errorf("non-zero fractional part: %f", f) - } - return int64(x), nil - case string: - if x == "" { - return 0, nil - } - return strconv.ParseInt(x, 10, 64) - case Number: - if x == "" { - return 0, nil - } - return strconv.ParseInt(string(x), 10, 64) - default: - return nil, errors.Errorf("unknown type %T", v) - } -} - -type floatType struct{} - -func (floatType) String() string { return "Float64" } -func (floatType) ConvertValue(v interface{}) (driver.Value, error) { - if Log != nil { - Log("ConvertValue", "Float64", "value", v) - } - switch x := v.(type) { - case int8: - return float64(x), nil - case int16: - return float64(x), nil - case int32: - return float64(x), nil - case uint16: - return float64(x), nil - case uint32: - return float64(x), nil - case int64: - return float64(x), nil - case uint64: - return float64(x), nil - case float32: - return float64(x), nil - case float64: - return x, nil - case string: - if x == "" { - return 0, nil - } - return strconv.ParseFloat(x, 64) - case Number: - if x == "" { - return 0, nil - } - return strconv.ParseFloat(string(x), 64) - default: - return nil, errors.Errorf("unknown type %T", v) - } -} - -type numType struct{} - -func (numType) String() string { return "Num" } -func (numType) ConvertValue(v interface{}) (driver.Value, error) { - if Log != nil { - Log("ConvertValue", "Num", "value", v) - } - switch x := v.(type) { - case string: - if x == "" { - return 0, nil - } - return x, nil - case Number: - if x == "" { - return 0, nil - } - return string(x), nil - case int8, int16, int32, int64, uint16, uint32, uint64: - return fmt.Sprintf("%d", x), nil - case float32, float64: - return fmt.Sprintf("%f", x), nil - default: - return nil, errors.Errorf("unknown type %T", v) - } -} -func (n Number) String() string { return string(n) } - -// Value returns the Number as driver.Value -func (n Number) Value() (driver.Value, error) { - return string(n), nil -} - -// Scan into the Number from a driver.Value. -func (n *Number) Scan(v interface{}) error { - if v == nil { - *n = "" - return nil - } - switch x := v.(type) { - case string: - *n = Number(x) - case Number: - *n = x - case int8, int16, int32, int64, uint16, uint32, uint64: - *n = Number(fmt.Sprintf("%d", x)) - case float32, float64: - *n = Number(fmt.Sprintf("%f", x)) - default: - return errors.Errorf("unknown type %T", v) - } - return nil -} - -// MarshalText marshals a Number to text. -func (n Number) MarshalText() ([]byte, error) { return []byte(n), nil } - -// UnmarshalText parses text into a Number. -func (n *Number) UnmarshalText(p []byte) error { - var dotNum int - for i, c := range p { - if !(c == '-' && i == 0 || '0' <= c && c <= '9') { - if c == '.' 
{ - dotNum++ - if dotNum == 1 { - continue - } - } - return errors.Errorf("unknown char %c in %q", c, p) - } - } - *n = Number(p) - return nil -} - -// MarshalJSON marshals a Number into a JSON string. -func (n Number) MarshalJSON() ([]byte, error) { - b, err := n.MarshalText() - b2 := make([]byte, 1, 1+len(b)+1) - b2[0] = '"' - b2 = append(b2, b...) - b2 = append(b2, '"') - return b2, err -} - -// UnmarshalJSON parses a JSON string into the Number. -func (n *Number) UnmarshalJSON(p []byte) error { - *n = Number("") - if len(p) == 0 { - return nil - } - if len(p) > 2 && p[0] == '"' && p[len(p)-1] == '"' { - p = p[1 : len(p)-1] - } - return n.UnmarshalText(p) -} - -// Log function. By default, it's nil, and thus logs nothing. -// If you want to change this, change it to a github.com/go-kit/kit/log.Swapper.Log -// or analog to be race-free. -var Log func(...interface{}) error - -var defaultDrv *drv - -func init() { - defaultDrv = newDrv() - sql.Register("goracle", defaultDrv) -} - -func newDrv() *drv { - return &drv{pools: make(map[string]*connPool)} -} - -var _ = driver.Driver((*drv)(nil)) - -type drv struct { - clientVersion VersionInfo - mu sync.Mutex - dpiContext *C.dpiContext - pools map[string]*connPool -} - -type connPool struct { - dpiPool *C.dpiPool - serverVersion VersionInfo - timeZone *time.Location - tzOffSecs int -} - -func (d *drv) init() error { - d.mu.Lock() - defer d.mu.Unlock() - if d.dpiContext != nil { - return nil - } - var errInfo C.dpiErrorInfo - var dpiCtx *C.dpiContext - if C.dpiContext_create(C.uint(DpiMajorVersion), C.uint(DpiMinorVersion), - (**C.dpiContext)(unsafe.Pointer(&dpiCtx)), &errInfo, - ) == C.DPI_FAILURE { - return fromErrorInfo(errInfo) - } - d.dpiContext = dpiCtx - - var v C.dpiVersionInfo - if C.dpiContext_getClientVersion(d.dpiContext, &v) == C.DPI_FAILURE { - return errors.Wrap(d.getError(), "getClientVersion") - } - d.clientVersion.set(&v) - return nil -} - -// Open returns a new connection to the database. -// The name is a string in a driver-specific format. -func (d *drv) Open(connString string) (driver.Conn, error) { - P, err := ParseConnString(connString) - if err != nil { - return nil, err - } - - conn, err := d.openConn(P) - return conn, maybeBadConn(err) -} - -func (d *drv) ClientVersion() (VersionInfo, error) { - return d.clientVersion, nil -} - -func (d *drv) openConn(P ConnectionParams) (*conn, error) { - if err := d.init(); err != nil { - return nil, err - } - - c := conn{drv: d, connParams: P} - connString := P.String() - - defer func() { - d.mu.Lock() - if Log != nil { - Log("pools", d.pools, "conn", P.String()) - } - d.mu.Unlock() - }() - - authMode := C.dpiAuthMode(C.DPI_MODE_AUTH_DEFAULT) - // OR all the modes together - for _, elt := range []struct { - Is bool - Mode C.dpiAuthMode - }{ - {P.IsSysDBA, C.DPI_MODE_AUTH_SYSDBA}, - {P.IsSysOper, C.DPI_MODE_AUTH_SYSOPER}, - {P.IsSysASM, C.DPI_MODE_AUTH_SYSASM}, - {P.IsPrelim, C.DPI_MODE_AUTH_PRELIM}, - } { - if elt.Is { - authMode |= elt.Mode - } - } - if P.IsPrelim { - // The shared memory may not exist when Oracle is shut down. 
- P.ConnClass = "" - } - - extAuth := C.int(b2i(P.Username == "" && P.Password == "")) - var connCreateParams C.dpiConnCreateParams - if C.dpiContext_initConnCreateParams(d.dpiContext, &connCreateParams) == C.DPI_FAILURE { - return nil, errors.Wrap(d.getError(), "initConnCreateParams") - } - connCreateParams.authMode = authMode - connCreateParams.externalAuth = extAuth - if P.ConnClass != "" { - cConnClass := C.CString(P.ConnClass) - defer C.free(unsafe.Pointer(cConnClass)) - connCreateParams.connectionClass = cConnClass - connCreateParams.connectionClassLength = C.uint32_t(len(P.ConnClass)) - } - if !(P.IsSysDBA || P.IsSysOper || P.IsSysASM || P.IsPrelim || P.StandaloneConnection) { - d.mu.Lock() - dp := d.pools[connString] - d.mu.Unlock() - if dp != nil { - //Proxy authenticated connections to database will be provided by methods with context - c.Client, c.Server = d.clientVersion, dp.serverVersion - c.timeZone, c.tzOffSecs = dp.timeZone, dp.tzOffSecs - if err := c.acquireConn("", ""); err != nil { - return nil, err - } - err := c.init() - if err == nil { - dp.serverVersion = c.Server - dp.timeZone, dp.tzOffSecs = c.timeZone, c.tzOffSecs - } - return &c, err - } - } - - var cUserName, cPassword *C.char - if !(P.Username == "" && P.Password == "") { - cUserName, cPassword = C.CString(P.Username), C.CString(P.Password) - } - var cSid *C.char - if P.SID != "" { - cSid = C.CString(P.SID) - } - cUTF8, cConnClass := C.CString("AL32UTF8"), C.CString(P.ConnClass) - cDriverName := C.CString(DriverName) - defer func() { - if cUserName != nil { - C.free(unsafe.Pointer(cUserName)) - C.free(unsafe.Pointer(cPassword)) - } - if cSid != nil { - C.free(unsafe.Pointer(cSid)) - } - C.free(unsafe.Pointer(cUTF8)) - C.free(unsafe.Pointer(cConnClass)) - C.free(unsafe.Pointer(cDriverName)) - }() - var commonCreateParams C.dpiCommonCreateParams - if C.dpiContext_initCommonCreateParams(d.dpiContext, &commonCreateParams) == C.DPI_FAILURE { - return nil, errors.Wrap(d.getError(), "initCommonCreateParams") - } - commonCreateParams.createMode = C.DPI_MODE_CREATE_DEFAULT | C.DPI_MODE_CREATE_THREADED - if P.EnableEvents { - commonCreateParams.createMode |= C.DPI_MODE_CREATE_EVENTS - } - commonCreateParams.encoding = cUTF8 - commonCreateParams.nencoding = cUTF8 - commonCreateParams.driverName = cDriverName - commonCreateParams.driverNameLength = C.uint32_t(len(DriverName)) - - if P.IsSysDBA || P.IsSysOper || P.IsSysASM || P.IsPrelim || P.StandaloneConnection { - dc := C.malloc(C.sizeof_void) - if Log != nil { - Log("C", "dpiConn_create", "params", P.String(), "common", commonCreateParams, "conn", connCreateParams) - } - if C.dpiConn_create( - d.dpiContext, - cUserName, C.uint32_t(len(P.Username)), - cPassword, C.uint32_t(len(P.Password)), - cSid, C.uint32_t(len(P.SID)), - &commonCreateParams, - &connCreateParams, - (**C.dpiConn)(unsafe.Pointer(&dc)), - ) == C.DPI_FAILURE { - C.free(unsafe.Pointer(dc)) - return nil, errors.Wrapf(d.getError(), "username=%q sid=%q params=%+v", P.Username, P.SID, connCreateParams) - } - c.dpiConn = (*C.dpiConn)(dc) - c.currentUser = P.Username - c.newSession = true - err := c.init() - return &c, err - } - var poolCreateParams C.dpiPoolCreateParams - if C.dpiContext_initPoolCreateParams(d.dpiContext, &poolCreateParams) == C.DPI_FAILURE { - return nil, errors.Wrap(d.getError(), "initPoolCreateParams") - } - poolCreateParams.minSessions = C.uint32_t(P.MinSessions) - poolCreateParams.maxSessions = C.uint32_t(P.MaxSessions) - poolCreateParams.sessionIncrement = C.uint32_t(P.PoolIncrement) - if 
extAuth == 1 || P.HeterogeneousPool { - poolCreateParams.homogeneous = 0 - } - poolCreateParams.externalAuth = extAuth - poolCreateParams.getMode = C.DPI_MODE_POOL_GET_TIMEDWAIT - poolCreateParams.timeout = 300 // seconds before idle pool sessions got evicted - poolCreateParams.waitTimeout = 3 * 1000 // milliseconds to wait for a session become available - poolCreateParams.maxLifetimeSession = 3600 // maximum time in seconds till a pooled session may exist - - var dp *C.dpiPool - if Log != nil { - Log("C", "dpiPool_create", "username", P.Username, "sid", P.SID, "common", commonCreateParams, "pool", poolCreateParams) - } - //fmt.Println("POOL create", connString) - if C.dpiPool_create( - d.dpiContext, - cUserName, C.uint32_t(len(P.Username)), - cPassword, C.uint32_t(len(P.Password)), - cSid, C.uint32_t(len(P.SID)), - &commonCreateParams, - &poolCreateParams, - (**C.dpiPool)(unsafe.Pointer(&dp)), - ) == C.DPI_FAILURE { - return nil, errors.Wrapf(d.getError(), "params=%s extAuth=%v", P.String(), extAuth) - } - C.dpiPool_setStmtCacheSize(dp, 40) - d.mu.Lock() - d.pools[connString] = &connPool{dpiPool: dp} - d.mu.Unlock() - - return d.openConn(P) -} - -func (c *conn) acquireConn(user, pass string) error { - var connCreateParams C.dpiConnCreateParams - if C.dpiContext_initConnCreateParams(c.dpiContext, &connCreateParams) == C.DPI_FAILURE { - return errors.Wrap(c.getError(), "initConnCreateParams") - } - - dc := C.malloc(C.sizeof_void) - if Log != nil { - Log("C", "dpiPool_acquirePoolConnection", "conn", connCreateParams) - } - var cUserName, cPassword *C.char - defer func() { - if cUserName != nil { - C.free(unsafe.Pointer(cUserName)) - } - if cPassword != nil { - C.free(unsafe.Pointer(cPassword)) - } - }() - if user != "" { - cUserName = C.CString(user) - } - if pass != "" { - cPassword = C.CString(pass) - } - - c.drv.mu.Lock() - pool := c.pools[c.connParams.String()] - c.drv.mu.Unlock() - if C.dpiPool_acquireConnection( - pool.dpiPool, - cUserName, C.uint32_t(len(user)), cPassword, C.uint32_t(len(pass)), - &connCreateParams, - (**C.dpiConn)(unsafe.Pointer(&dc)), - ) == C.DPI_FAILURE { - C.free(unsafe.Pointer(dc)) - return errors.Wrapf(c.getError(), "acquirePoolConnection") - } - - c.dpiConn = (*C.dpiConn)(dc) - c.currentUser = user - c.newSession = connCreateParams.outNewSession == 1 - c.Client, c.Server = c.drv.clientVersion, pool.serverVersion - c.timeZone, c.tzOffSecs = pool.timeZone, pool.tzOffSecs - err := c.init() - if err == nil { - pool.serverVersion = c.Server - pool.timeZone, pool.tzOffSecs = c.timeZone, c.tzOffSecs - } - - return err -} - -// ConnectionParams holds the params for a connection (pool). -// You can use ConnectionParams{...}.StringWithPassword() -// as a connection string in sql.Open. -type ConnectionParams struct { - Username, Password, SID, ConnClass string - MinSessions, MaxSessions, PoolIncrement int - IsSysDBA, IsSysOper, IsSysASM, IsPrelim bool - HeterogeneousPool bool - StandaloneConnection bool - EnableEvents bool -} - -// String returns the string representation of ConnectionParams. -// The password is replaced with a "SECRET" string! -func (P ConnectionParams) String() string { - return P.string(true, false) -} - -// StringNoClass returns the string representation of ConnectionParams, without class info. -// The password is replaced with a "SECRET" string! 
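// Editorial illustration, not part of the vendored goracle sources: a minimal
// sketch of how the connection-string forms handled by ParseConnString (and
// produced by ConnectionParams.StringWithPassword) could be passed to sql.Open
// with the "goracle" driver name registered above. The credentials and DSN
// ("scott", "tiger", "dbhost/orclpdb1") are placeholder assumptions.
//
//	// "user/password@sid" form, optionally suffixed with " AS SYSDBA" or ":POOLED":
//	db, err := sql.Open("goracle", "scott/tiger@dbhost/orclpdb1")
//
//	// URL form, with pool options carried as query parameters:
//	db, err = sql.Open("goracle",
//		"oracle://scott:tiger@dbhost/orclpdb1?poolMinSessions=1&poolMaxSessions=8")
//
//	// Or let ConnectionParams build the string:
//	p := ConnectionParams{Username: "scott", Password: "tiger", SID: "dbhost/orclpdb1"}
//	db, err = sql.Open("goracle", p.StringWithPassword())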
-func (P ConnectionParams) StringNoClass() string { - return P.string(false, false) -} - -// StringWithPassword returns the string representation of ConnectionParams (as String() does), -// but does NOT obfuscate the password, just prints it as is. -func (P ConnectionParams) StringWithPassword() string { - return P.string(true, true) -} - -func (P ConnectionParams) string(class, withPassword bool) string { - host, path := P.SID, "" - if i := strings.IndexByte(host, '/'); i >= 0 { - host, path = host[:i], host[i:] - } - cc := "" - if class { - cc = fmt.Sprintf("connectionClass=%s&", url.QueryEscape(P.ConnClass)) - } - // params should be sorted lexicographically - password := P.Password - if !withPassword { - hsh := fnv.New64() - io.WriteString(hsh, P.Password) - password = "SECRET-" + base64.URLEncoding.EncodeToString(hsh.Sum(nil)) - } - return (&url.URL{ - Scheme: "oracle", - User: url.UserPassword(P.Username, password), - Host: host, - Path: path, - RawQuery: cc + - fmt.Sprintf("poolIncrement=%d&poolMaxSessions=%d&poolMinSessions=%d&"+ - "sysdba=%d&sysoper=%d&sysasm=%d&"+ - "standaloneConnection=%d&enableEvents=%d&"+ - "heterogeneousPool=%d&prelim=%d", - P.PoolIncrement, P.MaxSessions, P.MinSessions, - b2i(P.IsSysDBA), b2i(P.IsSysOper), b2i(P.IsSysASM), - b2i(P.StandaloneConnection), b2i(P.EnableEvents), - b2i(P.HeterogeneousPool), b2i(P.IsPrelim), - ), - }).String() -} - -// ParseConnString parses the given connection string into a struct. -func ParseConnString(connString string) (ConnectionParams, error) { - P := ConnectionParams{ - MinSessions: DefaultPoolMinSessions, - MaxSessions: DefaultPoolMaxSessions, - PoolIncrement: DefaultPoolIncrement, - ConnClass: DefaultConnectionClass, - } - if !strings.HasPrefix(connString, "oracle://") { - i := strings.IndexByte(connString, '/') - if i < 0 { - return P, errors.Errorf("no '/' in connection string") - } - P.Username, connString = connString[:i], connString[i+1:] - - uSid := strings.ToUpper(connString) - //fmt.Printf("connString=%q SID=%q\n", connString, uSid) - if strings.Contains(uSid, " AS ") { - if P.IsSysDBA = strings.HasSuffix(uSid, " AS SYSDBA"); P.IsSysDBA { - connString = connString[:len(connString)-10] - } else if P.IsSysOper = strings.HasSuffix(uSid, " AS SYSOPER"); P.IsSysOper { - connString = connString[:len(connString)-11] - } else if P.IsSysASM = strings.HasSuffix(uSid, " AS SYSASM"); P.IsSysASM { - connString = connString[:len(connString)-10] - } - } - if i = strings.IndexByte(connString, '@'); i >= 0 { - P.Password, P.SID = connString[:i], connString[i+1:] - } else { - P.Password = connString - } - if strings.HasSuffix(P.SID, ":POOLED") { - P.ConnClass, P.SID = "POOLED", P.SID[:len(P.SID)-7] - } - //fmt.Printf("connString=%q params=%s\n", connString, P) - return P, nil - } - u, err := url.Parse(connString) - if err != nil { - return P, errors.Wrap(err, connString) - } - if usr := u.User; usr != nil { - P.Username = usr.Username() - P.Password, _ = usr.Password() - } - P.SID = u.Hostname() - if u.Port() != "" { - P.SID += ":" + u.Port() - } - if u.Path != "" && u.Path != "/" { - P.SID += u.Path - } - q := u.Query() - if vv, ok := q["connectionClass"]; ok { - P.ConnClass = vv[0] - } - for _, task := range []struct { - Dest *bool - Key string - }{ - {&P.IsSysDBA, "sysdba"}, - {&P.IsSysOper, "sysoper"}, - {&P.IsSysASM, "sysasm"}, - {&P.IsPrelim, "prelim"}, - - {&P.StandaloneConnection, "standaloneConnection"}, - {&P.EnableEvents, "enableEvents"}, - {&P.HeterogeneousPool, "heterogeneousPool"}, - } { - *task.Dest = 
q.Get(task.Key) == "1" - } - P.StandaloneConnection = P.StandaloneConnection || P.ConnClass == NoConnectionPoolingConnectionClass - if P.IsPrelim { - P.ConnClass = "" - } - - for _, task := range []struct { - Dest *int - Key string - }{ - {&P.MinSessions, "poolMinSessions"}, - {&P.MaxSessions, "poolMaxSessions"}, - {&P.PoolIncrement, "poolIncrement"}, - } { - s := q.Get(task.Key) - if s == "" { - continue - } - var err error - *task.Dest, err = strconv.Atoi(s) - if err != nil { - return P, errors.Wrap(err, task.Key+"="+s) - } - } - if P.MinSessions > P.MaxSessions { - P.MinSessions = P.MaxSessions - } - if P.MinSessions == P.MaxSessions { - P.PoolIncrement = 0 - } else if P.PoolIncrement < 1 { - P.PoolIncrement = 1 - } - - return P, nil -} - -// OraErr is an error holding the ORA-01234 code and the message. -type OraErr struct { - message string - code int -} - -var _ = error((*OraErr)(nil)) - -// Code returns the OraErr's error code. -func (oe *OraErr) Code() int { return oe.code } - -// Message returns the OraErr's message. -func (oe *OraErr) Message() string { return oe.message } -func (oe *OraErr) Error() string { - msg := oe.Message() - if oe.code == 0 && msg == "" { - return "" - } - return fmt.Sprintf("ORA-%05d: %s", oe.code, oe.message) -} -func fromErrorInfo(errInfo C.dpiErrorInfo) *OraErr { - oe := OraErr{ - code: int(errInfo.code), - message: strings.TrimSpace(C.GoString(errInfo.message)), - } - if oe.code == 0 && strings.HasPrefix(oe.message, "ORA-") && - len(oe.message) > 9 && oe.message[9] == ':' { - if i, _ := strconv.Atoi(oe.message[4:9]); i > 0 { - oe.code = i - } - } - oe.message = strings.TrimPrefix(oe.message, fmt.Sprintf("ORA-%05d: ", oe.Code())) - return &oe -} - -// newErrorInfo is just for testing: testing cannot use Cgo... -func newErrorInfo(code int, message string) C.dpiErrorInfo { - return C.dpiErrorInfo{code: C.int32_t(code), message: C.CString(message)} -} - -// against deadcode -var _ = newErrorInfo - -func (d *drv) getError() *OraErr { - if d == nil || d.dpiContext == nil { - return &OraErr{code: -12153, message: driver.ErrBadConn.Error()} - } - var errInfo C.dpiErrorInfo - C.dpiContext_getError(d.dpiContext, &errInfo) - return fromErrorInfo(errInfo) -} - -func b2i(b bool) uint8 { - if b { - return 1 - } - return 0 -} - -// VersionInfo holds version info returned by Oracle DB. 
-type VersionInfo struct { - ServerRelease string - Version, Release, Update, PortRelease, PortUpdate, Full int -} - -func (V *VersionInfo) set(v *C.dpiVersionInfo) { - *V = VersionInfo{ - Version: int(v.versionNum), - Release: int(v.releaseNum), Update: int(v.updateNum), - PortRelease: int(v.portReleaseNum), PortUpdate: int(v.portUpdateNum), - Full: int(v.fullVersionNum), - } -} -func (V VersionInfo) String() string { - var s string - if V.ServerRelease != "" { - s = " [" + V.ServerRelease + "]" - } - return fmt.Sprintf("%d.%d.%d.%d.%d%s", V.Version, V.Release, V.Update, V.PortRelease, V.PortUpdate, s) -} - -var timezones = make(map[[2]C.int8_t]*time.Location) -var timezonesMu sync.RWMutex - -func timeZoneFor(hourOffset, minuteOffset C.int8_t) *time.Location { - if hourOffset == 0 && minuteOffset == 0 { - return time.UTC - } - key := [2]C.int8_t{hourOffset, minuteOffset} - timezonesMu.RLock() - tz := timezones[key] - timezonesMu.RUnlock() - if tz == nil { - timezonesMu.Lock() - if tz = timezones[key]; tz == nil { - tz = time.FixedZone( - fmt.Sprintf("%02d:%02d", hourOffset, minuteOffset), - int(hourOffset)*3600+int(minuteOffset)*60, - ) - timezones[key] = tz - } - timezonesMu.Unlock() - } - return tz -} - -type ctxKey string - -const logCtxKey = ctxKey("goracle.Log") - -type logFunc func(...interface{}) error - -func ctxGetLog(ctx context.Context) logFunc { - if lgr, ok := ctx.Value(logCtxKey).(func(...interface{}) error); ok { - return lgr - } - return Log -} - -// ContextWithLog returns a context with the given log function. -func ContextWithLog(ctx context.Context, logF func(...interface{}) error) context.Context { - return context.WithValue(ctx, logCtxKey, logF) -} - -func parseTZ(s string) (int, error) { - s = strings.TrimSpace(s) - if s == "" { - return 0, io.EOF - } - if s == "Z" || s == "UTC" { - return 0, nil - } - var tz int - if i := strings.IndexByte(s, ':'); i >= 0 { - if i64, err := strconv.ParseInt(s[i+1:], 10, 6); err != nil { - return tz, errors.Wrap(err, s) - } else { - tz = int(i64) - } - s = s[:i] - } - if i64, err := strconv.ParseInt(s, 10, 5); err != nil { - return tz, errors.Wrap(err, s) - } else { - if i64 < 0 { - tz = -tz - } - tz += int(i64 * 3600) - } - return tz, nil -} diff --git a/vendor/gopkg.in/goracle.v2/drv_10.go b/vendor/gopkg.in/goracle.v2/drv_10.go deleted file mode 100644 index e6fd6941aca4..000000000000 --- a/vendor/gopkg.in/goracle.v2/drv_10.go +++ /dev/null @@ -1,106 +0,0 @@ -// +build go1.10 - -// Copyright 2019 Tamás Gulácsi -// -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package goracle - -import ( - "context" - "database/sql/driver" - "fmt" - "strings" - - "github.com/pkg/errors" -) - -var _ = driver.Connector((*connector)(nil)) - -type connector struct { - ConnectionParams - *drv - onInit func(driver.Conn) error -} - -// OpenConnector must parse the name in the same format that Driver.Open -// parses the name parameter. 
-func (d *drv) OpenConnector(name string) (driver.Connector, error) { - P, err := ParseConnString(name) - if err != nil { - return nil, err - } - - return connector{ConnectionParams: P, drv: d}, nil -} - -// Connect returns a connection to the database. -// Connect may return a cached connection (one previously -// closed), but doing so is unnecessary; the sql package -// maintains a pool of idle connections for efficient re-use. -// -// The provided context.Context is for dialing purposes only -// (see net.DialContext) and should not be stored or used for -// other purposes. -// -// The returned connection is only used by one goroutine at a -// time. -func (c connector) Connect(context.Context) (driver.Conn, error) { - conn, err := c.drv.openConn(c.ConnectionParams) - if err != nil || c.onInit == nil || !conn.newSession { - return conn, err - } - if err = c.onInit(conn); err != nil { - conn.Close() - return nil, err - } - return conn, nil -} - -// Driver returns the underlying Driver of the Connector, -// mainly to maintain compatibility with the Driver method -// on sql.DB. -func (c connector) Driver() driver.Driver { return c.drv } - -// NewConnector returns a driver.Connector to be used with sql.OpenDB, -// which calls the given onInit if the connection is new. -func NewConnector(name string, onInit func(driver.Conn) error) (driver.Connector, error) { - cxr, err := defaultDrv.OpenConnector(name) - if err != nil { - return nil, err - } - cx := cxr.(connector) - cx.onInit = onInit - return cx, err -} - -// NewSessionIniter returns a function suitable for use in NewConnector as onInit, -// which calls "ALTER SESSION SET =''" for each element of the given map. -func NewSessionIniter(m map[string]string) func(driver.Conn) error { - return func(cx driver.Conn) error { - for k, v := range m { - qry := fmt.Sprintf("ALTER SESSION SET %s = '%s'", k, strings.ReplaceAll(v, "'", "''")) - st, err := cx.Prepare(qry) - if err != nil { - return errors.Wrap(err, qry) - } - _, err = st.Exec(nil) //nolint:SA1019 - st.Close() - if err != nil { - return err - } - } - return nil - } -} diff --git a/vendor/gopkg.in/goracle.v2/drv_posix.go b/vendor/gopkg.in/goracle.v2/drv_posix.go deleted file mode 100644 index 0f3de24337e1..000000000000 --- a/vendor/gopkg.in/goracle.v2/drv_posix.go +++ /dev/null @@ -1,21 +0,0 @@ -// +build !windows - -// Copyright 2017 Tamás Gulácsi -// -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
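// Editorial illustration, not part of the vendored goracle sources: a minimal
// sketch of wiring the go1.10 connector API above into database/sql. The DSN
// and the NLS_DATE_FORMAT session parameter are placeholder assumptions.
//
//	onInit := NewSessionIniter(map[string]string{"NLS_DATE_FORMAT": "YYYY-MM-DD"})
//	cx, err := NewConnector("oracle://scott:tiger@dbhost/orclpdb1", onInit)
//	if err != nil {
//		// handle error
//	}
//	db := sql.OpenDB(cx) // onInit runs only for sessions that Connect reports as new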
- -package goracle - -// #cgo LDFLAGS: -ldl -lpthread -import "C" diff --git a/vendor/gopkg.in/goracle.v2/go.mod b/vendor/gopkg.in/goracle.v2/go.mod deleted file mode 100644 index dddc925735bd..000000000000 --- a/vendor/gopkg.in/goracle.v2/go.mod +++ /dev/null @@ -1,10 +0,0 @@ -module gopkg.in/goracle.v2 - -require ( - github.com/go-kit/kit v0.8.0 - github.com/go-logfmt/logfmt v0.4.0 // indirect - github.com/go-stack/stack v1.8.0 // indirect - github.com/google/go-cmp v0.2.0 - github.com/pkg/errors v0.8.0 - golang.org/x/sync v0.0.0-20181108010431-42b317875d0f -) diff --git a/vendor/gopkg.in/goracle.v2/go.sum b/vendor/gopkg.in/goracle.v2/go.sum deleted file mode 100644 index 3bcd4a1c1177..000000000000 --- a/vendor/gopkg.in/goracle.v2/go.sum +++ /dev/null @@ -1,14 +0,0 @@ -github.com/go-kit/kit v0.8.0 h1:Wz+5lgoB0kkuqLEc6NVmwRknTKP6dTGbSqvhZtBI/j0= -github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= -github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80nA= -github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= -github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk= -github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= -github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515 h1:T+h1c/A9Gawja4Y9mFVWj2vyii2bbUNDw3kt9VxK2EY= -github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= -github.com/pkg/errors v0.8.0 h1:WdK/asTD0HN+q6hsWO3/vpuAkAr+tw6aNJNDFFf0+qw= -github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= diff --git a/vendor/gopkg.in/goracle.v2/stmt_go09.go b/vendor/gopkg.in/goracle.v2/stmt_go09.go deleted file mode 100644 index 6c731f8ed649..000000000000 --- a/vendor/gopkg.in/goracle.v2/stmt_go09.go +++ /dev/null @@ -1,52 +0,0 @@ -// +build !go1.10 - -// Copyright 2017 Tamás Gulácsi -// -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -package goracle - -/* -#include -#include "dpiImpl.h" -*/ -import "C" -import ( - "bytes" - "sync" - "unsafe" -) - -const go10 = false - -func dpiSetFromString(dv *C.dpiVar, pos C.uint32_t, x string) { - b := []byte(x) - C.dpiVar_setFromBytes(dv, pos, (*C.char)(unsafe.Pointer(&b[0])), C.uint32_t(len(b))) -} - -var stringBuilders = stringBuilderPool{ - p: &sync.Pool{New: func() interface{} { return bytes.NewBuffer(make([]byte, 0, 1024)) }}, -} - -type stringBuilderPool struct { - p *sync.Pool -} - -func (sb stringBuilderPool) Get() *bytes.Buffer { - return sb.p.Get().(*bytes.Buffer) -} -func (sb *stringBuilderPool) Put(b *bytes.Buffer) { - b.Reset() - sb.p.Put(b) -} diff --git a/vendor/gopkg.in/goracle.v2/stmt_go10.go b/vendor/gopkg.in/goracle.v2/stmt_go10.go deleted file mode 100644 index 5bc1da80073b..000000000000 --- a/vendor/gopkg.in/goracle.v2/stmt_go10.go +++ /dev/null @@ -1,78 +0,0 @@ -// +build go1.10 - -// Copyright 2017 Tamás Gulácsi -// -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package goracle - -/* -#include -#include "dpiImpl.h" - -void goracle_setFromString(dpiVar *dv, uint32_t pos, const _GoString_ value) { - uint32_t length; - length = _GoStringLen(value); - if( length == 0 ) { - return; - } - dpiVar_setFromBytes(dv, pos, _GoStringPtr(value), length); -} -*/ -import "C" -import ( - //"context" - "strings" - "sync" -) - -const go10 = true - -func dpiSetFromString(dv *C.dpiVar, pos C.uint32_t, x string) { - C.goracle_setFromString(dv, pos, x) -} - -var stringBuilders = stringBuilderPool{ - p: &sync.Pool{New: func() interface{} { return &strings.Builder{} }}, -} - -type stringBuilderPool struct { - p *sync.Pool -} - -func (sb stringBuilderPool) Get() *strings.Builder { - return sb.p.Get().(*strings.Builder) -} -func (sb *stringBuilderPool) Put(b *strings.Builder) { - b.Reset() - sb.p.Put(b) -} - -/* -// ResetSession is called while a connection is in the connection -// pool. No queries will run on this connection until this method returns. -// -// If the connection is bad this should return driver.ErrBadConn to prevent -// the connection from being returned to the connection pool. Any other -// error will be discarded. -func (c *conn) ResetSession(ctx context.Context) error { - if Log != nil { - Log("msg", "ResetSession", "conn", c.dpiConn) - } - //subCtx, cancel := context.WithTimeout(ctx, 30*time.Second) - //err := c.Ping(subCtx) - //cancel() - return c.Ping(ctx) -} -*/ diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/LICENSE b/vendor/gopkg.in/vmihailenco/msgpack.v2/LICENSE new file mode 100644 index 000000000000..bafc76b689cc --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2013 The msgpack for Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. 
+ * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/Makefile b/vendor/gopkg.in/vmihailenco/msgpack.v2/Makefile new file mode 100644 index 000000000000..b62ae6a46e30 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/Makefile @@ -0,0 +1,5 @@ +all: + go test ./... + env GOOS=linux GOARCH=386 go test ./... + go test ./... -short -race + go vet diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/README.md b/vendor/gopkg.in/vmihailenco/msgpack.v2/README.md new file mode 100644 index 000000000000..12985b29d06d --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/README.md @@ -0,0 +1,69 @@ +# MessagePack encoding for Golang + +[![Build Status](https://travis-ci.org/vmihailenco/msgpack.svg?branch=v2)](https://travis-ci.org/vmihailenco/msgpack) +[![GoDoc](https://godoc.org/github.com/vmihailenco/msgpack?status.svg)](https://godoc.org/github.com/vmihailenco/msgpack) + +Supports: +- Primitives, arrays, maps, structs, time.Time and interface{}. +- Appengine *datastore.Key and datastore.Cursor. +- [CustomEncoder](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-CustomEncoder)/CustomDecoder interfaces for custom encoding. +- [Extensions](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-RegisterExt) to encode type information. +- Renaming fields via `msgpack:"my_field_name"`. +- Inlining struct fields via `msgpack:",inline"`. +- Omitting empty fields via `msgpack:",omitempty"`. +- [Map keys sorting](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#Encoder.SortMapKeys). +- Encoding/decoding all [structs as arrays](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#Encoder.StructAsArray) or [individual structs](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-Marshal--AsArray). +- Simple but very fast and efficient [queries](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#example-Decoder-Query). + +API docs: https://godoc.org/gopkg.in/vmihailenco/msgpack.v2. +Examples: https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#pkg-examples. 
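The struct-tag options listed above (`msgpack:"my_field_name"`, `,omitempty`, `,inline`) are not covered by the quickstart below, so here is a minimal sketch; the field and type names are placeholders:

```go
type Extra struct {
	Version int
}

type Item struct {
	Foo  string `msgpack:"my_field_name"` // stored under the renamed key
	Bar  string `msgpack:",omitempty"`    // omitted from the output when empty
	Meta Extra  `msgpack:",inline"`       // Extra's fields are inlined into Item
}
```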
+ +## Installation + +Install: + +```shell +go get gopkg.in/vmihailenco/msgpack.v2 +``` + +## Quickstart + +```go +func ExampleMarshal() { + type Item struct { + Foo string + } + + b, err := msgpack.Marshal(&Item{Foo: "bar"}) + if err != nil { + panic(err) + } + + var item Item + err = msgpack.Unmarshal(b, &item) + if err != nil { + panic(err) + } + fmt.Println(item.Foo) + // Output: bar +} +``` + +## Benchmark + +``` +BenchmarkStructVmihailencoMsgpack-4 200000 12814 ns/op 2128 B/op 26 allocs/op +BenchmarkStructUgorjiGoMsgpack-4 100000 17678 ns/op 3616 B/op 70 allocs/op +BenchmarkStructUgorjiGoCodec-4 100000 19053 ns/op 7346 B/op 23 allocs/op +BenchmarkStructJSON-4 20000 69438 ns/op 7864 B/op 26 allocs/op +BenchmarkStructGOB-4 10000 104331 ns/op 14664 B/op 278 allocs/op +``` + +## Howto + +Please go through [examples](https://godoc.org/gopkg.in/vmihailenco/msgpack.v2#pkg-examples) to get an idea how to use this package. + +## See also + +- [Golang PostgreSQL ORM](https://github.com/go-pg/pg) +- [Golang message task queue](https://github.com/go-msgqueue/msgqueue) diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/appengine.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/appengine.go new file mode 100644 index 000000000000..43614247fc54 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/appengine.go @@ -0,0 +1,69 @@ +// +build appengine + +package msgpack + +import ( + "reflect" + + ds "google.golang.org/appengine/datastore" +) + +var ( + keyPtrType = reflect.TypeOf((*ds.Key)(nil)) + cursorType = reflect.TypeOf((*ds.Cursor)(nil)).Elem() +) + +func init() { + Register(keyPtrType, encodeDatastoreKeyValue, decodeDatastoreKeyValue) + Register(cursorType, encodeDatastoreCursorValue, decodeDatastoreCursorValue) +} + +func EncodeDatastoreKey(e *Encoder, key *ds.Key) error { + if key == nil { + return e.EncodeNil() + } + return e.EncodeString(key.Encode()) +} + +func encodeDatastoreKeyValue(e *Encoder, v reflect.Value) error { + key := v.Interface().(*ds.Key) + return EncodeDatastoreKey(e, key) +} + +func DecodeDatastoreKey(d *Decoder) (*ds.Key, error) { + v, err := d.DecodeString() + if err != nil { + return nil, err + } + if v == "" { + return nil, nil + } + return ds.DecodeKey(v) +} + +func decodeDatastoreKeyValue(d *Decoder, v reflect.Value) error { + key, err := DecodeDatastoreKey(d) + if err != nil { + return err + } + v.Set(reflect.ValueOf(key)) + return nil +} + +func encodeDatastoreCursorValue(e *Encoder, v reflect.Value) error { + cursor := v.Interface().(ds.Cursor) + return e.Encode(cursor.String()) +} + +func decodeDatastoreCursorValue(d *Decoder, v reflect.Value) error { + s, err := d.DecodeString() + if err != nil { + return err + } + cursor, err := ds.DecodeCursor(s) + if err != nil { + return err + } + v.Set(reflect.ValueOf(cursor)) + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/codes/codes.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/codes/codes.go new file mode 100644 index 000000000000..b2b12074ee0d --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/codes/codes.go @@ -0,0 +1,76 @@ +package codes + +var ( + PosFixedNumHigh byte = 0x7f + NegFixedNumLow byte = 0xe0 + + Nil byte = 0xc0 + + False byte = 0xc2 + True byte = 0xc3 + + Float byte = 0xca + Double byte = 0xcb + + Uint8 byte = 0xcc + Uint16 byte = 0xcd + Uint32 byte = 0xce + Uint64 byte = 0xcf + + Int8 byte = 0xd0 + Int16 byte = 0xd1 + Int32 byte = 0xd2 + Int64 byte = 0xd3 + + FixedStrLow byte = 0xa0 + FixedStrHigh byte = 0xbf + FixedStrMask byte = 0x1f + Str8 byte = 0xd9 + Str16 byte = 0xda + 
Str32 byte = 0xdb + + Bin8 byte = 0xc4 + Bin16 byte = 0xc5 + Bin32 byte = 0xc6 + + FixedArrayLow byte = 0x90 + FixedArrayHigh byte = 0x9f + FixedArrayMask byte = 0xf + Array16 byte = 0xdc + Array32 byte = 0xdd + + FixedMapLow byte = 0x80 + FixedMapHigh byte = 0x8f + FixedMapMask byte = 0xf + Map16 byte = 0xde + Map32 byte = 0xdf + + FixExt1 byte = 0xd4 + FixExt2 byte = 0xd5 + FixExt4 byte = 0xd6 + FixExt8 byte = 0xd7 + FixExt16 byte = 0xd8 + Ext8 byte = 0xc7 + Ext16 byte = 0xc8 + Ext32 byte = 0xc9 +) + +func IsFixedNum(c byte) bool { + return c <= PosFixedNumHigh || c >= NegFixedNumLow +} + +func IsFixedMap(c byte) bool { + return c >= FixedMapLow && c <= FixedMapHigh +} + +func IsFixedArray(c byte) bool { + return c >= FixedArrayLow && c <= FixedArrayHigh +} + +func IsFixedString(c byte) bool { + return c >= FixedStrLow && c <= FixedStrHigh +} + +func IsExt(c byte) bool { + return (c >= FixExt1 && c <= FixExt16) || (c >= Ext8 && c <= Ext32) +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode.go new file mode 100644 index 000000000000..7e11b40d4152 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode.go @@ -0,0 +1,425 @@ +package msgpack + +import ( + "bufio" + "bytes" + "errors" + "fmt" + "io" + "reflect" + "time" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +const bytesAllocLimit = 1024 * 1024 // 1mb + +type bufReader interface { + Read([]byte) (int, error) + ReadByte() (byte, error) + UnreadByte() error +} + +func newBufReader(r io.Reader) bufReader { + if br, ok := r.(bufReader); ok { + return br + } + return bufio.NewReader(r) +} + +func makeBuffer() []byte { + return make([]byte, 0, 64) +} + +// Unmarshal decodes the MessagePack-encoded data and stores the result +// in the value pointed to by v. +func Unmarshal(data []byte, v ...interface{}) error { + return NewDecoder(bytes.NewReader(data)).Decode(v...) 
+} + +type Decoder struct { + DecodeMapFunc func(*Decoder) (interface{}, error) + + r bufReader + buf []byte + + extLen int + rec []byte // accumulates read data if not nil +} + +func NewDecoder(r io.Reader) *Decoder { + return &Decoder{ + DecodeMapFunc: decodeMap, + + r: newBufReader(r), + buf: makeBuffer(), + } +} + +func (d *Decoder) Reset(r io.Reader) error { + d.r = newBufReader(r) + return nil +} + +func (d *Decoder) Decode(v ...interface{}) error { + for _, vv := range v { + if err := d.decode(vv); err != nil { + return err + } + } + return nil +} + +func (d *Decoder) decode(dst interface{}) error { + var err error + switch v := dst.(type) { + case *string: + if v != nil { + *v, err = d.DecodeString() + return err + } + case *[]byte: + if v != nil { + return d.decodeBytesPtr(v) + } + case *int: + if v != nil { + *v, err = d.DecodeInt() + return err + } + case *int8: + if v != nil { + *v, err = d.DecodeInt8() + return err + } + case *int16: + if v != nil { + *v, err = d.DecodeInt16() + return err + } + case *int32: + if v != nil { + *v, err = d.DecodeInt32() + return err + } + case *int64: + if v != nil { + *v, err = d.DecodeInt64() + return err + } + case *uint: + if v != nil { + *v, err = d.DecodeUint() + return err + } + case *uint8: + if v != nil { + *v, err = d.DecodeUint8() + return err + } + case *uint16: + if v != nil { + *v, err = d.DecodeUint16() + return err + } + case *uint32: + if v != nil { + *v, err = d.DecodeUint32() + return err + } + case *uint64: + if v != nil { + *v, err = d.DecodeUint64() + return err + } + case *bool: + if v != nil { + *v, err = d.DecodeBool() + return err + } + case *float32: + if v != nil { + *v, err = d.DecodeFloat32() + return err + } + case *float64: + if v != nil { + *v, err = d.DecodeFloat64() + return err + } + case *[]string: + return d.decodeStringSlicePtr(v) + case *map[string]string: + return d.decodeMapStringStringPtr(v) + case *map[string]interface{}: + return d.decodeMapStringInterfacePtr(v) + case *time.Duration: + if v != nil { + vv, err := d.DecodeInt64() + *v = time.Duration(vv) + return err + } + case *time.Time: + if v != nil { + *v, err = d.DecodeTime() + return err + } + } + + v := reflect.ValueOf(dst) + if !v.IsValid() { + return errors.New("msgpack: Decode(nil)") + } + if v.Kind() != reflect.Ptr { + return fmt.Errorf("msgpack: Decode(nonsettable %T)", dst) + } + v = v.Elem() + if !v.IsValid() { + return fmt.Errorf("msgpack: Decode(nonsettable %T)", dst) + } + return d.DecodeValue(v) +} + +func (d *Decoder) DecodeValue(v reflect.Value) error { + decode := getDecoder(v.Type()) + return decode(d, v) +} + +func (d *Decoder) DecodeNil() error { + c, err := d.readByte() + if err != nil { + return err + } + if c != codes.Nil { + return fmt.Errorf("msgpack: invalid code %x decoding nil", c) + } + return nil +} + +func (d *Decoder) DecodeBool() (bool, error) { + c, err := d.readByte() + if err != nil { + return false, err + } + return d.bool(c) +} + +func (d *Decoder) bool(c byte) (bool, error) { + if c == codes.False { + return false, nil + } + if c == codes.True { + return true, nil + } + return false, fmt.Errorf("msgpack: invalid code %x decoding bool", c) +} + +func (d *Decoder) interfaceValue(v reflect.Value) error { + vv, err := d.DecodeInterface() + if err != nil { + return err + } + if vv != nil { + if v.Type() == errorType { + if vv, ok := vv.(string); ok { + v.Set(reflect.ValueOf(errors.New(vv))) + return nil + } + } + + v.Set(reflect.ValueOf(vv)) + } + return nil +} + +// DecodeInterface decodes value into interface. 
Possible value types are: +// - nil, +// - bool, +// - int64 for negative numbers, +// - uint64 for positive numbers, +// - float32 and float64, +// - string, +// - slices of any of the above, +// - maps of any of the above. +func (d *Decoder) DecodeInterface() (interface{}, error) { + c, err := d.readByte() + if err != nil { + return nil, err + } + + if codes.IsFixedNum(c) { + if int8(c) < 0 { + return d.int(c) + } + return d.uint(c) + } + if codes.IsFixedMap(c) { + d.r.UnreadByte() + return d.DecodeMap() + } + if codes.IsFixedArray(c) { + return d.decodeSlice(c) + } + if codes.IsFixedString(c) { + return d.string(c) + } + + switch c { + case codes.Nil: + return nil, nil + case codes.False, codes.True: + return d.bool(c) + case codes.Float: + return d.float32(c) + case codes.Double: + return d.float64(c) + case codes.Uint8, codes.Uint16, codes.Uint32, codes.Uint64: + return d.uint(c) + case codes.Int8, codes.Int16, codes.Int32, codes.Int64: + return d.int(c) + case codes.Bin8, codes.Bin16, codes.Bin32: + return d.bytes(c, nil) + case codes.Str8, codes.Str16, codes.Str32: + return d.string(c) + case codes.Array16, codes.Array32: + return d.decodeSlice(c) + case codes.Map16, codes.Map32: + d.r.UnreadByte() + return d.DecodeMap() + case codes.FixExt1, codes.FixExt2, codes.FixExt4, codes.FixExt8, codes.FixExt16, + codes.Ext8, codes.Ext16, codes.Ext32: + return d.ext(c) + } + + return 0, fmt.Errorf("msgpack: unknown code %x decoding interface{}", c) +} + +// Skip skips next value. +func (d *Decoder) Skip() error { + c, err := d.readByte() + if err != nil { + return err + } + + if codes.IsFixedNum(c) { + return nil + } else if codes.IsFixedMap(c) { + return d.skipMap(c) + } else if codes.IsFixedArray(c) { + return d.skipSlice(c) + } else if codes.IsFixedString(c) { + return d.skipBytes(c) + } + + switch c { + case codes.Nil, codes.False, codes.True: + return nil + case codes.Uint8, codes.Int8: + return d.skipN(1) + case codes.Uint16, codes.Int16: + return d.skipN(2) + case codes.Uint32, codes.Int32, codes.Float: + return d.skipN(4) + case codes.Uint64, codes.Int64, codes.Double: + return d.skipN(8) + case codes.Bin8, codes.Bin16, codes.Bin32: + return d.skipBytes(c) + case codes.Str8, codes.Str16, codes.Str32: + return d.skipBytes(c) + case codes.Array16, codes.Array32: + return d.skipSlice(c) + case codes.Map16, codes.Map32: + return d.skipMap(c) + case codes.FixExt1, codes.FixExt2, codes.FixExt4, codes.FixExt8, codes.FixExt16, codes.Ext8, codes.Ext16, codes.Ext32: + return d.skipExt(c) + } + + return fmt.Errorf("msgpack: unknown code %x", c) +} + +// peekCode returns next MessagePack code. See +// https://github.com/msgpack/msgpack/blob/master/spec.md#formats for details. +func (d *Decoder) PeekCode() (code byte, err error) { + code, err = d.r.ReadByte() + if err != nil { + return 0, err + } + return code, d.r.UnreadByte() +} + +func (d *Decoder) hasNilCode() bool { + code, err := d.PeekCode() + return err == nil && code == codes.Nil +} + +func (d *Decoder) readByte() (byte, error) { + c, err := d.r.ReadByte() + if err != nil { + return 0, err + } + if d.rec != nil { + d.rec = append(d.rec, c) + } + return c, nil +} + +func (d *Decoder) readFull(b []byte) error { + _, err := io.ReadFull(d.r, b) + if err != nil { + return err + } + if d.rec != nil { + d.rec = append(d.rec, b...) + } + return nil +} + +func (d *Decoder) readN(n int) ([]byte, error) { + buf, err := readN(d.r, d.buf, n) + if err != nil { + return nil, err + } + d.buf = buf + if d.rec != nil { + d.rec = append(d.rec, buf...) 
+ } + return buf, nil +} + +func readN(r io.Reader, b []byte, n int) ([]byte, error) { + if n == 0 && b == nil { + return make([]byte, 0), nil + } + + if cap(b) >= n { + b = b[:n] + _, err := io.ReadFull(r, b) + return b, err + } + b = b[:cap(b)] + + pos := 0 + for len(b) < n { + diff := n - len(b) + if diff > bytesAllocLimit { + diff = bytesAllocLimit + } + b = append(b, make([]byte, diff)...) + + _, err := io.ReadFull(r, b[pos:]) + if err != nil { + return nil, err + } + + pos = len(b) + } + + return b, nil +} + +func min(a, b int) int { + if a <= b { + return a + } + return b +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_map.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_map.go new file mode 100644 index 000000000000..21c97978d5de --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_map.go @@ -0,0 +1,265 @@ +package msgpack + +import ( + "fmt" + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +const mapElemsAllocLimit = 1e4 + +var mapStringStringPtrType = reflect.TypeOf((*map[string]string)(nil)) +var mapStringStringType = mapStringStringPtrType.Elem() + +var mapStringInterfacePtrType = reflect.TypeOf((*map[string]interface{})(nil)) +var mapStringInterfaceType = mapStringInterfacePtrType.Elem() + +func decodeMapValue(d *Decoder, v reflect.Value) error { + n, err := d.DecodeMapLen() + if err != nil { + return err + } + + typ := v.Type() + if n == -1 { + v.Set(reflect.Zero(typ)) + return nil + } + + if v.IsNil() { + v.Set(reflect.MakeMap(typ)) + } + keyType := typ.Key() + valueType := typ.Elem() + + for i := 0; i < n; i++ { + mk := reflect.New(keyType).Elem() + if err := d.DecodeValue(mk); err != nil { + return err + } + + mv := reflect.New(valueType).Elem() + if err := d.DecodeValue(mv); err != nil { + return err + } + + v.SetMapIndex(mk, mv) + } + + return nil +} +func decodeMap(d *Decoder) (interface{}, error) { + n, err := d.DecodeMapLen() + if err != nil { + return nil, err + } + if n == -1 { + return nil, nil + } + + m := make(map[interface{}]interface{}, min(n, mapElemsAllocLimit)) + for i := 0; i < n; i++ { + mk, err := d.DecodeInterface() + if err != nil { + return nil, err + } + mv, err := d.DecodeInterface() + if err != nil { + return nil, err + } + m[mk] = mv + } + return m, nil +} + +func (d *Decoder) DecodeMapLen() (int, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + + if codes.IsExt(c) { + if err = d.skipExtHeader(c); err != nil { + return 0, err + } + + c, err = d.readByte() + if err != nil { + return 0, err + } + } + + return d.mapLen(c) +} + +func (d *Decoder) mapLen(c byte) (int, error) { + if c == codes.Nil { + return -1, nil + } + if c >= codes.FixedMapLow && c <= codes.FixedMapHigh { + return int(c & codes.FixedMapMask), nil + } + if c == codes.Map16 { + n, err := d.uint16() + return int(n), err + } + if c == codes.Map32 { + n, err := d.uint32() + return int(n), err + } + return 0, fmt.Errorf("msgpack: invalid code %x decoding map length", c) +} + +func decodeMapStringStringValue(d *Decoder, v reflect.Value) error { + mptr := v.Addr().Convert(mapStringStringPtrType).Interface().(*map[string]string) + return d.decodeMapStringStringPtr(mptr) +} + +func (d *Decoder) decodeMapStringStringPtr(ptr *map[string]string) error { + n, err := d.DecodeMapLen() + if err != nil { + return err + } + if n == -1 { + *ptr = nil + return nil + } + + m := *ptr + if m == nil { + *ptr = make(map[string]string, min(n, mapElemsAllocLimit)) + m = *ptr + } + + for i := 0; i < n; i++ { + mk, err := d.DecodeString() + if err 
!= nil { + return err + } + mv, err := d.DecodeString() + if err != nil { + return err + } + m[mk] = mv + } + + return nil +} + +func decodeMapStringInterfaceValue(d *Decoder, v reflect.Value) error { + ptr := v.Addr().Convert(mapStringInterfacePtrType).Interface().(*map[string]interface{}) + return d.decodeMapStringInterfacePtr(ptr) +} + +func (d *Decoder) decodeMapStringInterfacePtr(ptr *map[string]interface{}) error { + n, err := d.DecodeMapLen() + if err != nil { + return err + } + if n == -1 { + *ptr = nil + return nil + } + + m := *ptr + if m == nil { + *ptr = make(map[string]interface{}, min(n, mapElemsAllocLimit)) + m = *ptr + } + + for i := 0; i < n; i++ { + mk, err := d.DecodeString() + if err != nil { + return err + } + mv, err := d.DecodeInterface() + if err != nil { + return err + } + m[mk] = mv + } + + return nil +} + +func (d *Decoder) DecodeMap() (interface{}, error) { + return d.DecodeMapFunc(d) +} + +func (d *Decoder) skipMap(c byte) error { + n, err := d.mapLen(c) + if err != nil { + return err + } + for i := 0; i < n; i++ { + if err := d.Skip(); err != nil { + return err + } + if err := d.Skip(); err != nil { + return err + } + } + return nil +} + +func decodeStructValue(d *Decoder, strct reflect.Value) error { + c, err := d.readByte() + if err != nil { + return err + } + + var isArray bool + + n, err := d.mapLen(c) + if err != nil { + var err2 error + n, err2 = d.arrayLen(c) + if err2 != nil { + return err + } + isArray = true + } + if n == -1 { + strct.Set(reflect.Zero(strct.Type())) + return nil + } + + fields := structs.Fields(strct.Type()) + + if isArray { + for i, f := range fields.List { + if i >= n { + break + } + if err := f.DecodeValue(d, strct); err != nil { + return err + } + } + // Skip extra values. + for i := len(fields.List); i < n; i++ { + if err := d.Skip(); err != nil { + return err + } + } + return nil + } + + for i := 0; i < n; i++ { + name, err := d.DecodeString() + if err != nil { + return err + } + if f := fields.Table[name]; f != nil { + if err := f.DecodeValue(d, strct); err != nil { + return err + } + } else { + if err := d.Skip(); err != nil { + return err + } + } + } + + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_number.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_number.go new file mode 100644 index 000000000000..587634161c13 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_number.go @@ -0,0 +1,270 @@ +package msgpack + +import ( + "fmt" + "math" + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +func (d *Decoder) skipN(n int) error { + _, err := d.readN(n) + return err +} + +func (d *Decoder) uint8() (uint8, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return uint8(c), nil +} + +func (d *Decoder) uint16() (uint16, error) { + b, err := d.readN(2) + if err != nil { + return 0, err + } + return (uint16(b[0]) << 8) | uint16(b[1]), nil +} + +func (d *Decoder) uint32() (uint32, error) { + b, err := d.readN(4) + if err != nil { + return 0, err + } + n := (uint32(b[0]) << 24) | + (uint32(b[1]) << 16) | + (uint32(b[2]) << 8) | + uint32(b[3]) + return n, nil +} + +func (d *Decoder) uint64() (uint64, error) { + b, err := d.readN(8) + if err != nil { + return 0, err + } + n := (uint64(b[0]) << 56) | + (uint64(b[1]) << 48) | + (uint64(b[2]) << 40) | + (uint64(b[3]) << 32) | + (uint64(b[4]) << 24) | + (uint64(b[5]) << 16) | + (uint64(b[6]) << 8) | + uint64(b[7]) + return n, nil +} + +func (d *Decoder) DecodeUint64() (uint64, error) { + c, err := d.readByte() + 
if err != nil { + return 0, err + } + return d.uint(c) +} + +func (d *Decoder) uint(c byte) (uint64, error) { + if c == codes.Nil { + return 0, nil + } + if codes.IsFixedNum(c) { + return uint64(int8(c)), nil + } + switch c { + case codes.Uint8: + n, err := d.uint8() + return uint64(n), err + case codes.Int8: + n, err := d.uint8() + return uint64(int8(n)), err + case codes.Uint16: + n, err := d.uint16() + return uint64(n), err + case codes.Int16: + n, err := d.uint16() + return uint64(int16(n)), err + case codes.Uint32: + n, err := d.uint32() + return uint64(n), err + case codes.Int32: + n, err := d.uint32() + return uint64(int32(n)), err + case codes.Uint64, codes.Int64: + return d.uint64() + } + return 0, fmt.Errorf("msgpack: invalid code %x decoding uint64", c) +} + +func (d *Decoder) DecodeInt64() (int64, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.int(c) +} + +func (d *Decoder) int(c byte) (int64, error) { + if c == codes.Nil { + return 0, nil + } + if codes.IsFixedNum(c) { + return int64(int8(c)), nil + } + switch c { + case codes.Uint8: + n, err := d.uint8() + return int64(n), err + case codes.Int8: + n, err := d.uint8() + return int64(int8(n)), err + case codes.Uint16: + n, err := d.uint16() + return int64(n), err + case codes.Int16: + n, err := d.uint16() + return int64(int16(n)), err + case codes.Uint32: + n, err := d.uint32() + return int64(n), err + case codes.Int32: + n, err := d.uint32() + return int64(int32(n)), err + case codes.Uint64, codes.Int64: + n, err := d.uint64() + return int64(n), err + } + return 0, fmt.Errorf("msgpack: invalid code %x decoding int64", c) +} + +func (d *Decoder) DecodeFloat32() (float32, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.float32(c) +} + +func (d *Decoder) float32(c byte) (float32, error) { + if c == codes.Float { + n, err := d.uint32() + if err != nil { + return 0, err + } + return math.Float32frombits(n), nil + } + + n, err := d.int(c) + if err != nil { + return 0, fmt.Errorf("msgpack: invalid code %x decoding float32", c) + } + return float32(n), nil +} + +func (d *Decoder) DecodeFloat64() (float64, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.float64(c) +} + +func (d *Decoder) float64(c byte) (float64, error) { + switch c { + case codes.Float: + n, err := d.float32(c) + if err != nil { + return 0, err + } + return float64(n), nil + case codes.Double: + n, err := d.uint64() + if err != nil { + return 0, err + } + return math.Float64frombits(n), nil + } + + n, err := d.int(c) + if err != nil { + return 0, fmt.Errorf("msgpack: invalid code %x decoding float32", c) + } + return float64(n), nil +} + +func (d *Decoder) DecodeUint() (uint, error) { + n, err := d.DecodeUint64() + return uint(n), err +} + +func (d *Decoder) DecodeUint8() (uint8, error) { + n, err := d.DecodeUint64() + return uint8(n), err +} + +func (d *Decoder) DecodeUint16() (uint16, error) { + n, err := d.DecodeUint64() + return uint16(n), err +} + +func (d *Decoder) DecodeUint32() (uint32, error) { + n, err := d.DecodeUint64() + return uint32(n), err +} + +func (d *Decoder) DecodeInt() (int, error) { + n, err := d.DecodeInt64() + return int(n), err +} + +func (d *Decoder) DecodeInt8() (int8, error) { + n, err := d.DecodeInt64() + return int8(n), err +} + +func (d *Decoder) DecodeInt16() (int16, error) { + n, err := d.DecodeInt64() + return int16(n), err +} + +func (d *Decoder) DecodeInt32() (int32, error) { + n, err := d.DecodeInt64() + return int32(n), err 
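// Editorial worked example, not part of the vendored msgpack sources: the
// unsigned helpers above assemble big-endian integers, most significant byte
// first. A stream beginning 0xcd 0x01 0x02 is the Uint16 code (0xcd) followed
// by the two payload bytes, so DecodeUint64 returns (0x01<<8)|0x02 = 258.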
+} + +func decodeFloat32Value(d *Decoder, v reflect.Value) error { + f, err := d.DecodeFloat32() + if err != nil { + return err + } + v.SetFloat(float64(f)) + return nil +} + +func decodeFloat64Value(d *Decoder, v reflect.Value) error { + f, err := d.DecodeFloat64() + if err != nil { + return err + } + v.SetFloat(f) + return nil +} + +func decodeInt64Value(d *Decoder, v reflect.Value) error { + n, err := d.DecodeInt64() + if err != nil { + return err + } + v.SetInt(n) + return nil +} + +func decodeUint64Value(d *Decoder, v reflect.Value) error { + n, err := d.DecodeUint64() + if err != nil { + return err + } + v.SetUint(n) + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_query.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_query.go new file mode 100644 index 000000000000..18d9d89b9381 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_query.go @@ -0,0 +1,158 @@ +package msgpack + +import ( + "fmt" + "strconv" + "strings" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +type queryResult struct { + query string + key string + hasAsterisk bool + + values []interface{} +} + +func (q *queryResult) nextKey() { + ind := strings.IndexByte(q.query, '.') + if ind == -1 { + q.key = q.query + q.query = "" + return + } + q.key = q.query[:ind] + q.query = q.query[ind+1:] +} + +// Query extracts data specified by the query from the msgpack stream skipping +// any other data. Query consists of map keys and array indexes separated with dot, +// e.g. key1.0.key2. +func (d *Decoder) Query(query string) ([]interface{}, error) { + res := queryResult{ + query: query, + } + if err := d.query(&res); err != nil { + return nil, err + } + return res.values, nil +} + +func (d *Decoder) query(q *queryResult) error { + q.nextKey() + if q.key == "" { + v, err := d.DecodeInterface() + if err != nil { + return err + } + q.values = append(q.values, v) + return nil + } + + code, err := d.PeekCode() + if err != nil { + return err + } + + switch { + case code == codes.Map16 || code == codes.Map32 || codes.IsFixedMap(code): + err = d.queryMapKey(q) + case code == codes.Array16 || code == codes.Array32 || codes.IsFixedArray(code): + err = d.queryArrayIndex(q) + default: + err = fmt.Errorf("msgpack: unsupported code=%x decoding key=%q", code, q.key) + } + return err +} + +func (d *Decoder) queryMapKey(q *queryResult) error { + n, err := d.DecodeMapLen() + if err != nil { + return err + } + if n == -1 { + return nil + } + + for i := 0; i < n; i++ { + k, err := d.bytesNoCopy() + if err != nil { + return err + } + + if string(k) == q.key { + if err := d.query(q); err != nil { + return err + } + if q.hasAsterisk { + return d.skipNext((n - i - 1) * 2) + } + return nil + } + + if err := d.Skip(); err != nil { + return err + } + } + + return nil +} + +func (d *Decoder) queryArrayIndex(q *queryResult) error { + n, err := d.DecodeSliceLen() + if err != nil { + return err + } + if n == -1 { + return nil + } + + if q.key == "*" { + q.hasAsterisk = true + + query := q.query + for i := 0; i < n; i++ { + q.query = query + if err := d.query(q); err != nil { + return err + } + } + + q.hasAsterisk = false + return nil + } + + ind, err := strconv.Atoi(q.key) + if err != nil { + return err + } + + for i := 0; i < n; i++ { + if i == ind { + if err := d.query(q); err != nil { + return err + } + if q.hasAsterisk { + return d.skipNext(n - i - 1) + } + return nil + } + + if err := d.Skip(); err != nil { + return err + } + } + + return nil +} + +func (d *Decoder) skipNext(n int) error { + for i := 0; i < n; 
i++ { + if err := d.Skip(); err != nil { + return err + } + } + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_slice.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_slice.go new file mode 100644 index 000000000000..61e3da2bd447 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_slice.go @@ -0,0 +1,197 @@ +package msgpack + +import ( + "fmt" + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +const sliceElemsAllocLimit = 1e4 + +var sliceStringPtrType = reflect.TypeOf((*[]string)(nil)) + +// Deprecated. Use DecodeArrayLen instead. +func (d *Decoder) DecodeSliceLen() (int, error) { + return d.DecodeArrayLen() +} + +func (d *Decoder) DecodeArrayLen() (int, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.arrayLen(c) +} + +func (d *Decoder) arrayLen(c byte) (int, error) { + if c == codes.Nil { + return -1, nil + } else if c >= codes.FixedArrayLow && c <= codes.FixedArrayHigh { + return int(c & codes.FixedArrayMask), nil + } + switch c { + case codes.Array16: + n, err := d.uint16() + return int(n), err + case codes.Array32: + n, err := d.uint32() + return int(n), err + } + return 0, fmt.Errorf("msgpack: invalid code %x decoding array length", c) +} + +func decodeStringSliceValue(d *Decoder, v reflect.Value) error { + ptr := v.Addr().Convert(sliceStringPtrType).Interface().(*[]string) + return d.decodeStringSlicePtr(ptr) +} + +func (d *Decoder) decodeStringSlicePtr(ptr *[]string) error { + n, err := d.DecodeArrayLen() + if err != nil { + return err + } + if n == -1 { + return nil + } + + ss := setStringsCap(*ptr, n) + for i := 0; i < n; i++ { + s, err := d.DecodeString() + if err != nil { + return err + } + ss = append(ss, s) + } + *ptr = ss + + return nil +} + +func setStringsCap(s []string, n int) []string { + if n > sliceElemsAllocLimit { + n = sliceElemsAllocLimit + } + + if s == nil { + return make([]string, 0, n) + } + + if cap(s) >= n { + return s[:0] + } + + s = s[:cap(s)] + s = append(s, make([]string, n-len(s))...) 
+ return s[:0] +} + +func decodeSliceValue(d *Decoder, v reflect.Value) error { + n, err := d.DecodeArrayLen() + if err != nil { + return err + } + + if n == -1 { + v.Set(reflect.Zero(v.Type())) + return nil + } + if n == 0 && v.IsNil() { + v.Set(reflect.MakeSlice(v.Type(), 0, 0)) + return nil + } + + if v.Cap() >= n { + v.Set(v.Slice(0, n)) + } else if v.Len() < v.Cap() { + v.Set(v.Slice(0, v.Cap())) + } + + for i := 0; i < n; i++ { + if i >= v.Len() { + v.Set(growSliceValue(v, n)) + } + sv := v.Index(i) + if err := d.DecodeValue(sv); err != nil { + return err + } + } + + return nil +} + +func growSliceValue(v reflect.Value, n int) reflect.Value { + diff := n - v.Len() + if diff > sliceElemsAllocLimit { + diff = sliceElemsAllocLimit + } + v = reflect.AppendSlice(v, reflect.MakeSlice(v.Type(), diff, diff)) + return v +} + +func decodeArrayValue(d *Decoder, v reflect.Value) error { + n, err := d.DecodeArrayLen() + if err != nil { + return err + } + + if n == -1 { + return nil + } + + if n > v.Len() { + return fmt.Errorf("%s len is %d, but msgpack has %d elements", v.Type(), v.Len(), n) + } + for i := 0; i < n; i++ { + sv := v.Index(i) + if err := d.DecodeValue(sv); err != nil { + return err + } + } + + return nil +} + +func (d *Decoder) DecodeSlice() ([]interface{}, error) { + c, err := d.readByte() + if err != nil { + return nil, err + } + return d.decodeSlice(c) +} + +func (d *Decoder) decodeSlice(c byte) ([]interface{}, error) { + n, err := d.arrayLen(c) + if err != nil { + return nil, err + } + if n == -1 { + return nil, nil + } + + s := make([]interface{}, 0, min(n, sliceElemsAllocLimit)) + for i := 0; i < n; i++ { + v, err := d.DecodeInterface() + if err != nil { + return nil, err + } + s = append(s, v) + } + + return s, nil +} + +func (d *Decoder) skipSlice(c byte) error { + n, err := d.arrayLen(c) + if err != nil { + return err + } + + for i := 0; i < n; i++ { + if err := d.Skip(); err != nil { + return err + } + } + + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_string.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_string.go new file mode 100644 index 000000000000..17c1b6118edf --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_string.go @@ -0,0 +1,168 @@ +package msgpack + +import ( + "fmt" + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +func (d *Decoder) bytesLen(c byte) (int, error) { + if c == codes.Nil { + return -1, nil + } else if codes.IsFixedString(c) { + return int(c & codes.FixedStrMask), nil + } + switch c { + case codes.Str8, codes.Bin8: + n, err := d.uint8() + return int(n), err + case codes.Str16, codes.Bin16: + n, err := d.uint16() + return int(n), err + case codes.Str32, codes.Bin32: + n, err := d.uint32() + return int(n), err + } + return 0, fmt.Errorf("msgpack: invalid code %x decoding bytes length", c) +} + +func (d *Decoder) DecodeString() (string, error) { + c, err := d.readByte() + if err != nil { + return "", err + } + return d.string(c) +} + +func (d *Decoder) string(c byte) (string, error) { + n, err := d.bytesLen(c) + if err != nil { + return "", err + } + if n == -1 { + return "", nil + } + b, err := d.readN(n) + return string(b), err +} + +func decodeStringValue(d *Decoder, v reflect.Value) error { + s, err := d.DecodeString() + if err != nil { + return err + } + v.SetString(s) + return nil +} + +func (d *Decoder) DecodeBytesLen() (int, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.bytesLen(c) +} + +func (d *Decoder) DecodeBytes() ([]byte, error) { + c, err 
:= d.readByte() + if err != nil { + return nil, err + } + return d.bytes(c, nil) +} + +func (d *Decoder) bytes(c byte, b []byte) ([]byte, error) { + n, err := d.bytesLen(c) + if err != nil { + return nil, err + } + if n == -1 { + return nil, nil + } + return readN(d.r, b, n) +} + +func (d *Decoder) bytesNoCopy() ([]byte, error) { + c, err := d.readByte() + if err != nil { + return nil, err + } + n, err := d.bytesLen(c) + if err != nil { + return nil, err + } + if n == -1 { + return nil, nil + } + return d.readN(n) +} + +func (d *Decoder) decodeBytesPtr(ptr *[]byte) error { + c, err := d.readByte() + if err != nil { + return err + } + return d.bytesPtr(c, ptr) +} + +func (d *Decoder) bytesPtr(c byte, ptr *[]byte) error { + n, err := d.bytesLen(c) + if err != nil { + return err + } + if n == -1 { + *ptr = nil + return nil + } + + *ptr, err = readN(d.r, *ptr, n) + return err +} + +func (d *Decoder) skipBytes(c byte) error { + n, err := d.bytesLen(c) + if err != nil { + return err + } + if n == -1 { + return nil + } + return d.skipN(n) +} + +func decodeBytesValue(d *Decoder, v reflect.Value) error { + c, err := d.readByte() + if err != nil { + return err + } + + b, err := d.bytes(c, v.Bytes()) + if err != nil { + return err + } + v.SetBytes(b) + + return nil +} + +func decodeByteArrayValue(d *Decoder, v reflect.Value) error { + c, err := d.readByte() + if err != nil { + return err + } + + n, err := d.bytesLen(c) + if err != nil { + return err + } + if n == -1 { + return nil + } + if n > v.Len() { + return fmt.Errorf("%s len is %d, but msgpack has %d elements", v.Type(), v.Len(), n) + } + + b := v.Slice(0, n).Bytes() + return d.readFull(b) +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_value.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_value.go new file mode 100644 index 000000000000..e239296238f8 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/decode_value.go @@ -0,0 +1,248 @@ +package msgpack + +import ( + "fmt" + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +var interfaceType = reflect.TypeOf((*interface{})(nil)).Elem() +var stringType = reflect.TypeOf((*string)(nil)).Elem() + +var valueDecoders []decoderFunc + +func init() { + valueDecoders = []decoderFunc{ + reflect.Bool: decodeBoolValue, + reflect.Int: decodeInt64Value, + reflect.Int8: decodeInt64Value, + reflect.Int16: decodeInt64Value, + reflect.Int32: decodeInt64Value, + reflect.Int64: decodeInt64Value, + reflect.Uint: decodeUint64Value, + reflect.Uint8: decodeUint64Value, + reflect.Uint16: decodeUint64Value, + reflect.Uint32: decodeUint64Value, + reflect.Uint64: decodeUint64Value, + reflect.Float32: decodeFloat32Value, + reflect.Float64: decodeFloat64Value, + reflect.Complex64: decodeUnsupportedValue, + reflect.Complex128: decodeUnsupportedValue, + reflect.Array: decodeArrayValue, + reflect.Chan: decodeUnsupportedValue, + reflect.Func: decodeUnsupportedValue, + reflect.Interface: decodeInterfaceValue, + reflect.Map: decodeMapValue, + reflect.Ptr: decodeUnsupportedValue, + reflect.Slice: decodeSliceValue, + reflect.String: decodeStringValue, + reflect.Struct: decodeStructValue, + reflect.UnsafePointer: decodeUnsupportedValue, + } +} + +func getDecoder(typ reflect.Type) decoderFunc { + kind := typ.Kind() + + if decoder, ok := typDecMap[typ]; ok { + return decoder + } + + if typ.Implements(customDecoderType) { + return decodeCustomValue + } + if typ.Implements(unmarshalerType) { + return unmarshalValue + } + + // Addressable struct field value. 
+ if kind != reflect.Ptr { + ptr := reflect.PtrTo(typ) + if ptr.Implements(customDecoderType) { + return decodeCustomValueAddr + } + if ptr.Implements(unmarshalerType) { + return unmarshalValueAddr + } + } + + switch kind { + case reflect.Ptr: + return ptrDecoderFunc(typ) + case reflect.Slice: + elem := typ.Elem() + switch elem.Kind() { + case reflect.Uint8: + return decodeBytesValue + } + switch elem { + case stringType: + return decodeStringSliceValue + } + case reflect.Array: + if typ.Elem().Kind() == reflect.Uint8 { + return decodeByteArrayValue + } + case reflect.Map: + if typ.Key() == stringType { + switch typ.Elem() { + case stringType: + return decodeMapStringStringValue + case interfaceType: + return decodeMapStringInterfaceValue + } + } + } + return valueDecoders[kind] +} + +func ptrDecoderFunc(typ reflect.Type) decoderFunc { + decoder := getDecoder(typ.Elem()) + return func(d *Decoder, v reflect.Value) error { + if d.hasNilCode() { + v.Set(reflect.Zero(v.Type())) + return d.DecodeNil() + } + if v.IsNil() { + if !v.CanSet() { + return fmt.Errorf("msgpack: Decode(nonsettable %T)", v.Interface()) + } + v.Set(reflect.New(v.Type().Elem())) + } + return decoder(d, v.Elem()) + } +} + +func decodeCustomValueAddr(d *Decoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Decode(nonsettable %T)", v.Interface()) + } + return decodeCustomValue(d, v.Addr()) +} + +func decodeCustomValue(d *Decoder, v reflect.Value) error { + c, err := d.PeekCode() + if err != nil { + return err + } + + if codes.IsExt(c) { + c, err = d.readByte() + if err != nil { + return err + } + + _, err = d.parseExtLen(c) + if err != nil { + return err + } + + _, err = d.readByte() + if err != nil { + return err + } + + c, err = d.PeekCode() + if err != nil { + return err + } + } + + if c == codes.Nil { + // TODO: set nil + return d.DecodeNil() + } + + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + + decoder := v.Interface().(CustomDecoder) + return decoder.DecodeMsgpack(d) +} + +func unmarshalValueAddr(d *Decoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Decode(nonsettable %T)", v.Interface()) + } + return unmarshalValue(d, v.Addr()) +} + +func unmarshalValue(d *Decoder, v reflect.Value) error { + c, err := d.PeekCode() + if err != nil { + return err + } + + if codes.IsExt(c) { + c, err = d.readByte() + if err != nil { + return err + } + + extLen, err := d.parseExtLen(c) + if err != nil { + return err + } + d.extLen = extLen + + _, err = d.readByte() + if err != nil { + return err + } + + c, err = d.PeekCode() + if err != nil { + return err + } + } + + if c == codes.Nil { + // TODO: set nil + return d.DecodeNil() + } + + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + + if d.extLen != 0 { + b, err := d.readN(d.extLen) + d.extLen = 0 + if err != nil { + return err + } + d.rec = b + } else { + d.rec = makeBuffer() + if err := d.Skip(); err != nil { + return err + } + } + + unmarshaler := v.Interface().(Unmarshaler) + err = unmarshaler.UnmarshalMsgpack(d.rec) + d.rec = nil + return err +} + +func decodeBoolValue(d *Decoder, v reflect.Value) error { + flag, err := d.DecodeBool() + if err != nil { + return err + } + v.SetBool(flag) + return nil +} + +func decodeInterfaceValue(d *Decoder, v reflect.Value) error { + if v.IsNil() { + return d.interfaceValue(v) + } + return d.DecodeValue(v.Elem()) +} + +func decodeUnsupportedValue(d *Decoder, v reflect.Value) error { + return fmt.Errorf("msgpack: Decode(unsupported %s)", v.Type()) +} diff --git 
a/vendor/gopkg.in/vmihailenco/msgpack.v2/encode.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode.go new file mode 100644 index 000000000000..ad10f2e9a6f1 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode.go @@ -0,0 +1,144 @@ +package msgpack + +import ( + "bytes" + "io" + "reflect" + "time" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +type writer interface { + io.Writer + WriteByte(byte) error + WriteString(string) (int, error) +} + +type byteWriter struct { + io.Writer +} + +func (w byteWriter) WriteByte(b byte) error { + _, err := w.Write([]byte{b}) + return err +} + +func (w byteWriter) WriteString(s string) (int, error) { + return w.Write([]byte(s)) +} + +// Marshal returns the MessagePack encoding of v. +func Marshal(v ...interface{}) ([]byte, error) { + var buf bytes.Buffer + err := NewEncoder(&buf).Encode(v...) + return buf.Bytes(), err +} + +type Encoder struct { + w writer + buf []byte + + sortMapKeys bool + structAsArray bool +} + +func NewEncoder(w io.Writer) *Encoder { + bw, ok := w.(writer) + if !ok { + bw = byteWriter{Writer: w} + } + return &Encoder{ + w: bw, + buf: make([]byte, 9), + } +} + +// SortMapKeys causes the Encoder to encode map keys in increasing order. +// Supported map types are: +// - map[string]string +// - map[string]interface{} +func (e *Encoder) SortMapKeys(v bool) *Encoder { + e.sortMapKeys = v + return e +} + +// StructAsArray causes the Encoder to encode Go structs as MessagePack arrays. +func (e *Encoder) StructAsArray(v bool) *Encoder { + e.structAsArray = v + return e +} + +func (e *Encoder) Encode(v ...interface{}) error { + for _, vv := range v { + if err := e.encode(vv); err != nil { + return err + } + } + return nil +} + +func (e *Encoder) encode(v interface{}) error { + switch v := v.(type) { + case nil: + return e.EncodeNil() + case string: + return e.EncodeString(v) + case []byte: + return e.EncodeBytes(v) + case int: + return e.EncodeInt64(int64(v)) + case int64: + return e.EncodeInt64(v) + case uint: + return e.EncodeUint64(uint64(v)) + case uint64: + return e.EncodeUint64(v) + case bool: + return e.EncodeBool(v) + case float32: + return e.EncodeFloat32(v) + case float64: + return e.EncodeFloat64(v) + case time.Duration: + return e.EncodeInt64(int64(v)) + case time.Time: + return e.EncodeTime(v) + } + return e.EncodeValue(reflect.ValueOf(v)) +} + +func (e *Encoder) EncodeValue(v reflect.Value) error { + encode := getEncoder(v.Type()) + return encode(e, v) +} + +func (e *Encoder) EncodeNil() error { + return e.w.WriteByte(codes.Nil) +} + +func (e *Encoder) EncodeBool(value bool) error { + if value { + return e.w.WriteByte(codes.True) + } + return e.w.WriteByte(codes.False) +} + +func (e *Encoder) write(b []byte) error { + _, err := e.w.Write(b) + if err != nil { + return err + } + return nil +} + +func (e *Encoder) writeString(s string) error { + n, err := e.w.WriteString(s) + if err != nil { + return err + } + if n < len(s) { + return io.ErrShortWrite + } + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_map.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_map.go new file mode 100644 index 000000000000..c9544ccfae22 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_map.go @@ -0,0 +1,166 @@ +package msgpack + +import ( + "reflect" + "sort" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +func encodeMapValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + + if err := e.EncodeMapLen(v.Len()); err != nil { + return err + } + + for _, key := 
range v.MapKeys() { + if err := e.EncodeValue(key); err != nil { + return err + } + if err := e.EncodeValue(v.MapIndex(key)); err != nil { + return err + } + } + + return nil +} + +func encodeMapStringStringValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + + if err := e.EncodeMapLen(v.Len()); err != nil { + return err + } + + m := v.Convert(mapStringStringType).Interface().(map[string]string) + if e.sortMapKeys { + return e.encodeSortedMapStringString(m) + } + + for mk, mv := range m { + if err := e.EncodeString(mk); err != nil { + return err + } + if err := e.EncodeString(mv); err != nil { + return err + } + } + + return nil +} + +func encodeMapStringInterfaceValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + + if err := e.EncodeMapLen(v.Len()); err != nil { + return err + } + + m := v.Convert(mapStringInterfaceType).Interface().(map[string]interface{}) + if e.sortMapKeys { + return e.encodeSortedMapStringInterface(m) + } + + for mk, mv := range m { + if err := e.EncodeString(mk); err != nil { + return err + } + if err := e.Encode(mv); err != nil { + return err + } + } + + return nil +} + +func (e *Encoder) encodeSortedMapStringString(m map[string]string) error { + keys := make([]string, 0, len(m)) + for k, _ := range m { + keys = append(keys, k) + } + sort.Strings(keys) + + for _, k := range keys { + err := e.EncodeString(k) + if err != nil { + return err + } + if err = e.EncodeString(m[k]); err != nil { + return err + } + } + + return nil +} + +func (e *Encoder) encodeSortedMapStringInterface(m map[string]interface{}) error { + keys := make([]string, 0, len(m)) + for k, _ := range m { + keys = append(keys, k) + } + sort.Strings(keys) + + for _, k := range keys { + err := e.EncodeString(k) + if err != nil { + return err + } + if err = e.Encode(m[k]); err != nil { + return err + } + } + + return nil +} + +func (e *Encoder) EncodeMapLen(l int) error { + if l < 16 { + return e.w.WriteByte(codes.FixedMapLow | byte(l)) + } + if l < 65536 { + return e.write2(codes.Map16, uint64(l)) + } + return e.write4(codes.Map32, uint32(l)) +} + +func encodeStructValue(e *Encoder, strct reflect.Value) error { + structFields := structs.Fields(strct.Type()) + if e.structAsArray || structFields.asArray { + return encodeStructValueAsArray(e, strct, structFields.List) + } + fields := structFields.OmitEmpty(strct) + + if err := e.EncodeMapLen(len(fields)); err != nil { + return err + } + + for _, f := range fields { + if err := e.EncodeString(f.name); err != nil { + return err + } + if err := f.EncodeValue(e, strct); err != nil { + return err + } + } + + return nil +} + +func encodeStructValueAsArray(e *Encoder, strct reflect.Value, fields []*field) error { + if err := e.EncodeArrayLen(len(fields)); err != nil { + return err + } + for _, f := range fields { + if err := f.EncodeValue(e, strct); err != nil { + return err + } + } + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_number.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_number.go new file mode 100644 index 000000000000..347f56ca5d4d --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_number.go @@ -0,0 +1,138 @@ +package msgpack + +import ( + "math" + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +func (e *Encoder) EncodeUint(v uint) error { + return e.EncodeUint64(uint64(v)) +} + +func (e *Encoder) EncodeUint8(v uint8) error { + return e.EncodeUint64(uint64(v)) +} + +func (e *Encoder) EncodeUint16(v uint16) error { + return 
e.EncodeUint64(uint64(v)) +} + +func (e *Encoder) EncodeUint32(v uint32) error { + return e.EncodeUint64(uint64(v)) +} + +func (e *Encoder) EncodeUint64(v uint64) error { + if v <= math.MaxInt8 { + return e.w.WriteByte(byte(v)) + } + if v <= math.MaxUint8 { + return e.write1(codes.Uint8, v) + } + if v <= math.MaxUint16 { + return e.write2(codes.Uint16, v) + } + if v <= math.MaxUint32 { + return e.write4(codes.Uint32, uint32(v)) + } + return e.write8(codes.Uint64, v) +} + +func (e *Encoder) EncodeInt(v int) error { + return e.EncodeInt64(int64(v)) +} + +func (e *Encoder) EncodeInt8(v int8) error { + return e.EncodeInt64(int64(v)) +} + +func (e *Encoder) EncodeInt16(v int16) error { + return e.EncodeInt64(int64(v)) +} + +func (e *Encoder) EncodeInt32(v int32) error { + return e.EncodeInt64(int64(v)) +} + +func (e *Encoder) EncodeInt64(v int64) error { + if v >= 0 { + return e.EncodeUint64(uint64(v)) + } + if v >= int64(int8(codes.NegFixedNumLow)) { + return e.w.WriteByte(byte(v)) + } + if v >= math.MinInt8 { + return e.write1(codes.Int8, uint64(v)) + } + if v >= math.MinInt16 { + return e.write2(codes.Int16, uint64(v)) + } + if v >= math.MinInt32 { + return e.write4(codes.Int32, uint32(v)) + } + return e.write8(codes.Int64, uint64(v)) +} + +func (e *Encoder) EncodeFloat32(n float32) error { + return e.write4(codes.Float, math.Float32bits(n)) +} + +func (e *Encoder) EncodeFloat64(n float64) error { + return e.write8(codes.Double, math.Float64bits(n)) +} + +func (e *Encoder) write1(code byte, n uint64) error { + e.buf = e.buf[:2] + e.buf[0] = code + e.buf[1] = byte(n) + return e.write(e.buf) +} + +func (e *Encoder) write2(code byte, n uint64) error { + e.buf = e.buf[:3] + e.buf[0] = code + e.buf[1] = byte(n >> 8) + e.buf[2] = byte(n) + return e.write(e.buf) +} + +func (e *Encoder) write4(code byte, n uint32) error { + e.buf = e.buf[:5] + e.buf[0] = code + e.buf[1] = byte(n >> 24) + e.buf[2] = byte(n >> 16) + e.buf[3] = byte(n >> 8) + e.buf[4] = byte(n) + return e.write(e.buf) +} + +func (e *Encoder) write8(code byte, n uint64) error { + e.buf = e.buf[:9] + e.buf[0] = code + e.buf[1] = byte(n >> 56) + e.buf[2] = byte(n >> 48) + e.buf[3] = byte(n >> 40) + e.buf[4] = byte(n >> 32) + e.buf[5] = byte(n >> 24) + e.buf[6] = byte(n >> 16) + e.buf[7] = byte(n >> 8) + e.buf[8] = byte(n) + return e.write(e.buf) +} + +func encodeInt64Value(e *Encoder, v reflect.Value) error { + return e.EncodeInt64(v.Int()) +} + +func encodeUint64Value(e *Encoder, v reflect.Value) error { + return e.EncodeUint64(v.Uint()) +} + +func encodeFloat32Value(e *Encoder, v reflect.Value) error { + return e.EncodeFloat32(float32(v.Float())) +} + +func encodeFloat64Value(e *Encoder, v reflect.Value) error { + return e.EncodeFloat64(v.Float()) +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_slice.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_slice.go new file mode 100644 index 000000000000..0a4e37d986f8 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_slice.go @@ -0,0 +1,120 @@ +package msgpack + +import ( + "reflect" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +func encodeStringValue(e *Encoder, v reflect.Value) error { + return e.EncodeString(v.String()) +} + +func encodeByteSliceValue(e *Encoder, v reflect.Value) error { + return e.EncodeBytes(v.Bytes()) +} + +func encodeByteArrayValue(e *Encoder, v reflect.Value) error { + if err := e.EncodeBytesLen(v.Len()); err != nil { + return err + } + + if v.CanAddr() { + b := v.Slice(0, v.Len()).Bytes() + return e.write(b) + } + + b := make([]byte, 
v.Len()) + reflect.Copy(reflect.ValueOf(b), v) + return e.write(b) +} + +func (e *Encoder) EncodeBytesLen(l int) error { + if l < 256 { + return e.write1(codes.Bin8, uint64(l)) + } + if l < 65536 { + return e.write2(codes.Bin16, uint64(l)) + } + return e.write4(codes.Bin32, uint32(l)) +} + +func (e *Encoder) encodeStrLen(l int) error { + if l < 32 { + return e.w.WriteByte(codes.FixedStrLow | uint8(l)) + } + if l < 256 { + return e.write1(codes.Str8, uint64(l)) + } + if l < 65536 { + return e.write2(codes.Str16, uint64(l)) + } + return e.write4(codes.Str32, uint32(l)) +} + +func (e *Encoder) EncodeString(v string) error { + if err := e.encodeStrLen(len(v)); err != nil { + return err + } + return e.writeString(v) +} + +func (e *Encoder) EncodeBytes(v []byte) error { + if v == nil { + return e.EncodeNil() + } + if err := e.EncodeBytesLen(len(v)); err != nil { + return err + } + return e.write(v) +} + +func (e *Encoder) EncodeArrayLen(l int) error { + if l < 16 { + return e.w.WriteByte(codes.FixedArrayLow | byte(l)) + } + if l < 65536 { + return e.write2(codes.Array16, uint64(l)) + } + return e.write4(codes.Array32, uint32(l)) +} + +// Deprecated. Use EncodeArrayLen instead. +func (e *Encoder) EncodeSliceLen(l int) error { + return e.EncodeArrayLen(l) +} + +func (e *Encoder) encodeStringSlice(s []string) error { + if s == nil { + return e.EncodeNil() + } + if err := e.EncodeArrayLen(len(s)); err != nil { + return err + } + for _, v := range s { + if err := e.EncodeString(v); err != nil { + return err + } + } + return nil +} + +func encodeSliceValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + return encodeArrayValue(e, v) +} + +func encodeArrayValue(e *Encoder, v reflect.Value) error { + l := v.Len() + if err := e.EncodeSliceLen(l); err != nil { + return err + } + for i := 0; i < l; i++ { + if err := e.EncodeValue(v.Index(i)); err != nil { + return err + } + } + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_value.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_value.go new file mode 100644 index 000000000000..2f5a3509a85b --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/encode_value.go @@ -0,0 +1,167 @@ +package msgpack + +import ( + "fmt" + "reflect" +) + +var valueEncoders []encoderFunc + +func init() { + valueEncoders = []encoderFunc{ + reflect.Bool: encodeBoolValue, + reflect.Int: encodeInt64Value, + reflect.Int8: encodeInt64Value, + reflect.Int16: encodeInt64Value, + reflect.Int32: encodeInt64Value, + reflect.Int64: encodeInt64Value, + reflect.Uint: encodeUint64Value, + reflect.Uint8: encodeUint64Value, + reflect.Uint16: encodeUint64Value, + reflect.Uint32: encodeUint64Value, + reflect.Uint64: encodeUint64Value, + reflect.Float32: encodeFloat32Value, + reflect.Float64: encodeFloat64Value, + reflect.Complex64: encodeUnsupportedValue, + reflect.Complex128: encodeUnsupportedValue, + reflect.Array: encodeArrayValue, + reflect.Chan: encodeUnsupportedValue, + reflect.Func: encodeUnsupportedValue, + reflect.Interface: encodeInterfaceValue, + reflect.Map: encodeMapValue, + reflect.Ptr: encodeUnsupportedValue, + reflect.Slice: encodeSliceValue, + reflect.String: encodeStringValue, + reflect.Struct: encodeStructValue, + reflect.UnsafePointer: encodeUnsupportedValue, + } +} + +func getEncoder(typ reflect.Type) encoderFunc { + if encoder, ok := typEncMap[typ]; ok { + return encoder + } + + if typ.Implements(customEncoderType) { + return encodeCustomValue + } + if typ.Implements(marshalerType) { + return marshalValue + } + + 
kind := typ.Kind() + + // Addressable struct field value. + if kind != reflect.Ptr { + ptr := reflect.PtrTo(typ) + if ptr.Implements(customEncoderType) { + return encodeCustomValuePtr + } + if ptr.Implements(marshalerType) { + return marshalValuePtr + } + } + + if typ == errorType { + return encodeErrorValue + } + + switch kind { + case reflect.Ptr: + return ptrEncoderFunc(typ) + case reflect.Slice: + if typ.Elem().Kind() == reflect.Uint8 { + return encodeByteSliceValue + } + case reflect.Array: + if typ.Elem().Kind() == reflect.Uint8 { + return encodeByteArrayValue + } + case reflect.Map: + if typ.Key() == stringType { + switch typ.Elem() { + case stringType: + return encodeMapStringStringValue + case interfaceType: + return encodeMapStringInterfaceValue + } + } + } + return valueEncoders[kind] +} + +func ptrEncoderFunc(typ reflect.Type) encoderFunc { + encoder := getEncoder(typ.Elem()) + return func(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + return encoder(e, v.Elem()) + } +} + +func encodeCustomValuePtr(e *Encoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) + } + encoder := v.Addr().Interface().(CustomEncoder) + return encoder.EncodeMsgpack(e) +} + +func encodeCustomValue(e *Encoder, v reflect.Value) error { + switch v.Kind() { + case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: + if v.IsNil() { + return e.EncodeNil() + } + } + + encoder := v.Interface().(CustomEncoder) + return encoder.EncodeMsgpack(e) +} + +func marshalValuePtr(e *Encoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) + } + return marshalValue(e, v.Addr()) +} + +func marshalValue(e *Encoder, v reflect.Value) error { + switch v.Kind() { + case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: + if v.IsNil() { + return e.EncodeNil() + } + } + + marshaler := v.Interface().(Marshaler) + b, err := marshaler.MarshalMsgpack() + if err != nil { + return err + } + _, err = e.w.Write(b) + return err +} + +func encodeBoolValue(e *Encoder, v reflect.Value) error { + return e.EncodeBool(v.Bool()) +} + +func encodeInterfaceValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + return e.EncodeValue(v.Elem()) +} + +func encodeErrorValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + return e.EncodeString(v.Interface().(error).Error()) +} + +func encodeUnsupportedValue(e *Encoder, v reflect.Value) error { + return fmt.Errorf("msgpack: Encode(unsupported %s)", v.Type()) +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/ext.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/ext.go new file mode 100644 index 000000000000..37ae53dd303b --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/ext.go @@ -0,0 +1,200 @@ +package msgpack + +import ( + "bytes" + "fmt" + "reflect" + "sync" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +var extTypes []reflect.Type + +var bufferPool = &sync.Pool{ + New: func() interface{} { + return new(bytes.Buffer) + }, +} + +// RegisterExt records a type, identified by a value for that type, +// under the provided id. That id will identify the concrete type of a value +// sent or received as an interface variable. Only types that will be +// transferred as implementations of interface values need to be registered. 
+// Expecting to be used only during initialization, it panics if the mapping +// between types and ids is not a bijection. +func RegisterExt(id int8, value interface{}) { + if diff := int(id) - len(extTypes) + 1; diff > 0 { + extTypes = append(extTypes, make([]reflect.Type, diff)...) + } + + if extTypes[id] != nil { + panic(fmt.Errorf("msgpack: ext with id=%d is already registered", id)) + } + + typ := reflect.TypeOf(value) + if typ.Kind() == reflect.Ptr { + typ = typ.Elem() + } + ptr := reflect.PtrTo(typ) + + extTypes[id] = typ + decoder := getDecoder(typ) + Register(ptr, makeExtEncoder(id, getEncoder(ptr)), decoder) + Register(typ, makeExtEncoder(id, getEncoder(typ)), decoder) +} + +func makeExtEncoder(id int8, enc encoderFunc) encoderFunc { + return func(e *Encoder, v reflect.Value) error { + buf := bufferPool.Get().(*bytes.Buffer) + defer bufferPool.Put(buf) + buf.Reset() + + oldw := e.w + e.w = buf + err := enc(e, v) + e.w = oldw + + if err != nil { + return err + } + + if err := e.encodeExtLen(buf.Len()); err != nil { + return err + } + if err := e.w.WriteByte(byte(id)); err != nil { + return err + } + return e.write(buf.Bytes()) + } +} + +func (e *Encoder) encodeExtLen(l int) error { + switch l { + case 1: + return e.w.WriteByte(codes.FixExt1) + case 2: + return e.w.WriteByte(codes.FixExt2) + case 4: + return e.w.WriteByte(codes.FixExt4) + case 8: + return e.w.WriteByte(codes.FixExt8) + case 16: + return e.w.WriteByte(codes.FixExt16) + } + if l < 256 { + return e.write1(codes.Ext8, uint64(l)) + } + if l < 65536 { + return e.write2(codes.Ext16, uint64(l)) + } + return e.write4(codes.Ext32, uint32(l)) +} + +func (d *Decoder) decodeExtLen() (int, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.parseExtLen(c) +} + +func (d *Decoder) parseExtLen(c byte) (int, error) { + switch c { + case codes.FixExt1: + return 1, nil + case codes.FixExt2: + return 2, nil + case codes.FixExt4: + return 4, nil + case codes.FixExt8: + return 8, nil + case codes.FixExt16: + return 16, nil + case codes.Ext8: + n, err := d.uint8() + return int(n), err + case codes.Ext16: + n, err := d.uint16() + return int(n), err + case codes.Ext32: + n, err := d.uint32() + return int(n), err + default: + return 0, fmt.Errorf("msgpack: invalid code %x decoding ext length", c) + } +} + +func (d *Decoder) decodeExt() (interface{}, error) { + c, err := d.readByte() + if err != nil { + return 0, err + } + return d.ext(c) +} + +func (d *Decoder) ext(c byte) (interface{}, error) { + extLen, err := d.parseExtLen(c) + if err != nil { + return nil, err + } + // Save for later use. + d.extLen = extLen + + extId, err := d.readByte() + if err != nil { + return nil, err + } + + if int(extId) >= len(extTypes) { + return nil, fmt.Errorf("msgpack: unregistered ext id=%d", extId) + } + + typ := extTypes[extId] + if typ == nil { + return nil, fmt.Errorf("msgpack: unregistered ext id=%d", extId) + } + + v := reflect.New(typ).Elem() + if err := d.DecodeValue(v); err != nil { + return nil, err + } + + return v.Interface(), nil +} + +func (d *Decoder) skipExt(c byte) error { + n, err := d.parseExtLen(c) + if err != nil { + return err + } + return d.skipN(n) +} + +func (d *Decoder) skipExtHeader(c byte) error { + // Read ext type. + _, err := d.readByte() + if err != nil { + return err + } + // Read ext body len. 
+ for i := 0; i < extHeaderLen(c); i++ { + _, err := d.readByte() + if err != nil { + return err + } + } + return nil +} + +func extHeaderLen(c byte) int { + switch c { + case codes.Ext8: + return 1 + case codes.Ext16: + return 2 + case codes.Ext32: + return 4 + } + return 0 +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/msgpack.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/msgpack.go new file mode 100644 index 000000000000..dbb53161a2cf --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/msgpack.go @@ -0,0 +1,19 @@ +package msgpack // import "gopkg.in/vmihailenco/msgpack.v2" + +// Deprecated. Use CustomEncoder. +type Marshaler interface { + MarshalMsgpack() ([]byte, error) +} + +// Deprecated. Use CustomDecoder. +type Unmarshaler interface { + UnmarshalMsgpack([]byte) error +} + +type CustomEncoder interface { + EncodeMsgpack(*Encoder) error +} + +type CustomDecoder interface { + DecodeMsgpack(*Decoder) error +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/tags.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/tags.go new file mode 100644 index 000000000000..7377f115944a --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/tags.go @@ -0,0 +1,44 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package msgpack + +import ( + "strings" +) + +// tagOptions is the string following a comma in a struct field's "json" +// tag, or the empty string. It does not include the leading comma. +type tagOptions string + +// parseTag splits a struct field's json tag into its name and +// comma-separated options. +func parseTag(tag string) (string, tagOptions) { + if idx := strings.Index(tag, ","); idx != -1 { + return tag[:idx], tagOptions(tag[idx+1:]) + } + return tag, tagOptions("") +} + +// Contains reports whether a comma-separated list of options +// contains a particular substr flag. substr must be surrounded by a +// string boundary or commas. 
+func (o tagOptions) Contains(optionName string) bool { + if len(o) == 0 { + return false + } + s := string(o) + for s != "" { + var next string + i := strings.IndexRune(s, ',') + if i >= 0 { + s, next = s[:i], s[i+1:] + } + if s == optionName { + return true + } + s = next + } + return false +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/time.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/time.go new file mode 100644 index 000000000000..8728894e6afe --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/time.go @@ -0,0 +1,59 @@ +package msgpack + +import ( + "fmt" + "reflect" + "time" + + "gopkg.in/vmihailenco/msgpack.v2/codes" +) + +var timeType = reflect.TypeOf((*time.Time)(nil)).Elem() + +func init() { + Register(timeType, encodeTimeValue, decodeTimeValue) +} + +func (e *Encoder) EncodeTime(tm time.Time) error { + if err := e.w.WriteByte(codes.FixedArrayLow | 2); err != nil { + return err + } + if err := e.EncodeInt64(tm.Unix()); err != nil { + return err + } + return e.EncodeInt(tm.Nanosecond()) +} + +func (d *Decoder) DecodeTime() (time.Time, error) { + b, err := d.readByte() + if err != nil { + return time.Time{}, err + } + if b != 0x92 { + return time.Time{}, fmt.Errorf("msgpack: invalid code %x decoding time", b) + } + + sec, err := d.DecodeInt64() + if err != nil { + return time.Time{}, err + } + nsec, err := d.DecodeInt64() + if err != nil { + return time.Time{}, err + } + return time.Unix(sec, nsec), nil +} + +func encodeTimeValue(e *Encoder, v reflect.Value) error { + tm := v.Interface().(time.Time) + return e.EncodeTime(tm) +} + +func decodeTimeValue(d *Decoder, v reflect.Value) error { + tm, err := d.DecodeTime() + if err != nil { + return err + } + v.Set(reflect.ValueOf(tm)) + return nil +} diff --git a/vendor/gopkg.in/vmihailenco/msgpack.v2/types.go b/vendor/gopkg.in/vmihailenco/msgpack.v2/types.go new file mode 100644 index 000000000000..f1770b8c5ba2 --- /dev/null +++ b/vendor/gopkg.in/vmihailenco/msgpack.v2/types.go @@ -0,0 +1,214 @@ +package msgpack + +import ( + "reflect" + "sync" +) + +var errorType = reflect.TypeOf((*error)(nil)).Elem() + +var customEncoderType = reflect.TypeOf((*CustomEncoder)(nil)).Elem() +var customDecoderType = reflect.TypeOf((*CustomDecoder)(nil)).Elem() + +var marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem() +var unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem() + +type encoderFunc func(*Encoder, reflect.Value) error +type decoderFunc func(*Decoder, reflect.Value) error + +var typEncMap = make(map[reflect.Type]encoderFunc) +var typDecMap = make(map[reflect.Type]decoderFunc) + +// Register registers encoder and decoder functions for a type. +// In most cases you should prefer implementing CustomEncoder and +// CustomDecoder interfaces. 
+func Register(typ reflect.Type, enc encoderFunc, dec decoderFunc) { + if enc != nil { + typEncMap[typ] = enc + } + if dec != nil { + typDecMap[typ] = dec + } +} + +//------------------------------------------------------------------------------ + +var structs = newStructCache() + +type structCache struct { + l sync.RWMutex + m map[reflect.Type]*fields +} + +func newStructCache() *structCache { + return &structCache{ + m: make(map[reflect.Type]*fields), + } +} + +func (m *structCache) Fields(typ reflect.Type) *fields { + m.l.RLock() + fs, ok := m.m[typ] + m.l.RUnlock() + if !ok { + m.l.Lock() + fs, ok = m.m[typ] + if !ok { + fs = getFields(typ) + m.m[typ] = fs + } + m.l.Unlock() + } + + return fs +} + +//------------------------------------------------------------------------------ + +type field struct { + name string + index []int + omitEmpty bool + + encoder encoderFunc + decoder decoderFunc +} + +func (f *field) value(strct reflect.Value) reflect.Value { + return strct.FieldByIndex(f.index) +} + +func (f *field) Omit(strct reflect.Value) bool { + return f.omitEmpty && isEmptyValue(f.value(strct)) +} + +func (f *field) EncodeValue(e *Encoder, strct reflect.Value) error { + return f.encoder(e, f.value(strct)) +} + +func (f *field) DecodeValue(d *Decoder, strct reflect.Value) error { + return f.decoder(d, f.value(strct)) +} + +//------------------------------------------------------------------------------ + +type fields struct { + List []*field + Table map[string]*field + + asArray bool + omitEmpty bool +} + +func newFields(numField int) *fields { + return &fields{ + List: make([]*field, 0, numField), + Table: make(map[string]*field, numField), + } +} + +func (fs *fields) Len() int { + return len(fs.List) +} + +func (fs *fields) Add(field *field) { + fs.List = append(fs.List, field) + fs.Table[field.name] = field + if field.omitEmpty { + fs.omitEmpty = field.omitEmpty + } +} + +func (fs *fields) OmitEmpty(strct reflect.Value) []*field { + if !fs.omitEmpty { + return fs.List + } + + fields := make([]*field, 0, fs.Len()) + for _, f := range fs.List { + if !f.Omit(strct) { + fields = append(fields, f) + } + } + return fields +} + +func getFields(typ reflect.Type) *fields { + numField := typ.NumField() + fs := newFields(numField) + + var omitEmpty bool + for i := 0; i < numField; i++ { + f := typ.Field(i) + + name, opt := parseTag(f.Tag.Get("msgpack")) + if name == "-" { + continue + } + + if f.Name == "_msgpack" { + if opt.Contains("asArray") { + fs.asArray = true + } + if opt.Contains("omitempty") { + omitEmpty = true + } + } + + if f.PkgPath != "" && !f.Anonymous { + continue + } + + if opt.Contains("inline") { + inlineFields(fs, f) + continue + } + + if name == "" { + name = f.Name + } + field := field{ + name: name, + index: f.Index, + omitEmpty: omitEmpty || opt.Contains("omitempty"), + encoder: getEncoder(f.Type), + decoder: getDecoder(f.Type), + } + fs.Add(&field) + } + return fs +} + +func inlineFields(fs *fields, f reflect.StructField) { + typ := f.Type + if typ.Kind() == reflect.Ptr { + typ = typ.Elem() + } + inlinedFields := getFields(typ).List + for _, field := range inlinedFields { + if _, ok := fs.Table[field.name]; ok { + // Don't overwrite shadowed fields. + continue + } + field.index = append(f.Index, field.index...) 
+ fs.Add(field) + } +} + +func isEmptyValue(v reflect.Value) bool { + switch v.Kind() { + case reflect.Array, reflect.Map, reflect.Slice, reflect.String: + return v.Len() == 0 + case reflect.Bool: + return !v.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Interface, reflect.Ptr: + return v.IsNil() + } + return false +} diff --git a/vendor/vendor.json b/vendor/vendor.json index e37769ac6255..395b96aa5bce 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -2864,6 +2864,18 @@ "revision": "44cc805cf13205b55f69e14bcb69867d1ae92f98", "revisionTime": "2016-08-05T00:47:13Z" }, + { + "checksumSHA1": "OSx+nvOfDkLfsBTZmsDUD2xw1gw=", + "path": "github.com/eclipse/paho.mqtt.golang", + "revision": "0d940dd29fd24f905cd16b28b1209b4977b97e1a", + "revisionTime": "2020-01-21T10:57:43Z" + }, + { + "checksumSHA1": "59nucqU3g1fgFTJiaIoxtKHYGmE=", + "path": "github.com/eclipse/paho.mqtt.golang/packets", + "revision": "0d940dd29fd24f905cd16b28b1209b4977b97e1a", + "revisionTime": "2020-01-21T10:57:43Z" + }, { "checksumSHA1": "Tt1KFInyaxjtvhGI75+crc75n/Q=", "path": "github.com/elastic/ecs/code/go/ecs", @@ -3448,6 +3460,15 @@ "revision": "37bf87eef99d69c4f1d3528bd66e3a87dc201472", "revisionTime": "2019-09-30T11:59:46Z" }, + { + "checksumSHA1": "xJkfP+WyfKJSBcEa+8T15QjNIr4=", + "path": "github.com/godror/godror", + "revision": "0123d49bd73e1bed106ac8b6af67f943fbbf06e2", + "revisionTime": "2020-01-12T11:05:39Z", + "tree": true, + "version": "v0.10.4", + "versionExact": "v0.10.4" + }, { "checksumSHA1": "MlaWEe1K+Kpb9wDF88qPoqO1uro=", "path": "github.com/gofrs/flock", @@ -3746,6 +3767,12 @@ "revision": "d520615e531a6bf3fb69406b9eba718261285ec8", "revisionTime": "2016-12-05T14:13:22Z" }, + { + "checksumSHA1": "ajAqUByI39Sfm99F/ZNOguPP3Mk=", + "path": "github.com/gorilla/websocket", + "revision": "c3e18be99d19e6b3e8f1559eea2c161a665c4b6b", + "revisionTime": "2019-08-25T01:20:11Z" + }, { "checksumSHA1": "dF75743hHL364Dx3HKdZbBBFrpE=", "path": "github.com/grpc-ecosystem/go-grpc-prometheus", @@ -5666,12 +5693,16 @@ "versionExact": "release-branch.go1.13" }, { - "checksumSHA1": "WvhCpgIKNQ1psrswDf1GC5hFKWM=", - "path": "google.golang.org/api/compute/v1", - "revision": "8a410c21381766a810817fd6200fce8838ecb277", - "revisionTime": "2019-11-15T18:09:15Z", - "version": "v0.14.0", - "versionExact": "v0.14.0" + "checksumSHA1": "uIgpefsunMZTr8uZTJKcevvU/yg=", + "path": "golang.org/x/xerrors", + "revision": "9bdfabe68543c54f90421aeb9a60ef8061b5b544", + "revisionTime": "2019-07-19T19:12:34Z" + }, + { + "checksumSHA1": "LnzK4nslUNXBIfAt9PbXCJCvMdA=", + "path": "golang.org/x/xerrors/internal", + "revision": "9bdfabe68543c54f90421aeb9a60ef8061b5b544", + "revisionTime": "2019-07-19T19:12:34Z" }, { "checksumSHA1": "xzYkHGnGgOHW4QNWLR4jbx+81P0=", @@ -5681,6 +5712,14 @@ "version": "v0.7.0", "versionExact": "v0.7.0" }, + { + "checksumSHA1": "WvhCpgIKNQ1psrswDf1GC5hFKWM=", + "path": "google.golang.org/api/compute/v1", + "revision": "8a410c21381766a810817fd6200fce8838ecb277", + "revisionTime": "2019-11-15T18:09:15Z", + "version": "v0.14.0", + "versionExact": "v0.14.0" + }, { "checksumSHA1": "FhzGDPlkW5SaQGtSgKnjQAiYVk0=", "path": "google.golang.org/api/gensupport", @@ -5729,6 +5768,12 @@ "version": "v0.14.0", "versionExact": "v0.14.0" }, + { + "path": 
"google.golang.org/api/internal/gensupport", + "revision": "02490b97dff7cfde1995bd77de808fd27053bc87", + "version": "v0.7.0", + "versionExact": "v0.7.0" + }, { "checksumSHA1": "nN+zggDyWr8HPYzwltMkzJJr1Jc=", "path": "google.golang.org/api/internal/third_party/uritemplates", @@ -5736,12 +5781,6 @@ "revisionTime": "2019-11-15T18:09:15Z", "version": "v0.14.0", "versionExact": "v0.14.0" - }, - { - "path": "google.golang.org/api/internal/gensupport", - "revision": "02490b97dff7cfde1995bd77de808fd27053bc87", - "version": "v0.7.0", - "versionExact": "v0.7.0" }, { "checksumSHA1": "zh9AcT6oNvhnOqb7w7njY48TkvI=", @@ -5823,6 +5862,30 @@ "version": "v1.6.5", "versionExact": "v1.6.5" }, + { + "checksumSHA1": "5XkGWfndn+Th+AOFw9u7xLO8Qp8=", + "path": "google.golang.org/appengine/datastore", + "revision": "b2f4a3cf3c67576a2ee09e1fe62656a5086ce880", + "revisionTime": "2019-06-06T17:30:15Z", + "version": "v1.6.1", + "versionExact": "v1.6.1" + }, + { + "checksumSHA1": "BmZtAIDebUf0ivXIeQ33/3H4094=", + "path": "google.golang.org/appengine/datastore/internal/cloudkey", + "revision": "b2f4a3cf3c67576a2ee09e1fe62656a5086ce880", + "revisionTime": "2019-06-06T17:30:15Z", + "version": "v1.6.1", + "versionExact": "v1.6.1" + }, + { + "checksumSHA1": "mqxlap2M+qVsEvWvXZ4hOfwLVvw=", + "path": "google.golang.org/appengine/datastore/internal/cloudpb", + "revision": "b2f4a3cf3c67576a2ee09e1fe62656a5086ce880", + "revisionTime": "2019-06-06T17:30:15Z", + "version": "v1.6.1", + "versionExact": "v1.6.1" + }, { "checksumSHA1": "/R9+Y0jX9ijye8Ea6oh/RBwOOg4=", "path": "google.golang.org/appengine/internal", @@ -6394,12 +6457,6 @@ "version": "v1.25.1", "versionExact": "v1.25.1" }, - { - "checksumSHA1": "+/UD9mGRnKxOhZW3+B+VJdIIPn8=", - "path": "gopkg.in/goracle.v2", - "revision": "3222d7159b45fce95150f06a57e1bcc2868108d3", - "revisionTime": "2019-05-30T18:40:54Z" - }, { "checksumSHA1": "6f8MEU31llHM1sLM/GGH4/Qxu0A=", "path": "gopkg.in/inf.v0", @@ -6640,6 +6697,18 @@ "revision": "3f83fa5005286a7fe593b055f0d7771a7dce4655", "revisionTime": "2016-08-18T02:01:20Z" }, + { + "checksumSHA1": "9zWLKf1I9P/EOJFQwdlf0oWBWdU=", + "path": "gopkg.in/vmihailenco/msgpack.v2", + "revision": "f4f8982de4ef0de18be76456617cc3f5d8d8141e", + "revisionTime": "2017-05-02T10:41:14Z" + }, + { + "checksumSHA1": "KqVHe5SB85yaDcseNq2CogozjRk=", + "path": "gopkg.in/vmihailenco/msgpack.v2/codes", + "revision": "f4f8982de4ef0de18be76456617cc3f5d8d8141e", + "revisionTime": "2017-05-02T10:41:14Z" + }, { "checksumSHA1": "ZSWoOPUNRr5+3dhkLK3C4cZAQPk=", "path": "gopkg.in/yaml.v2", diff --git a/winlogbeat/winlogbeat.reference.yml b/winlogbeat/winlogbeat.reference.yml index 81eab60bc5f2..fb7948a65791 100644 --- a/winlogbeat/winlogbeat.reference.yml +++ b/winlogbeat/winlogbeat.reference.yml @@ -1260,6 +1260,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. 
diff --git a/x-pack/auditbeat/auditbeat.reference.yml b/x-pack/auditbeat/auditbeat.reference.yml index 55b57e87ec1f..ae6b7f1b1a3f 100644 --- a/x-pack/auditbeat/auditbeat.reference.yml +++ b/x-pack/auditbeat/auditbeat.reference.yml @@ -1388,6 +1388,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/x-pack/auditbeat/docs/modules/system.asciidoc b/x-pack/auditbeat/docs/modules/system.asciidoc index dc80845aa249..361af353c439 100644 --- a/x-pack/auditbeat/docs/modules/system.asciidoc +++ b/x-pack/auditbeat/docs/modules/system.asciidoc @@ -2,9 +2,10 @@ This file is generated! See scripts/docs_collector.py //// +:modulename: system + [id="{beatname_lc}-module-system"] [role="xpack"] - == System Module beta[] @@ -72,8 +73,9 @@ sample suggested configuration. user.detect_password_changes: true ---- -*`period`*:: The frequency at which the datasets check for changes. For most -datasets - esp. `process` and `socket` - a shorter period is recommended. +This module also supports the +<> +described later. *`state.period`*:: The frequency at which the datasets send full state information. This option can be overridden per dataset using `{dataset}.state.period`. @@ -85,8 +87,7 @@ the `beat.db` file to detect changes between Auditbeat restarts. The `beat.db` f should be readable only by the root user and be treated similar to the shadow file itself. -*`keep_null`*:: If this option is set to true, fields with `null` values will be -published in the output document. By default, `keep_null` is set to `false`. +include::{docdir}/auditbeat-options.asciidoc[] [float] === Suggested configuration @@ -151,6 +152,9 @@ auditbeat.modules: login.btmp_file_pattern: /var/log/btmp* ---- + +:modulename!: + [float] === Datasets diff --git a/x-pack/auditbeat/module/system/_meta/docs.asciidoc b/x-pack/auditbeat/module/system/_meta/docs.asciidoc index 30d97edb4781..2e91d2db1164 100644 --- a/x-pack/auditbeat/module/system/_meta/docs.asciidoc +++ b/x-pack/auditbeat/module/system/_meta/docs.asciidoc @@ -1,5 +1,4 @@ [role="xpack"] - == System Module beta[] @@ -67,8 +66,9 @@ sample suggested configuration. user.detect_password_changes: true ---- -*`period`*:: The frequency at which the datasets check for changes. For most -datasets - esp. `process` and `socket` - a shorter period is recommended. +This module also supports the +<> +described later. *`state.period`*:: The frequency at which the datasets send full state information. This option can be overridden per dataset using `{dataset}.state.period`. @@ -80,8 +80,7 @@ the `beat.db` file to detect changes between Auditbeat restarts. The `beat.db` f should be readable only by the root user and be treated similar to the shadow file itself. -*`keep_null`*:: If this option is set to true, fields with `null` values will be -published in the output document. By default, `keep_null` is set to `false`. 
+include::{docdir}/auditbeat-options.asciidoc[] [float] === Suggested configuration diff --git a/x-pack/filebeat/filebeat.reference.yml b/x-pack/filebeat/filebeat.reference.yml index 28f911bfa36f..b448afc21955 100644 --- a/x-pack/filebeat/filebeat.reference.yml +++ b/x-pack/filebeat/filebeat.reference.yml @@ -2506,6 +2506,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/x-pack/filebeat/input/netflow/_meta/fields.yml b/x-pack/filebeat/input/netflow/_meta/fields.yml index d88ab6d5ab9b..f5a4c0823d58 100644 --- a/x-pack/filebeat/input/netflow/_meta/fields.yml +++ b/x-pack/filebeat/input/netflow/_meta/fields.yml @@ -194,7 +194,7 @@ type: long - name: class_id - type: short + type: long - name: minimum_ttl type: short diff --git a/x-pack/filebeat/input/netflow/convert.go b/x-pack/filebeat/input/netflow/convert.go index dbb02c6803a2..64d264f57ea2 100644 --- a/x-pack/filebeat/input/netflow/convert.go +++ b/x-pack/filebeat/input/netflow/convert.go @@ -245,22 +245,10 @@ func flowToBeatEvent(flow record.Record) (event beat.Event) { ecsNetwork["transport"] = IPProtocol(proto).String() ecsNetwork["iana_number"] = proto } - countBytes, hasBytes := getKeyUint64(flow.Fields, "octetDeltaCount") - if !hasBytes { - countBytes, hasBytes = getKeyUint64(flow.Fields, "octetTotalCount") - } - countPkts, hasPkts := getKeyUint64(flow.Fields, "packetDeltaCount") - if !hasPkts { - countPkts, hasPkts = getKeyUint64(flow.Fields, "packetTotalCount") - } - revBytes, hasRevBytes := getKeyUint64(flow.Fields, "reverseOctetDeltaCount") - if !hasRevBytes { - revBytes, hasRevBytes = getKeyUint64(flow.Fields, "reverseOctetTotalCount") - } - revPkts, hasRevPkts := getKeyUint64(flow.Fields, "reversePacketDeltaCount") - if !hasRevPkts { - revPkts, hasRevPkts = getKeyUint64(flow.Fields, "reversePacketTotalCount") - } + countBytes, hasBytes := getKeyUint64Alternatives(flow.Fields, "octetDeltaCount", "octetTotalCount", "initiatorOctets") + countPkts, hasPkts := getKeyUint64Alternatives(flow.Fields, "packetDeltaCount", "packetTotalCount", "initiatorPackets") + revBytes, hasRevBytes := getKeyUint64Alternatives(flow.Fields, "reverseOctetDeltaCount", "reverseOctetTotalCount", "responderOctets") + revPkts, hasRevPkts := getKeyUint64Alternatives(flow.Fields, "reversePacketDeltaCount", "reversePacketTotalCount", "responderPackets") if hasRevBytes { ecsDest["bytes"] = revBytes @@ -337,6 +325,18 @@ func getKeyUint64(dict record.Map, key string) (value uint64, found bool) { return } +func getKeyUint64Alternatives(dict record.Map, keys ...string) (value uint64, found bool) { + var iface interface{} + for _, key := range keys { + if iface, found = dict[key]; found { + if value, found = iface.(uint64); found { + return + } + } + } + return +} + func getKeyString(dict record.Map, key string) (value string, found bool) { iface, found := dict[key] if !found { diff --git 
a/x-pack/filebeat/input/netflow/decoder/fields/cisco.csv b/x-pack/filebeat/input/netflow/decoder/fields/cisco.csv index 653a275d06f8..65e5f9b5b75c 100644 --- a/x-pack/filebeat/input/netflow/decoder/fields/cisco.csv +++ b/x-pack/filebeat/input/netflow/decoder/fields/cisco.csv @@ -270,6 +270,20 @@ netscalerUnknown465,5951,465,unsigned32 ingressAclID,0,33000,aclid egressAclID,0,33001,aclid fwExtEvent,0,33002,unsigned16 +fwEventLevel,0,33003,unsigned32 +fwEventLevelID,0,33004,unsigned32 +fwConfiguredValue,0,33005,unsigned32 +fwCtsSrcSGT,0,34000,unsigned32 +fwExtEventAlt,0,35001,unsigned32 +fwBlackoutSecs,0,35004,unsigned32 +fwHalfOpenHigh,0,35005,unsigned32 +fwHalfOpenRate,0,35006,unsigned32 +fwZonePairID,0,35007,unsigned32 +fwMaxSessions,0,35008,unsigned32 +fwZonePairName,0,35009,unsigned32 +fwExtEventDesc,0,35010,string +fwSummaryPktCount,0,35011,unsigned32 +fwHalfOpenCount,0,35012,unsigned32 username,0,40000,string XlateSourceAddressIPV4,0,40001,ipv4Address XlateDestinationAddressIPV4,0,40002,ipv4Address diff --git a/x-pack/filebeat/input/netflow/decoder/fields/doc.go b/x-pack/filebeat/input/netflow/decoder/fields/doc.go index cf6ddacf37ff..d3ae43e45212 100644 --- a/x-pack/filebeat/input/netflow/decoder/fields/doc.go +++ b/x-pack/filebeat/input/netflow/decoder/fields/doc.go @@ -8,3 +8,4 @@ package fields //go:generate go run gen.go -output zfields_cert.go -export CertFields --column-pen=1 --column-id=2 --column-name=3 --column-type=4 cert_pen6871.csv //go:generate go run gen.go -output zfields_cisco.go -export CiscoFields --column-pen=2 --column-id=3 --column-name=1 --column-type=4 cisco.csv //go:generate go run gen.go -output zfields_assorted.go -export AssortedFields --column-pen=1 --column-id=2 --column-name=3 --column-type=4 assorted.csv +//go:generate go fmt diff --git a/x-pack/filebeat/input/netflow/decoder/fields/ipfix-information-elements.csv b/x-pack/filebeat/input/netflow/decoder/fields/ipfix-information-elements.csv index 974d95dbce33..c2c5143f96b4 100644 --- a/x-pack/filebeat/input/netflow/decoder/fields/ipfix-information-elements.csv +++ b/x-pack/filebeat/input/netflow/decoder/fields/ipfix-information-elements.csv @@ -1,3 +1,9 @@ +; WARNING: This is an edited version of the original IANA document! +; +; Changes +; ======= +; 2020-01-14 - @adriansr: Change field 51 (classId) from unsigned8 to unsigned32 +; ;ElementID,Name,Abstract Data Type,Data Type Semantics,Status,Description,Units,Range,References,Requester,Revision,Date 0,Reserved,,,,,,,,[RFC5102],,2013-02-18 1,octetDeltaCount,unsigned64,deltaCounter,current,"The number of octets since the previous report (if any) @@ -335,7 +341,7 @@ Sampling. Use with samplerRandomInterval.",,,,[RFC7270],0,2014-04-04 50,samplerRandomInterval,unsigned32,quantity,deprecated,"Deprecated in favor of 305 samplingPacketInterval. Packet interval at which to sample -- in case of random sampling. Used in connection with the samplerMode 0x02 (random sampling) value.",,,,[RFC7270],0,2014-04-04 -51,classId,unsigned8,identifier,deprecated,"Deprecated in favor of 302 selectorId. Characterizes the traffic +51,classId,unsigned32,identifier,deprecated,"Deprecated in favor of 302 selectorId. Characterizes the traffic class, i.e., QoS treatment.",,,,[RFC7270],0,2014-04-04 52,minimumTTL,unsigned8,,current,Minimum TTL value observed for any packet in this Flow.,hops,,"See [RFC791] for the definition of the IPv4 Time to Live field. 
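The convert.go hunk above collapses four duplicated byte/packet counter lookups into `getKeyUint64Alternatives`, which walks a list of candidate field names and returns the first one present as a `uint64`. Below is a self-contained sketch of that fallback logic; it uses a plain `map[string]interface{}` in place of the decoder's `record.Map`, and the helper name `firstUint64` and the sample values are illustrative only.

```go
package main

import "fmt"

// firstUint64 mirrors getKeyUint64Alternatives from convert.go: it tries each
// candidate key in order and returns the first value that exists and is a uint64.
func firstUint64(fields map[string]interface{}, keys ...string) (uint64, bool) {
	for _, key := range keys {
		if raw, ok := fields[key]; ok {
			if v, ok := raw.(uint64); ok {
				return v, true
			}
		}
	}
	return 0, false
}

func main() {
	// A flow record that only carries the "initiatorOctets" variant of the byte counter.
	flow := map[string]interface{}{
		"initiatorOctets":  uint64(4096),
		"packetDeltaCount": uint64(12),
	}

	// Same lookup order as the counters in flowToBeatEvent.
	if n, ok := firstUint64(flow, "octetDeltaCount", "octetTotalCount", "initiatorOctets"); ok {
		fmt.Println("bytes:", n) // bytes: 4096
	}
	if n, ok := firstUint64(flow, "packetDeltaCount", "packetTotalCount", "initiatorPackets"); ok {
		fmt.Println("packets:", n) // packets: 12
	}
}
```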
diff --git a/x-pack/filebeat/input/netflow/decoder/fields/zfields_cisco.go b/x-pack/filebeat/input/netflow/decoder/fields/zfields_cisco.go index ae37275d5281..7d1abc1b62c5 100644 --- a/x-pack/filebeat/input/netflow/decoder/fields/zfields_cisco.go +++ b/x-pack/filebeat/input/netflow/decoder/fields/zfields_cisco.go @@ -280,6 +280,20 @@ var CiscoFields = FieldDict{ Key{EnterpriseID: 0, FieldID: 33000}: {Name: "ingressAclID", Decoder: ACLID}, Key{EnterpriseID: 0, FieldID: 33001}: {Name: "egressAclID", Decoder: ACLID}, Key{EnterpriseID: 0, FieldID: 33002}: {Name: "fwExtEvent", Decoder: Unsigned16}, + Key{EnterpriseID: 0, FieldID: 33003}: {Name: "fwEventLevel", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 33004}: {Name: "fwEventLevelID", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 33005}: {Name: "fwConfiguredValue", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 34000}: {Name: "fwCtsSrcSGT", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35001}: {Name: "fwExtEventAlt", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35004}: {Name: "fwBlackoutSecs", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35005}: {Name: "fwHalfOpenHigh", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35006}: {Name: "fwHalfOpenRate", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35007}: {Name: "fwZonePairID", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35008}: {Name: "fwMaxSessions", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35009}: {Name: "fwZonePairName", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35010}: {Name: "fwExtEventDesc", Decoder: String}, + Key{EnterpriseID: 0, FieldID: 35011}: {Name: "fwSummaryPktCount", Decoder: Unsigned32}, + Key{EnterpriseID: 0, FieldID: 35012}: {Name: "fwHalfOpenCount", Decoder: Unsigned32}, Key{EnterpriseID: 0, FieldID: 40000}: {Name: "username", Decoder: String}, Key{EnterpriseID: 0, FieldID: 40001}: {Name: "XlateSourceAddressIPV4", Decoder: Ipv4Address}, Key{EnterpriseID: 0, FieldID: 40002}: {Name: "XlateDestinationAddressIPV4", Decoder: Ipv4Address}, diff --git a/x-pack/filebeat/input/netflow/decoder/fields/zfields_ipfix.go b/x-pack/filebeat/input/netflow/decoder/fields/zfields_ipfix.go index 1a47ad39c269..045ec5c3499b 100644 --- a/x-pack/filebeat/input/netflow/decoder/fields/zfields_ipfix.go +++ b/x-pack/filebeat/input/netflow/decoder/fields/zfields_ipfix.go @@ -59,7 +59,7 @@ var IpfixFields = FieldDict{ Key{EnterpriseID: 0, FieldID: 48}: {Name: "samplerId", Decoder: Unsigned8}, Key{EnterpriseID: 0, FieldID: 49}: {Name: "samplerMode", Decoder: Unsigned8}, Key{EnterpriseID: 0, FieldID: 50}: {Name: "samplerRandomInterval", Decoder: Unsigned32}, - Key{EnterpriseID: 0, FieldID: 51}: {Name: "classId", Decoder: Unsigned8}, + Key{EnterpriseID: 0, FieldID: 51}: {Name: "classId", Decoder: Unsigned32}, Key{EnterpriseID: 0, FieldID: 52}: {Name: "minimumTTL", Decoder: Unsigned8}, Key{EnterpriseID: 0, FieldID: 53}: {Name: "maximumTTL", Decoder: Unsigned8}, Key{EnterpriseID: 0, FieldID: 54}: {Name: "fragmentIdentification", Decoder: Unsigned32}, diff --git a/x-pack/filebeat/input/netflow/decoder/ipfix/decoder.go b/x-pack/filebeat/input/netflow/decoder/ipfix/decoder.go index 664837755211..9c0252cb9c4e 100644 --- a/x-pack/filebeat/input/netflow/decoder/ipfix/decoder.go +++ b/x-pack/filebeat/input/netflow/decoder/ipfix/decoder.go @@ -108,6 +108,7 @@ func (d DecoderIPFIX) ReadOptionsTemplateFlowSet(buf *bytes.Buffer) (templates [ } template.ID = tID template.ScopeFields = scopeCount + template.IsOptions = true 
templates = append(templates, &template) } return templates, nil diff --git a/x-pack/filebeat/input/netflow/decoder/template/template.go b/x-pack/filebeat/input/netflow/decoder/template/template.go index dc42bcc26f07..e04dc93cfe36 100644 --- a/x-pack/filebeat/input/netflow/decoder/template/template.go +++ b/x-pack/filebeat/input/netflow/decoder/template/template.go @@ -29,6 +29,9 @@ type Template struct { Length int VariableLength bool ScopeFields int + // IsOptions signals that this is an options template. Previously + // ScopeFields>0 was used for this, but that's unreliable under v9. + IsOptions bool } type FieldTemplate struct { @@ -84,7 +87,7 @@ func (t *Template) Apply(data *bytes.Buffer, n int) ([]record.Record, error) { } } makeFn := t.makeFlow - if t.ScopeFields > 0 { + if t.IsOptions { makeFn = t.makeOptions } events := make([]record.Record, 0, alloc) diff --git a/x-pack/filebeat/input/netflow/decoder/template/template_test.go b/x-pack/filebeat/input/netflow/decoder/template/template_test.go index 6ab22766170e..882756dc9a2a 100644 --- a/x-pack/filebeat/input/netflow/decoder/template/template_test.go +++ b/x-pack/filebeat/input/netflow/decoder/template/template_test.go @@ -313,6 +313,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 7, ScopeFields: 1, + IsOptions: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, {Length: 2, Info: &fields.Field{Name: "destinationTransportPort", Decoder: fields.Unsigned16}}, @@ -343,6 +344,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 7, ScopeFields: 2, + IsOptions: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, {Length: 2, Info: &fields.Field{Name: "destinationTransportPort", Decoder: fields.Unsigned16}}, @@ -386,6 +388,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 7, ScopeFields: 3, + IsOptions: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, {Length: 2, Info: &fields.Field{Name: "destinationTransportPort", Decoder: fields.Unsigned16}}, @@ -415,6 +418,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 7, ScopeFields: 1, + IsOptions: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, {Length: 2, Info: &fields.Field{Name: "destinationTransportPort", Decoder: fields.Unsigned16}}, @@ -446,6 +450,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 7, ScopeFields: 2, + IsOptions: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, {Length: 2, Info: &fields.Field{Name: "destinationTransportPort", Decoder: fields.Unsigned16}}, @@ -489,6 +494,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 6, ScopeFields: 1, + IsOptions: true, VariableLength: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, @@ -522,6 +528,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { record: Template{ Length: 6, ScopeFields: 1, + IsOptions: true, VariableLength: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, @@ -571,6 +578,7 @@ func TestOptionsTemplate_Apply(t *testing.T) { Length: 6, VariableLength: true, 
ScopeFields: 2, + IsOptions: true, Fields: []FieldTemplate{ {Length: 4, Info: &fields.Field{Name: "sourceIPv4Address", Decoder: fields.Ipv4Address}}, {Length: VariableLength, Info: &fields.Field{Name: "vpnIdentifier", Decoder: fields.OctetArray}}, diff --git a/x-pack/filebeat/input/netflow/decoder/v9/decoder.go b/x-pack/filebeat/input/netflow/decoder/v9/decoder.go index f59726846525..6b4fc9be8be7 100644 --- a/x-pack/filebeat/input/netflow/decoder/v9/decoder.go +++ b/x-pack/filebeat/input/netflow/decoder/v9/decoder.go @@ -184,7 +184,7 @@ func (d DecoderV9) ReadOptionsTemplateFlowSet(buf *bytes.Buffer) (templates []*t if buf.Len() < int(length) { return nil, io.EOF } - if scopeLen == 0 || scopeLen&3 != 0 || optsLen&3 != 0 { + if (scopeLen+optsLen) == 0 || scopeLen&3 != 0 || optsLen&3 != 0 { return nil, fmt.Errorf("bad length for options template. scope=%d options=%d", scopeLen, optsLen) } template, err := ReadFields(d, buf, (scopeLen+optsLen)/4) @@ -193,6 +193,7 @@ func (d DecoderV9) ReadOptionsTemplateFlowSet(buf *bytes.Buffer) (templates []*t } template.ID = tID template.ScopeFields = scopeLen / 4 + template.IsOptions = true templates = append(templates, &template) } return templates, nil diff --git a/x-pack/filebeat/input/netflow/fields.go b/x-pack/filebeat/input/netflow/fields.go index e55936179242..92226a14406d 100644 --- a/x-pack/filebeat/input/netflow/fields.go +++ b/x-pack/filebeat/input/netflow/fields.go @@ -19,5 +19,5 @@ func init() { // AssetNetflow returns asset data. // This is the base64 encoded gzipped contents of input/netflow. func AssetNetflow() string { - return "eJysXUuT5Cbyv8+nUPjyv/w94dfsYw572nWsD7vrgw97IxDKknBLQAOq6vKn3wAkFVWCahL1HBzhnv79Mnkl+ULzbfMC16+NAHsa5eVT01huR/jafPNvsD+P8vLNp6bpwDDNleVSfG3+9qlpmuZnDmNnmpOWU7P8ZkNF1/zy68+//LdxVObzp6Y5+V/76iHfNoJOEItyf+xVwdem13JWy08S0t6V+Hn5tVheLNNJ2X64Cn2B60XqLvp5RrT789sAHtbI0yZeA5O6W1AtdE17bezATQNnEPbzp50a8KaktqB3qsTjf0eRf4GlHbW00TBSC11jZWMH2LibDs6cQWMHapseBOjwW06voPDniO9xwmJtaddpMObu7/Jz947a7s8/FhX/z7hNcJH6ZZXRcNH88utX99fNSeqJxrMX62TkrBkQ/ig5aDVK0eNU+k9rQJ+p++umkxN1evzdTell4GyIZ61pwdGbjGKWT2AsnVRSsY5awCn2G5/A728HdZsurG9G+qycfDLxceTpBcNPzT/lxaPud5fSkrkFG6hpWgDR6FkILvr/d0sY5AOTosvN0xm04VIkdeTCQn93OgrUXA/jQtzMBrrE0ZPMgiUdjJYSJmdhd2fQz9AOpyh7qQIGhLN0OHlaWsnkSHgHwvITT1gLM0ht91CuCBupMUSeiNvVnO1tXgZqmSJMCqvlSFpuzQ63Ls0OuRxIq6kwbosQ9x80nKvzT2Rvbxaweo5TGk78jYwgejsUT5bonTDiNNMnmpiq3LIay4U3GNWDjjnQI9+Bq4YPdaPnigh4s2SQCq952yuyrBs1RMxTm9jbabkOGg+8Bl+vuJLGkolRY0m1OYg4ak2RtyUgOmKuhsyKOJOPgRpLta0Ae9Wr7ac8MmsTF3yaJ8IVsdLSMbfLM2j6dgB9MzF/OnJAceBYaNXB3kmvYvH7ZaQtjJ6k1DqwSRH3C4TJLhjncqPI+wVcqqOhkxq56IMRO9OxdF1XHB17qbkdJtSsUGb5Gfz5kTPC6Hsw78YKKIieC0BNzgK585OfA4KD1y0HPZwXzFHdCCYwhvZwhMLPVXC7K2i8fddytqCJYZXuxJG7uAg7qdEQK9VyzDCL+wDFO0/uBIBG7I4VMckOdz5BE01FJyfsKQ2ebLmG601h7V5CDrJcDwjISdN+AmE375z5xcfchQe89GWXTpRll3uNy9Oy4/1axXIeqUgtStZ0ebFoFFdkHye+b5o7roEl1yMfMcVeIe6qfvQpcWgPgDcLwg2TDEA70HtoxrG5NwDGUvZCDG7sniOB/+EowY9HCfZOA5Lgy1GCvcODJPjzUYK/HCX461GC77+rcTnrbdMR47aFscT9fw0uSjKVw9cbDie01sFagig88DTS3hDqAsf8xZ+BrvedPJ0MYLxdqS9Ud87NNpbaeb+Yz7bjWYngwJGOu73Vz9wM5YmwB/NYFQEZzYjV9HTijHDRwd6py+SHjK3CUaXGxZuo24wxQbnbFKNw23h1Zjp+Co5MCPqU5ImNmVuoebTc50Q03PQ4UWZlaWYn+FI41T1mc94IOlYa6RX0D8t5XMwl+oQkSQz/Yz8OHEVHLS0dyObC0O53ytw5r0msKQ3neoa1soAPY2IkMkWjpVJbrFuR21/w9cWBOwXwJv1BgZo7YcklaqCm3GlkcpqkIEpLBdpySEZoaYnyVmkLRqIcus8r7R207CHZB8u4vTI628Co7sr19SWB4t+ewIJ2F+VSWitHhgNQBbUwqZHapNXLTuXFxXFsoEK4iSw2lx5mTAKQtdBLsqxia4UiLsLDiTLjoXC5A/ribf4EVcCCvLhaihVai10lMy2rJddhg2RBRa3gSmiQG0z0U93fMZTVDOZqLEyEC25JVKXHj6Sb9RJkPSN4MoyIAD2KcMbW3GxFUrYXUh+5sVaC2itTSHfCha0dwIavHsHGUH3rp2s7RdfY
vrBUBItqfxURa1T7q3RRXuDqwijnqheHBPtybaXqcbm2YsNuXkuyEJZLSTpUvnz2DLXKKs5e3ckqRs3dVsbHtT04YLyFcWjLKsU64EGx8DqDYIALbBySshchLyN0Pfg0CprgwkXnLjFUXOiAs+69sZI+x4XDhmQ0tqGmEhdX5pElY0Wvo6RdBEbEBVxhCj8h0R/GhzksPvq417PUfBxLrXAflzPo3MYtBW3pPp8yLEXFUbSZJ1/Yep2phmLfIrKwVQSPtVREDfAx+41rMHlEd6Bw4JvS8KYwa1uzpZwBXqvzGWD+uBiyZevK9Tz/RPhQvBj+96XPe5auvDNXOISinU9L+01XvMeZHEdgVlakqe6guNxDCPDRXXoLbGslRdZSF3jU3bgQ4SertkNyS+3VEvhr+yoq3DaHPPFapDY1rqJDKjNUIp3hqUPOuq/qr6lpp/MetaBboa+uC9MRHGqfXVjUpkftDrsRfURLsBuX1Lz3PKJfh0Q00LG4P82R+KcnxQZXcMtdZJWzhuml1GCUFM4VQsFOXMOFjiNWx9CqfNan8sQoHnLW5FRRAKu/xGPvntER4d63vK7bBewAWoCt89M39DuuR96mrwRPjUe+aV7a71/xrUQBpjSXmttr6WADis3Gygl0rdQNjxU/gdWSwJmlhGY35A1lEc2EysDcSXLhOlkMyBjzCJQUljebEXJ9cfLwkuzp3bNYAzVcDWd0xPtGB/Ghj65uL0bY+q11O0a4eV/K9figcPHm/O1Wmb5ePErDrCLGaqATasgTfSMrBUquA1bWadY4aeq+EDYAezFz8T28Yg2TiKZeLioHyQWprmJJRV9nQAZDBoxvWsSNL16LiiLSAx69B+/w+FrSwyzXDOCRAT+Eewb8IG6BGQO9tC+XPylYY7IKbEctXRr6nU87ctryMXUhtlKOQMXzGm9oH8BccgIu7rYRa48MumEjApt5ulXScEW0iMUHCpTl2rWKYyhcQiEXQ+FYHIGSMtkDkCl9rQici02FFNeJ/7H0hSWTktlb4x5sgQ2Cv86IK5OL8BTd94iNIYGfbunL37o/qCBZjrLf7/bswO3seyxqoCCYvioLXRW67RU505F33F5952bxEePKHQZiFC/cE70G8gJ73fI7CHPib6FtqAHiY1scLtGDgtvrBlYvays3FR+vgJWIxt7k3ka4ZJvE/CO6PHZ9gLdUZ2vf762Nj6rce9+w3pOtFezBVWKTdbz3hinVPGKeGd2QWra5i7aTczsmXAR/U49cvJCTpm6YqMrjrQh435WKMCJreqGOYKf+xzyKQeGPjSA6y0umRL1YQ9b+G7xFiFnCj4tZaGvkOFsgoHWi/SK3h/yXafgZC4tNaDhiuFDmEV7hYSc40H7+jgPvp3e8B2PJQM3gbuOE05JeLw+IzlCmJ70MjLBVHipnq2ZLNBUubOal1iqBpaWPJjx23dN1kh/RONnLQgX1y4OZMNXeQ3E7VKMWeTvaOP9iVsrFbpyMfOJ7XXOHcpSXGhiT4sR9momMcIb9FZsDphwTb9Rxzl+CpOZRTYoIN+0phrDVWuiLt2qeBcQRl8/ARIXlrDj1kyKZBeqLQUrzM3V3ivO+lOYG2Vd15trO1F/QIXzdHvGVt6TnOXCr+8gzz/U64CSHtz/EQJ/z3dOzt+BqX77cwStKx0v6fhb8WH/mSnR7QXaYqtWSdseo4INGBx+m0SQFt1LHn0mJ84/YVHOCbUvHIrl8pjvYMK8QwoDckCnrV4IzFhQytInQYp58eR/zmTRL6542OyC6yeFiVe1D6ofvgqWv3ae7tgqr7ftttvnGYoN+hB2/e2XUQi/19QCFmduPoPGfAcU+Jw9dI6OvDBiiNJhUd0Uma3UPXpJmxfHhA5zJSY1QDg+Pu5kNFs23fa557ezWLWW6T3AfpitpWqpT7QMIS6oBlcphCX0g41/ta97O4bER2EEWe0X6xH788uU78ju3FnTNU6cdA/qt0wPDs/g9M62+xt/BPhWeDQkf2gJQ2FATPJCRjQgOvIq+Z6l+G31PE1qd0CT7/FcwUYfTaIEGmUfbMrLhXWtl2jlMjKKM333K9/25PJL0TjDgVPC5jrCKvixSvitvvZS1OzLqxqzdjdsVV/+dgiQFruK6URx56uxXfoLO3fD4F+Pr+8uolF/zhPEhP590EZ9Uneq+tbHi7j9yiHDfg3VePtdGuDCWOkfV0v0peNpRuGMonv4M/mBj45GPLu1urI/4CtTB9ET0bPMjmA5mO3LaVITxy9cNF5oD38KtZ1i/OXJwYpM09c/BD9Jsb7IP8iQmpeolW0KNIzzByFan+RYbjd/4UZPvodcaD6tcq8/EWyLb34HZUGkh+39p4J0LYMcQVsiFOQlPKNvi+UiC+b7pAzb5mf5S8JOIOPmt2ke8n/zijP0O3tO5L80M7cDeB7WcvRS/Dn1kmIXhvSj23iM8/h9lcGAzt8+Qeam+2wz9/QOHZFTZWcNaR0dWeTyDFBbeLP7raDEYl8qKJroaWFXmi/DmKmyiBvwUOsluHrGlo0m2fATCJ7OP9N8DTYabDjG+wfrGe2pn45+SIzzn1CM5kykGZ1m8fA2vMziHKJ0Peq78Ch5k4vlxGdRS3SeChufgtZU/94Q2ixbUVnj5PkRzcV7V+z94O4J2Gr/O0lICbwyggy7zUO9Jg+agwQxyxCH9RPscPu1TqOcr5E1Nrh38vZ1BjRREDZoazMmlb2R9AgHCal7+nQL6RlreVqAWBFGg/VQhoGZuw79PVv55afpG1k9BOJHCP5h3s2VgasfSlt11F/rG74H3w21/1DD4zzZVE2hLJqqUG8hBVSKmD1NpXdY63fpRttGpPzTOsxIFXs7/AgAA///VU2iI" + return 
"eJysXU2T5CbSvs+vUPjyXl5P+Gv2Yw572nWsD7vrgw97IxDKknBLQAOq6vKv3wAkFVWCahL1HBzhnn4eEkiS/ELzbfMC16+NAHsa5eVT01huR/jafPNvsD+P8vLNp6bpwDDNleVSfG3+9qlpmuZnDmNnmpOWU7P8ZkNF1/zy68+//LdxVObzp6Y5+V/76iHfNoJOEA/l/tirgq9Nr+Wslp8kRnt3xM/Lr8XjxWO6UbYfroO+wPUidRf9PDO0+/PbAB7WyNM2vAYmdbegWuia9trYgZsGziDs5087MeBNSW1B70SJ5/+OIP8CSztqaaNhpBa6xsrGDrBxNx2cOYPGDtQ2PQjQ4becXEHgzxHf44LF0tKu02DM3d/l1+4dsd2ffywi/p9xSnCR+mUdo+Gi+eXXr+6vm5PUE41XL5bJyFkzIPxx5CDVKEWPE+k/rQF9pu6vm05O1Mnxd7ekl4GzIV61pgVHbzKCWT6BsXRSScE6agEn2G98Aq/fDuqULuxvZvRZufHJxMeRpzcMvzT/lBePutcupSVzGzZQ07QAotGzEFz0/++2MIwPTIout05n0IZLkZSRCwv93ekoEHM9jAtxMxvoEkdPMguWdDBaSpichd2dQb9CO5yi7KUKGBDO0uHG09JKJkfCOxCWn3jCWphBaruHckXYSI0h8kScVnO2t3kZqGWKMCmsliNpuTU73Lo1O+RyIK2mwjgVIe4/aDhX55/I3t4sYPUcpzSc+BsZQfR2KF4s0bvBiJNMn2hiqXLbaiwX3mBUTzrmQM98B66aPtTNnisi4M2SQSq85G2vyLJv1BAxT21Ct9PjOmg88Rp8veBKGksmRo0l1eYg4qg1Rd6WgOiIuRoyK+JMPgZqLNW2AuxFr7af8siqTVzwaZ4IV8RKS8eclmfQ9O0A+mZi/nTkgOLA8aBVB3s3ehWL15eRtjB6klLrwCZF3C8QJrtgnMuNIu8XcKmMhk5q5KIPRuxMx9J9XXF07KXmdphQq0KZ5Wfw50fOCKPvwbwbK6Agei4AtTgL5M5Pfg4IDl63HPRwXjBHdSOYwBjawxEKv1bB7a6g8fZdy9mCJoZVuhNH7uIi7KRGQ6xUyzHDbO4DFO88uRMAGqEdK2KSHe58giaaik5O2FMaPNmEhM8vCmv3A+RWcbkdEJCTpv0Ewm7OOfN7j7kKDzjpi5JOlGV3ew3L02PH6lrFch6pSO1J1nL5YdEorsg+THzfMndcA0vuRz5gip1C3E396FLi0B4AbxaEmyYZgHag99CMst+ff2MpeyEGN3fPkcD/cJTgx6MEe58BSfDlKMHe30ES/PkowV+OEvz1KMH339V4nPW26Yhx26JY4v6/BhflmMrh6wWHG7TWv1piKDzwNNLeEOrixvy9n4Gu9508nQxgnF2pL1R3zss2ltp5v5nP1PGsRPDfSMedbvUzN0N5HuzBPFYFQEYzYjU9nTgjXHSw9+ky6SFjq3BUqXHxJuqUMSYo9+tiFE6NV2em46fgyISYT0meUMzcRs2j5T4louEmx4kyK0sTO8GXwonuMZvzRtCh0kivoH9YzuNiLtEnJEli+B/7eeAoOmpp6UQ2F4Z2v1PmznlNXk1pONczrIUFfBQTI5EZGi2V2kLditT+gq+vDdwJgDfpDwLU3AlLKlEDNeVOI5PTJAVRWirQlgMiQJO3QlswEuXQfVpp76BlD8k+VsbpyuhsA6O6K5fXVwTKI1ewoN1FuVTWypHhAFRBLUxqpDZp9bJLeXFxHBuoEG4hi82lhxmTAGQt9JIrq1CtUMNFeDhRYjzULXdAX7vNn6AKWBgvLpZiB63FriMzLatHrsOGkQUVtQNXQsO4wUQ/lf0dQ1nNYK7GwkS44JZERXr8TLpZL0HWM4In04gI0LMIZ2xNzVbkZHsh9ZEbayWovTKFdCdc2NoJbPjqGWwM1bd+urRTdI3t60pFsKj0VxGxRqW/ShflBa4ujHKuenFIsK/WVooeV2srFHbzWpJ1sFxK0qHy1bNnqHWs4uzV3VjFqLnbqvi4rgcHjFUYh7asclgHPDgsvM4gGOACG4ek7EXIywhdDz6Ngia4cNG5SwwVFzrgrHtvrKTPceGwIRmN7aepxMWFeWTFWNHrKGkXgRFxAVeYwk9I9If5YQ6Ljz7u5Sw1H8dSK9zH5Qw6p7iloC3d51OGpag4ijbz5AtbrzPVUOxbRBa2iuCxlIqoAT5mv3H9JY/oDhQOfBMa3hRmb2tUyhngtTifAeaPiyFbtq5czvNPhA/Fm+F/X/q8Z+nOO3OFQyja+bS0V7piHWdyHIFZWZGmuoPicg8hwEc36S2wrZMUWUtd4FFz40KEX6zaBskttVdL4K/tq6hw2xzyxGuR2tS4ig6pzFCJdIanDjnrvqq9pqabznvUgm6FvromTEdwqHt2YVGbHLUadiP6iI5gNy+pee95RL9OiWigY3F7miPxL0+KDa7glrvIKmcN01upwSgpnCuEgp24hgsdR6yMoVP5rE/liVE85KzJqaIAVn+Jx949oyPCvW95XbcL2AG0AFvnp2/od1yPvE1fCZ4aj3zPvLTfv+JbiQJMaS41t9fSyQYUm42VE+jaUTc8dvgJrJYEziw1aFYhbyiL6CVUBuZOkgvXyWJAxphHoORgebMZIdcHJw8PyZ7ePYs1UMPVcEZHvG90EB/66Op0McLWq9btGOHWfSnX44PCxZvzt1tl+nrxKA2zihirgU6oKU/0jawUqHEdsLJOs8ZJU/eFsAHYi5mL7+EVa5hE9PRyUTlJLkh1FUsq+joDMhgyYHzTIm5+8V5UFJEe8GgdvMPja0kPq1wzgUcG/BTuGfCTuAVmDPTSvlz+omCNySqwHbV06ed3Pu3IacvH1IXYSjkCFc9rvKF9AHPJCbi420asPTLoho0IbObpVknDFdEiFh8oUJZr1yqOoXAJhVwMhWNxBErKZA9ApvS1InAuNhVSXCf+x9IXlkxKZm+Ne7AFNgj+OiOuTC7CS3TfIzaGBH66pS9/6/6gwshylP1e27MTt7PvsaiBgmD6qix0Vei2V+RMR95xe/Wdm8VHjCt3GIhRvFAneg3kBfay5TUIc+JvoW2oAeJjWxwu0YOC03UDq5e1lZuKj1fASkRjb1K3ES7ZNmL+DV0eu76/W6qztc/31sZHVe69b1jvydYO7MFVwybreO9NU6p5xDwzuiG1bHMXbSfndky4CP6mHrl4ISdN3TRRlcdbEfC+KxVhRNb0Qh3BTvyPeRSDwh+bQXSWl0yJerGGrP03eIsQs4QfF7PQ1shxtkBA60T7RU6H/Idp+BkLi01oOGK4UOYRXuFhJzjQfv6OA++nd7wHY8lAzeBu44TTkt4vD4jOUKYnvQyMsFUeKmerZks0FS5s5qXWKoGlpY8mPHbV6bqRH9G4sZeNCuKXBzNhqb2H4jRUozZ5O9o4/2JWysVunIx84ntZc4dylJcaGJPixH2aiYxwhv0VmwOmHBNv1HHOX4Kk5lFNigi37CmGoGot9MWqmmcBccTlMzBRYTkrTv2kSGaB+mCQ0vxM3Z3ivC+luU
H2VZ25tjP1F3QIX7dHfOUt6XkO3O4+8sxzvQy4kcPbH2Kgz/nu6dVbcLUvX+7gFaXjJX0/C36sP3Mlur0gO0zVakm7Y1TwQbODD5NokoJbqeOvpMT5R2yqOcG2pWORXD7THWyYFwhhQG7IlPUrwRkLChnaRGgxT768j/lKmqV1T5sdEN3kcLGq9iH1w2fB0tfuU62twmr7fpttvrHYoB9hx+9eGbXQS309QGHm9iNo/FdAsc/JQ9fI6CsDhigNJtVdkcla3YOXpFlxfPgAZ3JSI5TDw+NuZoNF822fa147q7qlTPcJ7sN0JU1LdaJ9AGFJNaBSOCyhD2T8q33N2zk8NgI7yGKvSJ/Yj1++fEd+59aCrnnqtGNAv3V6YHgWv2eW1df4O9inwrMh4UNbAAobaoIHMrIRwYFX0fcs1W+j72lCqxOaZJ//CibqcBot0CDzaFtGNrxrrUw7h4VRlPG7L/m+v5ZHkt4JBpwIPtcRdtGXRcq18tZLWauRUTdmrTZuV1z9dwqSFLiK60Zx5Kmz3/kJOnfD41+Mr+8vo1J+zRPGh/x80kV8UnWq+9bGirv/xiHCfQ/WeflcG+HCWOocVUv3p+BpR+GOoXj5M/iDjY1HPrq0u7E+4itQB9MT0bPNj2A6mO3ISVMRxi9fN1xoDnwKt55h/ebIwYVN0tQ/Bz9Is73JPsiTWJSql2wJMY7wBCNbneZbbDRe8aMm30OvNR52uVaeibdEtr8Ds6HSQvb/0MA7F8COIeyQC3MSnlC2xfORpDxbvMMmv9JfCn4SESc/VfuI94tfnLHfwXs696WZoR3Y+6CWs5fi16GPDLMwvBfF3nuEx/+bDA5s5vYZMj+q7zZDf//AIRlVdtaw1tGRVR7PIIWFN4v/OloMxqWyooWuBlaV+SK8uQqbqAE/hU6ym0ds6WiSLR+B8MnsI/33QJPhpkPMb7C+8Z7a2fin5AjPOfVIzmSKwVkWP76G1xmcQ5TOBz0XfgUPMvH8uAxqqe4TQcNz8NrKn3tCm0ULaiu8fB+iuTiv6v0fvB1BO4lfZ2kpgTcG0EGXeaj3pEFz0GAGOeKQfqF9Dp/2KdTzHfKmJtcO/p5mUCMFUYOmBnNy6RtZn0CAsJqXf6eAvpGWtxWoBUEUaL9UCKiZ2/DPk5V/Xpq+kfVTEG5I4R/Mu9UyMLVjacvuqoW+8Xvg/XDTjxoG/9mmagJtyUSVchM5KErE9GEirdtaJ1s/yjY69YfmeVaiwMv5XwAAAP//WmxoCA==" } diff --git a/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Cisco-ASA-2.golden.json b/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Cisco-ASA-2.golden.json index 05db12395794..dc73be6acf35 100644 --- a/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Cisco-ASA-2.golden.json +++ b/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Cisco-ASA-2.golden.json @@ -6,6 +6,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 763, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -51,6 +52,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 844, "community_id": "1:XaNCBbXLPvRPq4YmlYj+3C8LbyE=", "direction": "unknown", "iana_number": 6, @@ -60,6 +62,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.2", "locality": "private", "port": 61775 @@ -73,6 +76,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 6207, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -118,6 +122,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 6288, "community_id": "1:ApLoUXZvqTmJTtS6gao5Sqg0kgQ=", "direction": "unknown", "iana_number": 6, @@ -127,6 +132,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.2", "locality": "private", "port": 61776 @@ -140,6 +146,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 6207, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -185,6 +192,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 6288, "community_id": "1:ApLoUXZvqTmJTtS6gao5Sqg0kgQ=", "direction": "unknown", "iana_number": 6, @@ -194,6 +202,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.2", "locality": "private", "port": 61776 @@ -207,6 +216,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 9075, "ip": "192.168.0.18", "locality": "private", "port": 80 @@ -252,6 +262,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 9156, "community_id": "1:64faG50xtU56JMAADXSJ0Lro5iE=", "direction": "unknown", "iana_number": 6, @@ -261,6 +272,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.1", "locality": "private", "port": 56635 @@ -274,6 +286,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 9075, "ip": "192.168.0.18", "locality": "private", "port": 80 @@ -319,6 +332,7 @@ "type": "netflow_flow" }, 
"network": { + "bytes": 9156, "community_id": "1:64faG50xtU56JMAADXSJ0Lro5iE=", "direction": "unknown", "iana_number": 6, @@ -328,6 +342,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.1", "locality": "private", "port": 56635 @@ -341,6 +356,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 5536, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -386,6 +402,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 5617, "community_id": "1:8hx//bjfEFu4sYomYN8bh9DeMaQ=", "direction": "unknown", "iana_number": 6, @@ -395,6 +412,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.2", "locality": "private", "port": 61773 @@ -408,6 +426,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 5536, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -453,6 +472,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 5617, "community_id": "1:8hx//bjfEFu4sYomYN8bh9DeMaQ=", "direction": "unknown", "iana_number": 6, @@ -462,6 +482,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 81, "ip": "192.168.0.2", "locality": "private", "port": 61773 @@ -543,6 +564,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 14179, "ip": "192.168.0.18", "locality": "private", "port": 80 @@ -588,6 +610,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 14248, "community_id": "1:IZ8RrSqt8oeb2F2Rp9296zm54bc=", "direction": "unknown", "iana_number": 6, @@ -597,6 +620,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 69, "ip": "192.168.0.1", "locality": "private", "port": 56649 @@ -610,6 +634,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 14179, "ip": "192.168.0.18", "locality": "private", "port": 80 @@ -655,6 +680,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 14248, "community_id": "1:IZ8RrSqt8oeb2F2Rp9296zm54bc=", "direction": "unknown", "iana_number": 6, @@ -664,6 +690,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 69, "ip": "192.168.0.1", "locality": "private", "port": 56649 @@ -745,6 +772,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 14178, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -790,6 +818,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 14247, "community_id": "1:E1vNamQGw5X+X+vT1g7ui6Nc3O0=", "direction": "unknown", "iana_number": 6, @@ -799,6 +828,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 69, "ip": "192.168.0.2", "locality": "private", "port": 61777 @@ -812,6 +842,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 14178, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -857,6 +888,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 14247, "community_id": "1:E1vNamQGw5X+X+vT1g7ui6Nc3O0=", "direction": "unknown", "iana_number": 6, @@ -866,6 +898,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 69, "ip": "192.168.0.2", "locality": "private", "port": 61777 @@ -947,6 +980,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 881, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -992,6 +1026,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 956, "community_id": "1:pkwcoe/zjCLerUgj+HGAwwt4wV8=", "direction": "unknown", "iana_number": 6, @@ -1001,6 +1036,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 75, "ip": "192.168.0.1", "locality": "private", "port": 56650 @@ -1014,6 +1050,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 881, "ip": "192.168.0.17", "locality": "private", "port": 80 @@ -1059,6 +1096,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 956, "community_id": "1:pkwcoe/zjCLerUgj+HGAwwt4wV8=", "direction": "unknown", 
"iana_number": 6, @@ -1068,6 +1106,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 75, "ip": "192.168.0.1", "locality": "private", "port": 56650 @@ -1149,6 +1188,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 14178, "ip": "192.168.0.18", "locality": "private", "port": 80 @@ -1194,6 +1234,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 14247, "community_id": "1:35/w0D/WO1QvBp8O+Vd95Nb+tt4=", "direction": "unknown", "iana_number": 6, @@ -1203,6 +1244,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 69, "ip": "192.168.0.1", "locality": "private", "port": 56651 @@ -1216,6 +1258,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 14178, "ip": "192.168.0.18", "locality": "private", "port": 80 @@ -1261,6 +1304,7 @@ "type": "netflow_flow" }, "network": { + "bytes": 14247, "community_id": "1:35/w0D/WO1QvBp8O+Vd95Nb+tt4=", "direction": "unknown", "iana_number": 6, @@ -1270,6 +1314,7 @@ "ip": "192.0.2.1" }, "source": { + "bytes": 69, "ip": "192.168.0.1", "locality": "private", "port": 56651 diff --git a/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Huawei-Netstream.golden.json b/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Huawei-Netstream.golden.json index 223e3bc161d1..99db67a6ed81 100644 --- a/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Huawei-Netstream.golden.json +++ b/x-pack/filebeat/input/netflow/testdata/golden/Netflow-9-Huawei-Netstream.golden.json @@ -6,6 +6,7 @@ "Meta": null, "Fields": { "destination": { + "bytes": 0, "ip": "10.111.112.204", "locality": "private", "port": 2598 diff --git a/x-pack/filebeat/input/netflow/testdata/golden/ipfix_cisco.pcap.golden.json b/x-pack/filebeat/input/netflow/testdata/golden/ipfix_cisco.pcap.golden.json index 08f146cddcb3..2ff196e79506 100644 --- a/x-pack/filebeat/input/netflow/testdata/golden/ipfix_cisco.pcap.golden.json +++ b/x-pack/filebeat/input/netflow/testdata/golden/ipfix_cisco.pcap.golden.json @@ -5,9 +5,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 719, + "packets": 5 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -56,13 +64,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 719, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 5, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 719, + "packets": 5 } }, "Private": null, @@ -72,9 +90,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 1477, + "packets": 6 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -123,13 +149,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 1477, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 6, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 1477, + "packets": 6 } }, "Private": null, @@ -139,9 +175,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 1, + "packets": 1 + }, + "destination": { + "bytes": 0, + "packets": 1 + }, "event": { 
"action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -190,13 +234,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 1, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 2, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 1 + }, + "source": { + "bytes": 1, + "packets": 1 } }, "Private": null, @@ -206,9 +260,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 108580, + "packets": 79 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -257,13 +319,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 108580, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 79, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 108580, + "packets": 79 } }, "Private": null, @@ -273,9 +345,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 342, + "packets": 5 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -324,13 +404,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 342, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 5, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 342, + "packets": 5 } }, "Private": null, @@ -340,9 +430,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 1851, + "packets": 17 + }, + "destination": { + "bytes": 9437, + "packets": 18 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -391,13 +489,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 11288, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 35, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 9437, + "packets": 18 + }, + "source": { + "bytes": 1851, + "packets": 17 } }, "Private": null, @@ -407,9 +515,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 51480, + "packets": 39 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -458,13 +574,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 51480, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 39, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 51480, + "packets": 39 } }, "Private": null, @@ -474,9 +600,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 5135, + "packets": 55 + }, + 
"destination": { + "bytes": 36894, + "packets": 47 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -525,13 +659,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 42029, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 102, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 36894, + "packets": 47 + }, + "source": { + "bytes": 5135, + "packets": 55 } }, "Private": null, @@ -541,9 +685,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 6533, + "packets": 14 + }, + "destination": { + "bytes": 6400, + "packets": 20 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -592,13 +744,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 12933, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 34, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 6400, + "packets": 20 + }, + "source": { + "bytes": 6533, + "packets": 14 } }, "Private": null, @@ -608,9 +770,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 5684, + "packets": 491 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -659,13 +829,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 5684, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 491, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 5684, + "packets": 491 } }, "Private": null, @@ -675,9 +855,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 4965, + "packets": 13 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -726,13 +914,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 4965, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 13, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 4965, + "packets": 13 } }, "Private": null, @@ -742,9 +940,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 138, + "packets": 4 + }, + "destination": { + "bytes": 0, + "packets": 2 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -793,13 +999,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 138, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 6, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 2 + }, + "source": { + "bytes": 138, + "packets": 4 } }, "Private": null, @@ -809,9 +1025,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, 
"Fields": { + "client": { + "bytes": 1, + "packets": 1 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -860,13 +1084,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 1, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 1, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 1, + "packets": 1 } }, "Private": null, @@ -876,9 +1110,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 6079, + "packets": 10 + }, + "destination": { + "bytes": 1571, + "packets": 13 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -927,13 +1169,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 7650, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 23, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 1571, + "packets": 13 + }, + "source": { + "bytes": 6079, + "packets": 10 } }, "Private": null, @@ -943,9 +1195,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 2807, + "packets": 6 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -994,13 +1254,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 2807, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 6, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 2807, + "packets": 6 } }, "Private": null, @@ -1010,9 +1280,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 0, + "packets": 1 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1061,13 +1339,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 0, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 1, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 0, + "packets": 1 } }, "Private": null, @@ -1077,9 +1365,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 1877, + "packets": 11 + }, + "destination": { + "bytes": 3409, + "packets": 7 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1128,13 +1424,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 5286, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 18, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 3409, + "packets": 7 + }, + "source": { + "bytes": 1877, + "packets": 11 } }, "Private": null, @@ -1144,9 +1450,17 @@ 
"Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 2255, + "packets": 7 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1195,13 +1509,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 2255, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 7, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 2255, + "packets": 7 } }, "Private": null, @@ -1211,9 +1535,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 538, + "packets": 5 + }, + "destination": { + "bytes": 0, + "packets": 0 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1262,13 +1594,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 538, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 5, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 0 + }, + "source": { + "bytes": 538, + "packets": 5 } }, "Private": null, @@ -1278,9 +1620,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 1487, + "packets": 21 + }, + "destination": { + "bytes": 6305, + "packets": 15 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1329,13 +1679,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 7792, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 36, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 6305, + "packets": 15 + }, + "source": { + "bytes": 1487, + "packets": 21 } }, "Private": null, @@ -1345,9 +1705,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 3110, + "packets": 7 + }, + "destination": { + "bytes": 1973, + "packets": 10 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1396,13 +1764,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 5083, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 17, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 1973, + "packets": 10 + }, + "source": { + "bytes": 3110, + "packets": 7 } }, "Private": null, @@ -1412,9 +1790,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 2, + "packets": 4 + }, + "destination": { + "bytes": 2, + "packets": 4 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1463,13 +1849,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 4, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 8, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 2, + "packets": 4 + }, + "source": { + "bytes": 2, + 
"packets": 4 } }, "Private": null, @@ -1479,9 +1875,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 2, + "packets": 2 + }, + "destination": { + "bytes": 0, + "packets": 2 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1530,13 +1934,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 2, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 4, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 2 + }, + "source": { + "bytes": 2, + "packets": 2 } }, "Private": null, @@ -1546,9 +1960,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 0, + "packets": 4 + }, + "destination": { + "bytes": 0, + "packets": 2 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1597,13 +2019,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 0, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 6, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 2 + }, + "source": { + "bytes": 0, + "packets": 4 } }, "Private": null, @@ -1613,9 +2045,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 1005, + "packets": 4 + }, + "destination": { + "bytes": 174, + "packets": 3 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1664,13 +2104,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 1179, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 7, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 174, + "packets": 3 + }, + "source": { + "bytes": 1005, + "packets": 4 } }, "Private": null, @@ -1680,9 +2130,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 138, + "packets": 4 + }, + "destination": { + "bytes": 0, + "packets": 2 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1731,13 +2189,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 138, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 6, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 2 + }, + "source": { + "bytes": 138, + "packets": 4 } }, "Private": null, @@ -1747,9 +2215,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 31, + "packets": 2 + }, + "destination": { + "bytes": 0, + "packets": 1 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1798,13 +2274,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 31, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 3, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 0, + "packets": 1 + }, + 
"source": { + "bytes": 31, + "packets": 2 } }, "Private": null, @@ -1814,9 +2300,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 13482, + "packets": 17 + }, + "destination": { + "bytes": 8989, + "packets": 19 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1865,13 +2359,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 22471, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 36, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 8989, + "packets": 19 + }, + "source": { + "bytes": 13482, + "packets": 17 } }, "Private": null, @@ -1881,9 +2385,17 @@ "Timestamp": "2018-07-03T10:47:00Z", "Meta": null, "Fields": { + "client": { + "bytes": 28373, + "packets": 133 + }, + "destination": { + "bytes": 233345, + "packets": 236 + }, "event": { "action": "netflow_flow", - "category": "network_traffic", + "category": "network_session", "created": "2018-07-03T10:47:00Z", "kind": "event" }, @@ -1932,13 +2444,23 @@ "waasoptimization_segment": 16 }, "network": { + "bytes": 261718, "community_id": "1:idwO/QHAjbcGlF1bfQE9dPuu7T0=", "direction": "unknown", "iana_number": 6, + "packets": 369, "transport": "tcp" }, "observer": { "ip": "10.101.255.2" + }, + "server": { + "bytes": 233345, + "packets": 236 + }, + "source": { + "bytes": 28373, + "packets": 133 } }, "Private": null, diff --git a/x-pack/filebeat/input/s3/input.go b/x-pack/filebeat/input/s3/input.go index 3ef20ee1041d..eef16d918fae 100644 --- a/x-pack/filebeat/input/s3/input.go +++ b/x-pack/filebeat/input/s3/input.go @@ -232,6 +232,7 @@ func (p *s3Input) Wait() { func (p *s3Input) processor(queueURL string, messages []sqs.Message, visibilityTimeout int64, svcS3 s3iface.ClientAPI, svcSQS sqsiface.ClientAPI) { var wg sync.WaitGroup numMessages := len(messages) + p.logger.Debugf("Processing %v messages", numMessages) wg.Add(numMessages * 2) // process messages received from sqs @@ -251,14 +252,16 @@ func (p *s3Input) processMessage(svcS3 s3iface.ClientAPI, message sqs.Message, w p.logger.Error(errors.Wrap(err, "handleSQSMessage failed")) return } + p.logger.Debugf("handleSQSMessage succeed and returned %v sets of S3 log info", len(s3Infos)) // read from s3 object and create event for each log line err = p.handleS3Objects(svcS3, s3Infos, errC) if err != nil { err = errors.Wrap(err, "handleS3Objects failed") p.logger.Error(err) - errC <- err + return } + p.logger.Debugf("handleS3Objects succeed") } func (p *s3Input) processorKeepAlive(svcSQS sqsiface.ClientAPI, message sqs.Message, queueURL string, visibilityTimeout int64, wg *sync.WaitGroup, errC chan error) { @@ -288,13 +291,14 @@ func (p *s3Input) processorKeepAlive(svcSQS sqsiface.ClientAPI, message sqs.Mess } return case <-time.After(time.Duration(visibilityTimeout/2) * time.Second): + p.logger.Warn("Half of the set visibilityTimeout passed, visibility timeout needs to be updated") // If half of the set visibilityTimeout passed and this is // still ongoing, then change visibility timeout. 
err := p.changeVisibilityTimeout(queueURL, visibilityTimeout, svcSQS, message.ReceiptHandle) if err != nil { p.logger.Error(errors.Wrap(err, "change message visibility failed")) } - p.logger.Infof("Message visibility timeout updated to %v", visibilityTimeout) + p.logger.Infof("Message visibility timeout updated to %v seconds", visibilityTimeout) } } } @@ -370,8 +374,11 @@ func (p *s3Input) handleS3Objects(svc s3iface.ClientAPI, s3Infos []s3Info, errC // read from s3 object reader, err := p.newS3BucketReader(svc, s3Info) if err != nil { - return errors.Wrap(err, "newS3BucketReader failed") + err = errors.Wrap(err, "newS3BucketReader failed") + s3Context.setError(err) + return err } + if reader == nil { continue } @@ -382,7 +389,7 @@ func (p *s3Input) handleS3Objects(svc s3iface.ClientAPI, s3Infos []s3Info, errC err := p.decodeJSONWithKey(decoder, objectHash, s3Info, s3Context) if err != nil { err = errors.Wrapf(err, "decodeJSONWithKey failed for %v", s3Info.key) - s3Context.Fail(err) + s3Context.setError(err) return err } return nil @@ -403,12 +410,14 @@ func (p *s3Input) handleS3Objects(svc s3iface.ClientAPI, s3Infos []s3Info, errC err = p.forwardEvent(event) if err != nil { err = errors.Wrapf(err, "forwardEvent failed for %v", s3Info.key) - s3Context.Fail(err) + s3Context.setError(err) return err } return nil } else if err != nil { - return errors.Wrapf(err, "ReadString failed for %v", s3Info.key) + err = errors.Wrapf(err, "ReadString failed for %v", s3Info.key) + s3Context.setError(err) + return err } // create event per log line @@ -417,7 +426,7 @@ func (p *s3Input) handleS3Objects(svc s3iface.ClientAPI, s3Infos []s3Info, errC err = p.forwardEvent(event) if err != nil { err = errors.Wrapf(err, "forwardEvent failed for %v", s3Info.key) - s3Context.Fail(err) + s3Context.setError(err) return err } } @@ -610,11 +619,6 @@ func s3ObjectHash(s3Info s3Info) string { return prefix[:10] } -func (c *s3Context) Fail(err error) { - c.setError(err) - c.done() -} - func (c *s3Context) setError(err error) { // only care about the last error for now // TODO: add "Typed" error to error for context diff --git a/x-pack/filebeat/module/cisco/_meta/kibana/7/dashboard/Filebeat-Cisco-ASA.json b/x-pack/filebeat/module/cisco/_meta/kibana/7/dashboard/Filebeat-Cisco-ASA.json index 5d50368c9f29..7a585fbf501a 100644 --- a/x-pack/filebeat/module/cisco/_meta/kibana/7/dashboard/Filebeat-Cisco-ASA.json +++ b/x-pack/filebeat/module/cisco/_meta/kibana/7/dashboard/Filebeat-Cisco-ASA.json @@ -764,7 +764,7 @@ "id": "2", "params": { "customLabel": "ACL ID", - "field": "cisco.asa.list_id", + "field": "cisco.asa.rule_name", "missingBucket": false, "missingBucketLabel": "Missing", "order": "desc", @@ -878,7 +878,7 @@ "params": { "aggregate": "concat", "customLabel": "Sample message", - "field": "log.original", + "field": "event.original", "size": 1, "sortField": "@timestamp", "sortOrder": "desc" diff --git a/x-pack/functionbeat/functionbeat.reference.yml b/x-pack/functionbeat/functionbeat.reference.yml index e04b3de57b6e..61ebd49d0f82 100644 --- a/x-pack/functionbeat/functionbeat.reference.yml +++ b/x-pack/functionbeat/functionbeat.reference.yml @@ -1286,6 +1286,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. 
+#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. diff --git a/x-pack/functionbeat/magefile.go b/x-pack/functionbeat/magefile.go index 6a10ee93f30f..ab3432c720a5 100644 --- a/x-pack/functionbeat/magefile.go +++ b/x-pack/functionbeat/magefile.go @@ -8,6 +8,7 @@ package main import ( "fmt" + "os" "path/filepath" "time" @@ -147,6 +148,36 @@ func GoTestUnit() { mg.Deps(unittest.GoUnitTest) } +// BuildPkgForFunctions creates a folder named pkg and adds functions to it. +// This makes testing the manager more comfortable. +func BuildPkgForFunctions() error { + mg.Deps(Update, Build) + + err := os.MkdirAll("pkg", 700) + if err != nil { + return err + } + + filesToCopy := map[string]string{ + filepath.Join("provider", "aws", "functionbeat-aws"): filepath.Join("pkg", "functionbeat-aws"), + filepath.Join("provider", "gcp", "pubsub", "pubsub.go"): filepath.Join("pkg", "pubsub", "pubsub.go"), + filepath.Join("provider", "gcp", "storage", "storage.go"): filepath.Join("pkg", "storage", "storage.go"), + filepath.Join("provider", "gcp", "build", "pubsub", "vendor"): filepath.Join("pkg", "pubsub", "vendor"), + filepath.Join("provider", "gcp", "build", "storage", "vendor"): filepath.Join("pkg", "storage", "vendor"), + } + for src, dest := range filesToCopy { + c := &devtools.CopyTask{ + Source: src, + Dest: dest, + } + err = c.Execute() + if err != nil { + return err + } + } + return nil +} + // BuildSystemTestBinary build a binary for testing that is instrumented for // testing and measuring code coverage. The binary is only instrumented for // coverage when TEST_COVERAGE=true (default is false). diff --git a/x-pack/metricbeat/include/list.go b/x-pack/metricbeat/include/list.go index 0625c8daf96d..e1e6943c058a 100644 --- a/x-pack/metricbeat/include/list.go +++ b/x-pack/metricbeat/include/list.go @@ -28,6 +28,9 @@ import ( _ "github.com/elastic/beats/x-pack/metricbeat/module/coredns/stats" _ "github.com/elastic/beats/x-pack/metricbeat/module/googlecloud" _ "github.com/elastic/beats/x-pack/metricbeat/module/googlecloud/stackdriver" + _ "github.com/elastic/beats/x-pack/metricbeat/module/ibmmq" + _ "github.com/elastic/beats/x-pack/metricbeat/module/istio" + _ "github.com/elastic/beats/x-pack/metricbeat/module/istio/mesh" _ "github.com/elastic/beats/x-pack/metricbeat/module/mssql" _ "github.com/elastic/beats/x-pack/metricbeat/module/mssql/performance" _ "github.com/elastic/beats/x-pack/metricbeat/module/mssql/transaction_log" diff --git a/x-pack/metricbeat/metricbeat.reference.yml b/x-pack/metricbeat/metricbeat.reference.yml index d9f5c41f435a..c5ab17ef96d9 100644 --- a/x-pack/metricbeat/metricbeat.reference.yml +++ b/x-pack/metricbeat/metricbeat.reference.yml @@ -73,12 +73,13 @@ metricbeat.modules: #- fsstat # File system summary metrics #- raid # Raid #- socket # Sockets and connection info (linux only) + #- service # systemd service information enabled: true period: 10s processes: ['.*'] # Configure the metric types that are included by these metricsets. - cpu.metrics: ["percentages"] # The other available options are normalized_percentages and ticks. 
+ cpu.metrics: ["percentages","normalized_percentages"] # The other available option is ticks. core.metrics: ["percentages"] # The other available option is ticks. # A list of filesystem types to ignore. The filesystem metricset will not @@ -131,6 +132,9 @@ metricbeat.modules: # Diskio configurations #diskio.include_devices: [] + # Filter systemd services by status or sub-status + #service.state_filter: [] + #------------------------------- Activemq Module ------------------------------- - module: activemq metricsets: ['broker', 'queue', 'topic'] @@ -503,6 +507,42 @@ metricbeat.modules: # fields: # added to the the response in root. overwrites existing fields # key: "value" +#-------------------------------- IBM MQ Module -------------------------------- +- module: ibmmq + metricsets: ['qmgr'] + period: 10s + hosts: ['localhost:9157'] + + # This module uses the Prometheus collector metricset, all + # the options for this metricset are also available here. + metrics_path: /metrics + + # The custom processor is responsible for filtering Prometheus metrics + # not stricly related to the IBM MQ domain, e.g. system load, process, + # metrics HTTP server. + processors: + - script: + lang: javascript + source: > + function process(event) { + var metrics = event.Get("prometheus.metrics"); + Object.keys(metrics).forEach(function(key) { + if (!(key.match(/^ibmmq_.*$/))) { + event.Delete("prometheus.metrics." + key); + } + }); + metrics = event.Get("prometheus.metrics"); + if (Object.keys(metrics).length == 0) { + event.Cancel(); + } + } + +#-------------------------------- Istio Module -------------------------------- +- module: istio + metricsets: ["mesh"] + period: 10s + hosts: ["localhost:42422"] + #------------------------------- Jolokia Module ------------------------------- - module: jolokia #metricsets: ["jmx"] @@ -935,10 +975,9 @@ metricbeat.modules: metricsets: - query period: 10s - hosts: ["localhost"] + hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"] driver: "postgres" - datasource: "user=myuser password=mypassword dbname=mydb sslmode=disable" sql_query: "select now()" @@ -2238,6 +2277,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `:`. +#monitoring.cloud.auth: + #================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental. 
diff --git a/x-pack/metricbeat/module/aws/_meta/config.yml b/x-pack/metricbeat/module/aws/_meta/config.yml index 513719e56815..0fa8436d5cd7 100644 --- a/x-pack/metricbeat/module/aws/_meta/config.yml +++ b/x-pack/metricbeat/module/aws/_meta/config.yml @@ -17,12 +17,14 @@ - module: aws period: 5m metricsets: + - dynamodb - ebs - ec2 - elb + - lambda + - rds - sns - sqs - - rds - module: aws period: 12h metricsets: diff --git a/x-pack/metricbeat/module/aws/_meta/docs.asciidoc b/x-pack/metricbeat/module/aws/_meta/docs.asciidoc index 0b3565a001ab..1dd8c2aaf12f 100644 --- a/x-pack/metricbeat/module/aws/_meta/docs.asciidoc +++ b/x-pack/metricbeat/module/aws/_meta/docs.asciidoc @@ -4,7 +4,7 @@ This module periodically fetches monitoring metrics from AWS CloudWatch using https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html[GetMetricData API] for AWS services. Note: extra AWS charges on GetMetricData API requests will be generated by this module. -The default metricsets are `ec2`, `sqs`, `s3_request`, `s3_daily_storage`, `cloudwatch` and `rds`. +All metrics are enabled by default. [float] == Module-specific configuration notes @@ -26,8 +26,12 @@ image::./images/metricbeat-aws-overview.png[] [float] == Metricsets -Currently, we have `ec2`, `sqs`, `s3_request`, `s3_daily_storage`, `cloudwatch`, `billing`,`ebs`, `elb`, `rds`, `sns`, `sqs` and `usage` metricset in `aws` module. -Collecting `tags` for `ec2`, `cloudwatch`, `ebs` and `elb` metricset is supported. +Currently, we have `billing`, `cloudwatch`, `dynamodb`, `ebs`, `ec2`, `elb`, +`lambda`, `rds`, `s3_daily_storage`, `s3_request`, `sns`, `sqs` and `usage` +metricset in `aws` module. + +Collecting `tags` for `ec2`, `cloudwatch`, and metricset created based on +`cloudwatch` using light module is supported. * *tags.*: Tag key value pairs from aws resources. A tag is a label that user assigns to an AWS resource. @@ -105,7 +109,7 @@ GetMetricData max page size: 100, based on https://docs.aws.amazon.com/AmazonClo | CloudWatch ListMetrics | Total number of results / ListMetrics max page size | Per region per namespace per collection period | CloudWatch GetMetricData | Total number of results / GetMetricData max page size | Per region per namespace per collection period |=== -`billing`, `ebs`, `elb`, `sns` and `usage` are the same as `cloudwatch` metricset. +`billing`, `ebs`, `elb`, `sns`, `usage` and `lambda` are the same as `cloudwatch` metricset. 
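As a rough worked example of the call counts above, assuming the GetMetricData page size of 100 and an illustrative 230 metrics listed for one namespace in one region:

package main

import "fmt"

// Illustrative only: estimate CloudWatch GetMetricData calls per region,
// per namespace, per collection period, following the table above.
func pages(results, pageSize int) int {
    return (results + pageSize - 1) / pageSize // round up to whole pages
}

func main() {
    const getMetricDataPageSize = 100 // stated above, per the AWS API documentation
    listMetricsResults := 230         // hypothetical metric count for one namespace

    fmt.Println("GetMetricData calls per period:", pages(listMetricsResults, getMetricDataPageSize)) // 3
}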
[float] === `ec2` diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-billing-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-billing-overview.json index 7fb9ee9b8860..be1b2935b027 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-billing-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-billing-overview.json @@ -32,7 +32,7 @@ "panelIndex": "89dccfe8-a25e-44ea-afdb-ff01ab1f05d6", "panelRefName": "panel_0", "title": "AWS Account Filter", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -48,7 +48,7 @@ "panelIndex": "26670498-b079-4447-bbc8-e4ca8215898c", "panelRefName": "panel_1", "title": "Estimated Billing Chart", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -56,15 +56,15 @@ }, "gridData": { "h": 11, - "i": "04159643-33c0-4f01-80f1-fb8d212ed959", + "i": "221aab02-2747-4d84-9dde-028ccd51bdce", "w": 16, "x": 0, "y": 5 }, - "panelIndex": "04159643-33c0-4f01-80f1-fb8d212ed959", + "panelIndex": "221aab02-2747-4d84-9dde-028ccd51bdce", "panelRefName": "panel_2", "title": "Total Estimated Charges", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -80,7 +80,7 @@ "panelIndex": "21e91e6b-0ff0-42ba-9132-6f30c5c6bbb7", "panelRefName": "panel_3", "title": "Top 5 Estimated Billing Per Service Name", - "version": "7.5.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -114,8 +114,8 @@ } ], "type": "dashboard", - "updated_at": "2019-12-02T18:54:47.255Z", - "version": "WzU1NywyXQ==" + "updated_at": "2020-01-17T15:11:20.337Z", + "version": "WzY2MiwxXQ==" }, { "attributes": { @@ -140,7 +140,7 @@ "fieldName": "cloud.account.name", "id": "1549397251041", "indexPatternRefName": "control_0_index_pattern", - "label": "AWS account", + "label": "account name", "options": { "dynamicOptions": true, "multiselect": true, @@ -172,8 +172,8 @@ } ], "type": "visualization", - "updated_at": "2019-12-02T18:50:52.450Z", - "version": "WzU1MywyXQ==" + "updated_at": "2020-01-17T14:43:10.917Z", + "version": "WzE1NywxXQ==" }, { "attributes": { @@ -295,8 +295,8 @@ } ], "type": "visualization", - "updated_at": "2019-12-02T18:53:50.818Z", - "version": "WzU1NSwyXQ==" + "updated_at": "2020-01-17T14:43:00.631Z", + "version": "WzU0LDFd" }, { "attributes": { @@ -329,7 +329,7 @@ "id": "ebb52700-1531-11ea-961e-c1db9cc6166e" } ], - "default_index_pattern": "filebeat-*", + "default_index_pattern": "metricbeat-*", "default_timefield": "@timestamp", "drop_last_bucket": 1, "gauge_color_rules": [ @@ -352,7 +352,7 @@ "fill": 0.5, "filter": { "language": "kuery", - "query": "aws.dimensions.ServiceName :* " + "query": "not aws.dimensions.ServiceName : * " }, "formatter": "number", "id": "61ca57f1-469d-11e7-af02-69e470af7417", @@ -365,16 +365,20 @@ "type": "sum" } ], + "override_index_pattern": 1, "point_size": 1, "separate_axis": 0, + "series_drop_last_bucket": 0, + "series_interval": "12h", "split_mode": "filter", "stacked": "none", + "time_range_mode": "entire_time_range", "value_template": "${{value}}" } ], "show_grid": 1, "show_legend": 1, - "time_field": null, + "time_field": "@timestamp", "type": "metric" }, "title": "Total Estimated Charges [Metricbeat AWS]", @@ -387,8 +391,8 @@ }, "references": [], "type": "visualization", - "updated_at": "2019-12-02T18:35:29.434Z", - "version": "WzU0NywyXQ==" + "updated_at": "2020-01-17T15:10:12.013Z", + "version": "WzY1OSwxXQ==" }, { "attributes": { @@ -472,9 +476,9 @@ }, "references": [], "type": 
"visualization", - "updated_at": "2019-12-02T18:49:11.978Z", - "version": "WzU1MSwyXQ==" + "updated_at": "2020-01-17T14:43:00.631Z", + "version": "WzU2LDFd" } ], - "version": "7.5.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-dynamodb-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-dynamodb-overview.json index 7f58be85cea6..c414bc2ea050 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-dynamodb-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-dynamodb-overview.json @@ -29,7 +29,7 @@ }, "panelIndex": "9642fcd0-464b-46ea-815c-cd2d9efc056d", "panelRefName": "panel_0", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -42,7 +42,7 @@ }, "panelIndex": "03807c37-c9dc-4b41-80ce-e28a13eb66e2", "panelRefName": "panel_1", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -55,7 +55,7 @@ }, "panelIndex": "18810035-5e91-409a-98d7-c03166e1f9b3", "panelRefName": "panel_2", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -68,7 +68,7 @@ }, "panelIndex": "60e6e6e4-3237-430e-80fc-63fe9b718a7e", "panelRefName": "panel_3", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -81,7 +81,7 @@ }, "panelIndex": "5e5d6509-2889-4383-bef4-9c2da0e759c6", "panelRefName": "panel_4", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -94,7 +94,7 @@ }, "panelIndex": "2db371a7-20c5-460a-a2f7-06b5b03d7b34", "panelRefName": "panel_5", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -107,7 +107,7 @@ }, "panelIndex": "1cd72292-ecba-4008-bb3c-e76c7fb6b4f4", "panelRefName": "panel_6", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -120,7 +120,7 @@ }, "panelIndex": "cea56791-2602-4079-977b-7d088727872c", "panelRefName": "panel_7", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -133,7 +133,7 @@ }, "panelIndex": "fe509a4d-700b-4092-9aae-ce14e8de103e", "panelRefName": "panel_8", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -146,7 +146,7 @@ }, "panelIndex": "3d6b1fa0-746f-40ed-8fcc-ae89013f632b", "panelRefName": "panel_9", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -159,7 +159,7 @@ }, "panelIndex": "0f9a5223-3c63-405c-a5c4-ea15416bad4a", "panelRefName": "panel_10", - "version": "7.4.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -1852,5 +1852,5 @@ "version": "WzUxMzMsMV0=" } ], - "version": "7.4.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-ebs-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-ebs-overview.json index 4ff1c55910bd..8c0ff27c60fc 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-ebs-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-ebs-overview.json @@ -30,7 +30,7 @@ "panelIndex": "1", "panelRefName": "panel_0", "title": "Volume Write Ops", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -44,7 +44,7 @@ "panelIndex": "2", "panelRefName": "panel_1", "title": "Volume Read Ops", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -58,7 +58,7 @@ "panelIndex": "3", "panelRefName": "panel_2", "title": "Volume Write Bytes", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": 
{}, @@ -72,7 +72,7 @@ "panelIndex": "4", "panelRefName": "panel_3", "title": "Volume Read Bytes", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -86,7 +86,7 @@ "panelIndex": "5", "panelRefName": "panel_4", "title": "Volume Queue Length", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -100,7 +100,7 @@ "panelIndex": "6", "panelRefName": "panel_5", "title": "Volume Total Write Time", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -114,7 +114,7 @@ "panelIndex": "7", "panelRefName": "panel_6", "title": "Volume Total Read Time", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -128,7 +128,7 @@ "panelIndex": "8", "panelRefName": "panel_7", "title": "Volume Idle Time", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -142,7 +142,7 @@ "panelIndex": "9", "panelRefName": "panel_8", "title": "EBS Volume ID Filter", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -155,7 +155,7 @@ }, "panelIndex": "10", "panelRefName": "panel_9", - "version": "7.4.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -916,5 +916,5 @@ "version": "WzU2NTQsN10=" } ], - "version": "7.4.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-elb-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-elb-overview.json index cadca10e4dbb..6164c11bed41 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-elb-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-elb-overview.json @@ -30,7 +30,7 @@ "panelIndex": "2", "panelRefName": "panel_0", "title": "HTTP 5XX Errors", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -44,7 +44,7 @@ "panelIndex": "3", "panelRefName": "panel_1", "title": "Request Count", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -58,7 +58,7 @@ "panelIndex": "4", "panelRefName": "panel_2", "title": "Unhealthy Host Count", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -72,7 +72,7 @@ "panelIndex": "5", "panelRefName": "panel_3", "title": "Healthy Host Count", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -86,7 +86,7 @@ "panelIndex": "6", "panelRefName": "panel_4", "title": "Latency in Seconds", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -100,7 +100,7 @@ "panelIndex": "7", "panelRefName": "panel_5", "title": "HTTP Backend 4XX Errors", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -114,7 +114,7 @@ "panelIndex": "8", "panelRefName": "panel_6", "title": "Backend Connection Errors", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -127,7 +127,7 @@ }, "panelIndex": "9", "panelRefName": "panel_7", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -141,7 +141,7 @@ "panelIndex": "10", "panelRefName": "panel_8", "title": "HTTP Backend 2XX", - "version": "7.4.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -1006,5 +1006,5 @@ "version": "WzI0MTIsN10=" } ], - "version": "7.4.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-lambda-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-lambda-overview.json new file mode 100644 index 000000000000..ecee23ca0805 --- /dev/null +++ 
b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-lambda-overview.json @@ -0,0 +1,654 @@ +{ + "objects": [ + { + "attributes": { + "description": "Overview of AWS Lambda Metrics", + "hits": 0, + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "optionsJSON": { + "hidePanelTitles": false, + "useMargins": true + }, + "panelsJSON": [ + { + "embeddableConfig": { + "title": "AWS Account Filter" + }, + "gridData": { + "h": 5, + "i": "8f2d1b8f-fef3-4a9a-9cc8-7f0e2c65e35a", + "w": 14, + "x": 0, + "y": 0 + }, + "panelIndex": "8f2d1b8f-fef3-4a9a-9cc8-7f0e2c65e35a", + "panelRefName": "panel_0", + "title": "AWS Account Filter", + "version": "7.3.0" + }, + { + "embeddableConfig": { + "title": "Top Errors" + }, + "gridData": { + "h": 10, + "i": "443a9699-3451-44f7-8415-99a16c3f45b3", + "w": 34, + "x": 14, + "y": 0 + }, + "panelIndex": "443a9699-3451-44f7-8415-99a16c3f45b3", + "panelRefName": "panel_1", + "title": "Top Errors", + "version": "7.3.0" + }, + { + "embeddableConfig": { + "title": "AWS Region Filter" + }, + "gridData": { + "h": 5, + "i": "60a16bf0-2979-467a-b30e-05ea29547b41", + "w": 14, + "x": 0, + "y": 5 + }, + "panelIndex": "60a16bf0-2979-467a-b30e-05ea29547b41", + "panelRefName": "panel_2", + "title": "AWS Region Filter", + "version": "7.3.0" + }, + { + "embeddableConfig": { + "title": "Lambda Function Duration in Milliseconds" + }, + "gridData": { + "h": 14, + "i": "349ef0d1-fea1-4b91-b95d-7a668914e10b", + "w": 48, + "x": 0, + "y": 10 + }, + "panelIndex": "349ef0d1-fea1-4b91-b95d-7a668914e10b", + "panelRefName": "panel_3", + "title": "Lambda Function Duration in Milliseconds", + "version": "7.3.0" + }, + { + "embeddableConfig": { + "title": "Top Invoked Lambda Functions" + }, + "gridData": { + "h": 9, + "i": "048b1577-5aed-48e5-8f90-147aa3d56c1a", + "w": 24, + "x": 0, + "y": 24 + }, + "panelIndex": "048b1577-5aed-48e5-8f90-147aa3d56c1a", + "panelRefName": "panel_4", + "title": "Top Invoked Lambda Functions", + "version": "7.3.0" + }, + { + "embeddableConfig": { + "title": "Top Throttled Lambda Functions" + }, + "gridData": { + "h": 9, + "i": "4c8e471c-45da-47be-a866-c5bfc6d28a05", + "w": 24, + "x": 24, + "y": 24 + }, + "panelIndex": "4c8e471c-45da-47be-a866-c5bfc6d28a05", + "panelRefName": "panel_5", + "title": "Top Throttled Lambda Functions", + "version": "7.3.0" + } + ], + "timeRestore": false, + "title": "[Metricbeat AWS] Lambda Overview", + "version": 1 + }, + "id": "7ac8e1d0-28d2-11ea-ba6c-49a884eb104f", + "migrationVersion": { + "dashboard": "7.3.0" + }, + "references": [ + { + "id": "deab0260-2981-11e9-86eb-a3a07a77f530", + "name": "panel_0", + "type": "visualization" + }, + { + "id": "4bf0a740-28d1-11ea-ba6c-49a884eb104f", + "name": "panel_1", + "type": "visualization" + }, + { + "id": "b5308940-7347-11e9-816b-07687310a99a", + "name": "panel_2", + "type": "visualization" + }, + { + "id": "39dfc8d0-28cf-11ea-ba6c-49a884eb104f", + "name": "panel_3", + "type": "visualization" + }, + { + "id": "1f3f00c0-28d1-11ea-ba6c-49a884eb104f", + "name": "panel_4", + "type": "visualization" + }, + { + "id": "915bcd50-28d1-11ea-ba6c-49a884eb104f", + "name": "panel_5", + "type": "visualization" + } + ], + "type": "dashboard", + "updated_at": "2020-01-09T23:14:56.640Z", + "version": "WzI3MDAsM10=" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "AWS 
Account Filter [Metricbeat AWS]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "controls": [ + { + "fieldName": "cloud.account.name", + "id": "1549397251041", + "indexPatternRefName": "control_0_index_pattern", + "label": "account name", + "options": { + "dynamicOptions": true, + "multiselect": true, + "order": "desc", + "size": 5, + "type": "terms" + }, + "parent": "", + "type": "list" + } + ], + "pinFilters": false, + "updateFiltersOnChange": true, + "useTimeFilter": false + }, + "title": "AWS Account Filter [Metricbeat AWS]", + "type": "input_control_vis" + } + }, + "id": "deab0260-2981-11e9-86eb-a3a07a77f530", + "migrationVersion": { + "visualization": "7.3.0" + }, + "references": [ + { + "id": "metricbeat-*", + "name": "control_0_index_pattern", + "type": "index-pattern" + } + ], + "type": "visualization", + "updated_at": "2020-01-09T22:47:51.746Z", + "version": "WzIzMTUsM10=" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Lambda Top Errors [Metricbeat AWS]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": 0, + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "fbf0eac0-28d0-11ea-8789-f72e3366fb25" + } + ], + "bar_color_rules": [ + { + "id": "f679afa0-28d0-11ea-8789-f72e3366fb25" + } + ], + "default_index_pattern": "filebeat-*", + "default_timefield": "@timestamp", + "filter": { + "language": "kuery", + "query": "" + }, + "gauge_color_rules": [ + { + "id": "3eabbde0-28d1-11ea-8789-f72e3366fb25" + } + ], + "gauge_inner_width": 10, + "gauge_style": "half", + "gauge_width": 10, + "id": "ca2e4c60-28cd-11ea-822d-3ba2c0089081", + "index_pattern": "metricbeat-*", + "interval": "5m", + "isModelInvalid": false, + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#3185FC", + "fill": 0, + "filter": { + "language": "kuery", + "query": "" + }, + "formatter": "number", + "id": "ca2e4c61-28cd-11ea-822d-3ba2c0089081", + "label": "avg(aws.metrics.Duration.avg)", + "line_width": 2, + "metrics": [ + { + "field": "aws.lambda.metrics.Errors.avg", + "id": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "max" + } + ], + "point_size": "4", + "separate_axis": 0, + "split_color_mode": "rainbow", + "split_mode": "terms", + "stacked": "none", + "terms_field": "aws.dimensions.FunctionName", + "terms_order_by": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "timeseries", + "value_template": "{{value}}" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "@timestamp", + "type": "timeseries" + }, + "title": "Lambda Top Errors [Metricbeat AWS]", + "type": "metrics" + } + }, + "id": "4bf0a740-28d1-11ea-ba6c-49a884eb104f", + "migrationVersion": { + "visualization": "7.3.0" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-09T22:58:19.013Z", + "version": "WzI2OTAsM10=" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "AWS Region Filter [Metricbeat AWS]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "controls": [ + { + "fieldName": "cloud.region", + "id": "1549397251041", + "indexPatternRefName": "control_0_index_pattern", + "label": "region name", + "options": { + 
"dynamicOptions": true, + "multiselect": true, + "order": "desc", + "size": 5, + "type": "terms" + }, + "parent": "", + "type": "list" + } + ], + "pinFilters": false, + "updateFiltersOnChange": true, + "useTimeFilter": false + }, + "title": "AWS Region Filter", + "type": "input_control_vis" + } + }, + "id": "b5308940-7347-11e9-816b-07687310a99a", + "migrationVersion": { + "visualization": "7.3.0" + }, + "references": [ + { + "id": "metricbeat-*", + "name": "control_0_index_pattern", + "type": "index-pattern" + } + ], + "type": "visualization", + "updated_at": "2020-01-09T22:47:51.746Z", + "version": "WzIzMTIsM10=" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Lambda Duration in Milliseconds [Metricbeat AWS]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": 0, + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "filebeat-*", + "default_timefield": "@timestamp", + "filter": { + "language": "kuery", + "query": "" + }, + "id": "ca2e4c60-28cd-11ea-822d-3ba2c0089081", + "index_pattern": "metricbeat-*", + "interval": "5m", + "isModelInvalid": false, + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#3185FC", + "fill": 0, + "filter": { + "language": "kuery", + "query": "" + }, + "formatter": "number", + "id": "ca2e4c61-28cd-11ea-822d-3ba2c0089081", + "label": "avg(aws.metrics.Duration.avg)", + "line_width": 2, + "metrics": [ + { + "field": "aws.lambda.metrics.Duration.avg", + "id": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "avg" + } + ], + "point_size": "4", + "separate_axis": 0, + "split_color_mode": "rainbow", + "split_mode": "terms", + "stacked": "none", + "terms_field": "aws.dimensions.FunctionName", + "terms_order_by": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "timeseries", + "value_template": "{{value}}" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "@timestamp", + "type": "timeseries" + }, + "title": "Lambda Duration in Milliseconds [Metricbeat AWS]", + "type": "metrics" + } + }, + "id": "39dfc8d0-28cf-11ea-ba6c-49a884eb104f", + "migrationVersion": { + "visualization": "7.3.0" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-09T23:09:15.122Z", + "version": "WzI2OTgsM10=" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Lambda Top Invoked Functions [Metricbeat AWS]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": 0, + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "fbf0eac0-28d0-11ea-8789-f72e3366fb25" + } + ], + "bar_color_rules": [ + { + "id": "f679afa0-28d0-11ea-8789-f72e3366fb25" + } + ], + "default_index_pattern": "filebeat-*", + "default_timefield": "@timestamp", + "filter": { + "language": "kuery", + "query": "" + }, + "id": "ca2e4c60-28cd-11ea-822d-3ba2c0089081", + "index_pattern": "metricbeat-*", + "interval": "5m", + "isModelInvalid": false, + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#3185FC", + "fill": 0, + "filter": { + "language": "kuery", + "query": "" + }, + "formatter": "number", + "id": "ca2e4c61-28cd-11ea-822d-3ba2c0089081", + "label": 
"avg(aws.metrics.Duration.avg)", + "line_width": 2, + "metrics": [ + { + "field": "aws.lambda.metrics.Invocations.avg", + "id": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "max" + } + ], + "point_size": "4", + "separate_axis": 0, + "split_color_mode": "rainbow", + "split_mode": "terms", + "stacked": "none", + "terms_field": "aws.dimensions.FunctionName", + "terms_order_by": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "timeseries", + "value_template": "{{value}}" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "@timestamp", + "type": "top_n" + }, + "title": "Lambda Top Invoked Functions [Metricbeat AWS]", + "type": "metrics" + } + }, + "id": "1f3f00c0-28d1-11ea-ba6c-49a884eb104f", + "migrationVersion": { + "visualization": "7.3.0" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-09T22:59:29.714Z", + "version": "WzI2OTEsM10=" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Lambda Top Throttles [Metricbeat AWS]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": 0, + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "fbf0eac0-28d0-11ea-8789-f72e3366fb25" + } + ], + "bar_color_rules": [ + { + "id": "f679afa0-28d0-11ea-8789-f72e3366fb25" + } + ], + "default_index_pattern": "filebeat-*", + "default_timefield": "@timestamp", + "filter": { + "language": "kuery", + "query": "" + }, + "gauge_color_rules": [ + { + "id": "3eabbde0-28d1-11ea-8789-f72e3366fb25" + } + ], + "gauge_inner_width": 10, + "gauge_style": "half", + "gauge_width": 10, + "id": "ca2e4c60-28cd-11ea-822d-3ba2c0089081", + "index_pattern": "metricbeat-*", + "interval": "5m", + "isModelInvalid": false, + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#3185FC", + "fill": 0, + "filter": { + "language": "kuery", + "query": "" + }, + "formatter": "number", + "id": "ca2e4c61-28cd-11ea-822d-3ba2c0089081", + "label": "avg(aws.metrics.Duration.avg)", + "line_width": 2, + "metrics": [ + { + "field": "aws.lambda.metrics.Duration.avg", + "id": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "max" + } + ], + "point_size": "4", + "separate_axis": 0, + "split_color_mode": "rainbow", + "split_mode": "terms", + "stacked": "none", + "terms_field": "aws.dimensions.FunctionName", + "terms_order_by": "ca2e4c62-28cd-11ea-822d-3ba2c0089081", + "type": "timeseries", + "value_template": "{{value}}" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "@timestamp", + "type": "top_n" + }, + "title": "Lambda Top Throttles [Metricbeat AWS]", + "type": "metrics" + } + }, + "id": "915bcd50-28d1-11ea-ba6c-49a884eb104f", + "migrationVersion": { + "visualization": "7.3.0" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-09T22:59:58.449Z", + "version": "WzI2OTIsM10=" + } + ], + "version": "7.3.0" +} diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-rds-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-rds-overview.json index 5d8633eb2e1f..367104574b64 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-rds-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-rds-overview.json @@ -32,7 +32,7 @@ "panelIndex": "1", "panelRefName": "panel_0", "title": "Database 
Connections", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -48,7 +48,7 @@ "panelIndex": "3", "panelRefName": "panel_1", "title": "Insert Latency in Milliseconds", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -64,7 +64,7 @@ "panelIndex": "4", "panelRefName": "panel_2", "title": "Select Latency in Milliseconds", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -80,7 +80,7 @@ "panelIndex": "5", "panelRefName": "panel_3", "title": "Transaction Blocked", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -93,7 +93,7 @@ }, "panelIndex": "6", "panelRefName": "panel_4", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -109,7 +109,7 @@ "panelIndex": "7", "panelRefName": "panel_5", "title": "Insert Throughput in Count/Second", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -125,7 +125,7 @@ "panelIndex": "8", "panelRefName": "panel_6", "title": "Select Throughput in Count/Second", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -141,7 +141,7 @@ "panelIndex": "132653bc-2669-4e8c-b536-06c680e9acf0", "panelRefName": "panel_7", "title": "Disk Queue Depth", - "version": "7.4.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -843,5 +843,5 @@ "version": "WzExNTk1LDhd" } ], - "version": "7.4.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-s3-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-s3-overview.json index d69c483c42a0..5701193ce165 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-s3-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-s3-overview.json @@ -29,7 +29,7 @@ }, "panelIndex": "1", "panelRefName": "panel_0", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -42,7 +42,7 @@ }, "panelIndex": "2", "panelRefName": "panel_1", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -55,7 +55,7 @@ }, "panelIndex": "3", "panelRefName": "panel_2", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -68,7 +68,7 @@ }, "panelIndex": "4", "panelRefName": "panel_3", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -81,7 +81,7 @@ }, "panelIndex": "5", "panelRefName": "panel_4", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -94,7 +94,7 @@ }, "panelIndex": "6", "panelRefName": "panel_5", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -107,7 +107,7 @@ }, "panelIndex": "7", "panelRefName": "panel_6", - "version": "7.4.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -713,5 +713,5 @@ "version": "Wzc0NDAsN10=" } ], - "version": "7.4.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-sns-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-sns-overview.json index ea5cf1fe0be7..e3d9af42885d 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-sns-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-sns-overview.json @@ -55,7 +55,7 @@ }, "panelIndex": "3b9b0cee-b175-4268-8c5b-4ce869a09caf", "panelRefName": "panel_0", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -71,7 +71,7 @@ "panelIndex": "5f0d72c5-0f28-449f-9c93-3b4074f068f7", "panelRefName": "panel_1", 
"title": "SNS Messages and Notifications", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": {}, @@ -84,7 +84,7 @@ }, "panelIndex": "5a9d5f2f-b075-4892-8188-c6e808a1163d", "panelRefName": "panel_2", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -100,7 +100,7 @@ "panelIndex": "c6d5a54d-61a4-470b-8769-c5b6d6ab6c0f", "panelRefName": "panel_3", "title": "SNS Publish Size", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -116,7 +116,7 @@ "panelIndex": "0684c25d-34e8-425e-9069-dd8364e6325b", "panelRefName": "panel_4", "title": "SNS Notifications Filtered Out", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -132,7 +132,7 @@ "panelIndex": "72e987da-9a49-4dd4-99c4-4acbc49a0e0b", "panelRefName": "panel_5", "title": "SNS Notifications Filtered Out Invalid Attributes", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -148,7 +148,7 @@ "panelIndex": "923bd4cd-d8fe-47b5-afcf-577bf2c5987c", "panelRefName": "panel_6", "title": "SNS Notifications Filtered Out No Message Attributes", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -164,7 +164,7 @@ "panelIndex": "f176153f-4588-42f9-a7bb-3015909d5610", "panelRefName": "panel_7", "title": "SNS Notifications Failed to Redrive to DLQ", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -180,7 +180,7 @@ "panelIndex": "f3c5915b-6848-4950-afca-53653d13d6af", "panelRefName": "panel_8", "title": "SNS SMS Success Rate", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -196,7 +196,7 @@ "panelIndex": "3b3cc747-b57c-44e0-a18c-77155072bee4", "panelRefName": "panel_9", "title": "SNS Notifications Redriven To DLQ", - "version": "7.4.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -212,7 +212,7 @@ "panelIndex": "ee130150-c1de-465b-8a8e-013f466528bf", "panelRefName": "panel_10", "title": "SNS SMS Month To Date Spent USD", - "version": "7.4.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -1122,5 +1122,5 @@ "version": "WzU1MywxXQ==" } ], - "version": "7.4.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-usage-overview.json b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-usage-overview.json index da51dceb5cfc..2f96fae244be 100644 --- a/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-usage-overview.json +++ b/x-pack/metricbeat/module/aws/_meta/kibana/7/dashboard/Metricbeat-aws-usage-overview.json @@ -32,7 +32,7 @@ "panelIndex": "2ea7bd59-d748-4e4a-889d-f7e2ca1cfe36", "panelRefName": "panel_0", "title": "Region Filter", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -48,7 +48,7 @@ "panelIndex": "00c2b1f6-3367-4b6f-ac01-7e48b76c262a", "panelRefName": "panel_1", "title": "Usage Resource Count", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -64,7 +64,7 @@ "panelIndex": "fecfe5d4-ef1c-4f38-954a-a2506d72bc5b", "panelRefName": "panel_2", "title": "Usage API Call Count", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -80,7 +80,7 @@ "panelIndex": "69ce7461-36ad-4e7c-b541-c6a1601bf089", "panelRefName": "panel_3", "title": "AWS Account Filter", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -96,7 +96,7 @@ "panelIndex": "62e86407-6ae3-47d3-9136-dd61bdf3267a", "panelRefName": "panel_4", "title": "AWS Service Filter", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -112,7 +112,7 
@@ "panelIndex": "196a044c-5c20-4417-8aa0-f60fc502e46c", "panelRefName": "panel_5", "title": "Usage Resource Count Per Service", - "version": "7.5.0" + "version": "7.3.0" }, { "embeddableConfig": { @@ -128,7 +128,7 @@ "panelIndex": "022941b7-01a1-4570-86e9-d03451d4e102", "panelRefName": "panel_6", "title": "Usage API Call Count Per Service", - "version": "7.5.0" + "version": "7.3.0" } ], "timeRestore": false, @@ -819,5 +819,5 @@ "version": "WzU4OSw0XQ==" } ], - "version": "7.5.0" + "version": "7.3.0" } diff --git a/x-pack/metricbeat/module/aws/fields.go b/x-pack/metricbeat/module/aws/fields.go index 3ee30ebaa255..9ac2b8fba44a 100644 --- a/x-pack/metricbeat/module/aws/fields.go +++ b/x-pack/metricbeat/module/aws/fields.go @@ -19,5 +19,5 @@ func init() { // AssetAws returns asset data. // This is the base64 encoded gzipped contents of module/aws. func AssetAws() string { - return "eJzsXV9v27iyf++nGOxTu0h87tndcx/ycIE0Du4JkLZpnEUfVZoc2zyhSIV/4rjYD3/BP5JlW44tW3K6wO3DbmHZ5O83MxwOZ4bqOTzi4gLI3LwDsNwKvIBfyNz88g6AoaGaF5YreQH/8w4A4DuZm++QK+YEAlVCILUGLr+NIFeSW6W5nEKOVnNqYKJVHp5dCeXYnFg6G7wD0CiQGLyAKXkHMOEomLkIo5+DJDleAPXfHxBKlZN24D8LjwHsosALj3iuNEufNaD0fx5mGMeBNE4YG5QGIjgx4AwysAo4Q2n5ZAGMTyaoUVrwH1iOBrgEArkTlp9blCQ8euZayRylHaxAjgJcYpxq5YrXENZ51weyZGoGv1Yfl+Op8X+Q2trH8YOsSSJrj7OcFAWX0/TdX379pfa9LdILEiRTPzA8E+EQCsJ1UimZG9BolNMUzWCDgfl9MHb0EVc0t017OzB8DjqbAIHR75BG3ZiQ8Ryl4Ur+JIL7FOy/DmvdVAa/DtIiGfxaYW7Au4KVKTcWuPlkK8wtED+l5WlnxIJG67REFjW7XKhweXcDTw71YlPeYy4El9MNUddtfoeIvqcxvgNV0hIuPRwENJbnxCIDOiN6igYmSsNCOR38SLmSuVxzKeWfyrWM0ZLa59sWG61GOYrMcpg1Pnld1HPUCIZqUpTirnzjtyDy+YzT2XKABo9qvHsa132V52EKsrIQ113sNinUJVGNs/J0+5rdIRJIHrgaFkyBlE84MpjPUEbTqskfSMEbVvZCklyx8VHaKQc5kW78D4dhyuHHY2wTx+Yo2jg2J2R8/XHU3gArqvS346jS305J9eq349YaLdzAKkvEoFjZmZbkDSUCWTYRiqx/YY9FV6CmKC2Zxq1TCEWDT72++g2oygtnEZzkNomHaATqtHcnYuF9qzMISgY5cmkskRQHW4lQjYzbzBkybfYdQq1sFXtykC4fo/b4r+7+hDiJ8U4k6qGOLewR/lvOcsF/ED/sTrxjIvxve0GMJOyodeBR0HKJeUaM3860QwaG+0+4hTkxIIiTdIbMR6rGEm2RbSdjnC6EM9kJSKWpVhnNyDPCGFEuNUMkOCl4zr3FVXSDz/c/u7r78yqM8DFiTdElN/ADtdqXqclifLC+I3VENXBpJOzXilTWR8MMmJpLT3lT32dAJEtuxc6cP0lQp71sCGPcoyAihTjNlCXaudKPAy4HBfFRr+mFaRobNFLkz97opPcX5fTApUU98dHF+qLbF3ZWoM4M0l7hF6jBIFWSRUetnO2KiXL2JBroEfffXQVcDsYLi3vLf6J0TuwFNP2oFbcwQB9rIwzcq1oi9JpSumbh7esttdLHenkDtXRFg3HzyNVAI2EnU8vHtDxICqo9/mrDN1ZphGclXI4GyDPhgowFglWHsOlYKTXkNV30Q2KuucUT68TPaVF6nH3y6UUrJfaaYvqgEWxLFV1t6lcqLwT6kDdYlSpQh3OIOdyqYvZ5mTYpUHPFvBexPN+PXMcK2k6yi1V0BN9ok31oM4xcZ3qgLXZBrjdtbpA8fu0dwtdYYp0Z0BnSx2xCuOjseHePhdLW+FOonaFeRepP4gUxBhmMlZ2tPoyYIGAKZzr/1CyMxXz1GY/5EkGMhZxLZ/cnmcXxTsy1DyLlPG9ApVlj+5KptgyqtP+Pk815OR+WTVEfnc5SOpU2du9X1VOekykOePOaODhBfzMMi9LD8ONXZdGYhmqDb5k1HXgddFhIuJGMUxKCg2QJDG2wuHqqlhtA6X3RlnxZBbTQ/JlYHDBpsrUKZQcCTaPD8PMoVZyjeDci+z1R8qLZEtc/bgHt5u75DyCMaTQGiDGK8pAfnvPk/lpjdWPBaV8CDYNvyHNPq0zQOpRiKbiE49o7F07h5q568t4L+AOMlYsb6CEiDUtoQBVrlubBjiiMuy7DMyAGCPzzv8/H3IKThk9lyN6GSfZC2r3eG5HC+wIl88v9L9BOyvg3M3PWcjk9DxnZv8CizrkMNv2Xj1hCebz8K7IPOxjZmY9vY7zlXXVfW0GaJ4Rb5bawWQNFcVz5E8UpK5/Xt81Fz73qgJodV/LU7JQlz/vhASVP6KoOWCYJUrHvAL+6UiVsWz/7/3rfqet9jFgyJgYzqqREGk5qvdApJ4LaRKksvAXZuIr8B0TL7raBy5z8UBLuU69Z6AZ7f3n/+UMwASR05l3GblBUENMsq4NgXdU9TD0mKYvr/qCYY670AigpCOV2AQFD+cXhx11Zqhr61KHINzabLigQr1Z9blxRCH9Qr5S/nHUADzNuah/4UNuzcJI/OQw9gsHeq2/4YVtRjIe27uiNUuIh4kzNDfWIgpuK6fbkS/bk0GHGsLCzRmwH9mcsl5py1ksgBDQ3Xwy89wHBP2I+RuOTQ2PNB5gT7qObkIuh1EeYnpVH2Iy9TCs8icygfkadkSlKm/1HjfvxGHFCGH29hVGYEC79hOAnBObCDrrXOXyiEf0RLour56QVJpKH3kI1qWW1NJFM5aXUE6ityDNjlfaH9LeGnXBA6LxrxpuTF567PPPn/cxqIg0Jnj7jrEsbSdNAbQa4GZbNIyb2jngMA7gMHihkWO+UsVONo6+3zeCVYGhsprEQnIb9PzNC2UyQ6SAfdwhfkOnUG6/hPyonn2atnoUwU5nQleoPHsH
Jf7u8DQ6mqru24ue9QMbVzqTvgf6HPGMwj9qWz81jTOrf/ONLcyZ4G9IgjCD5Fqnp0uaZixMdY/aW5wgE7j36+6Sb2ubj9eTtbMbL7G2MJer7U103nxajr7dn8IloToYfYyPPUl8r02yJPMycFDE+fiNH4AHEtR/TeamXb4Vx2NLjqcZv51LZmv/w0dXSmTezrPsMoaYmm6LERm0eswCDYdao+KNAzZX4iVutrLC1nn5pxR295dp6cqj5/uZzELo0B+ALUmeR7QTFkDCh6GO/sKpZygx+FZbuwhcrU2Fbe6vVlzbf0l7j7QWnlV7xS2eeWpjslYBCKErEG4cVJZ9V92AxL5QmegHWf2aCn/TLcRcvoaZchoKR0z0bdwpLw4xArIdsdy87O9PKTWeFswOq8pw3Z2Y68w9xjjZ+oQaQocAtlZbuHFiYo/IUbdAx0S+04fC2Oia1Apb3DIxLg9qaM3AFIxZTP3CUZCukcaBTgD1Ewamo0Sm8yu+UFZPlfLE+X3Uyxk4TH9X5iCDn1sbSJRUcpTWxM5vOVnoS/F6SfHEI9LxHpsIZi3rpuA4QQZZQdSkKLqnK/Qnj/X0c/MNSJppMJpw2RHaeBRUuZBSCuKgzVuWol1to+WMvujLDNhxVH4d9y7v4WvqbhA7S6rS1t1RKzXQpFuXsVAWxPKTR/z5y8UesPhbz2mnIksfUQReiz5W4cydGgwK3lCM6czlxjkNcTnSo/aKLcxyCLsT5/YIbr3d4BhXvwiiIRUkXbSOaLs/pCUJYQhtBT3C+OReCRxbbgsdEo01k0ReHkN5hOOGSx7MokVPndfV+OLz9UMUlbZm1CE36YvZq9NKST8sApl9K5ZJuyaGV1+6AQVdOvcTf0qP3pYNVp99SBy39fl8cVreGlhza7Q4/kSGFYljKaPKQ7nujXEQtHagodQWPKZYxl0QvQvqhDP1y4mP6zcxuzHLrVxO4NbrrJYZuywsN2c3ahOAnhAkX2C7HWYO/nqTtHf5Rydnaj83A/3/Leaqr/FBZF67PmzKhPrhXEogsT4vLunh5mtwZFtbZjIWij51eV96ks0JjPW9a3V5OSHYnemvleTbO0iE566MZ4cD2ggSpvGVAiRDRSafD2zLnmr65m6hWosMuxuFH8AMaEPwR4dv9zcP1PSgN99eXw+v7sy6Bo5xyiR13314TOlsppWknk+zjfGeR2XrJrFYu84EjWtr0kqiMES4WZXL73TriNn1/64OtNQGGZ1Ueu8+WwNHvx3UEpjdcGf7jrbL94VRTJcLCbR8WL/Osv32rjjs6oUxNsvheqi7z+qFDst4zE2dowBbWEBGiUnXouWp+RVmWzmfH2l0apmZx6ZOf2s7Kw2lsP+1RWf9+eLhbpuhzwsJNMb+tRldXvdXtDDROiWaibK1fFFs6VyrsU2w+0xzbzRQw/+/1wxpub1yl7XHZxGEH3sL1iPfuz87xvlJc6gTy8Pr2+uG6a9SzbYf1TjD/+/pyuJc977IFZfo0hi+jdWs4COUriYNjcS6RjK5vr68e4EtQemhM9Y6uY6uITDJDiZQn7gxYT92Wm2zCEm9d7S2OY9iX73r8KehXL548AX/B+1xtFbaw0/u5UjN4gG7i60tfw8nUXApF2NtoJqpliSEstv227PnMhzWxKdIUSobjMRWOhSPaWLEtzbKueGu6JYKosxR2AamCN4/9rL3nRK2VNoM/Xl76M7c/Xl5SiTtOV914Vgz30ltccSS9eU1NAHm4VPtf/mz6z1eJ/atPYv96eYHYr35CYmVudsK1sZk3jkHe2iIPT9EWqM9LmwvJ2XA2KO+ShMabyiTRnwaq7of4qr0NEVgV37W3sijDvaKQghtj5Xhfl0cI5MvTzUlFgoIUJiaotogm6Cos5KU40jXbcMMgPAnHpV1rtzoPyuMuHhp5youHo8/NFw/3fLGseTqS7NNJyX498pZluiyQozFkihmZ4ok71YtCq5fwWmtIly29zCIskEqex4MWgwSxzN2G20dbbm/Eb4YzGll0l1de9crlLCuAlpnkNHfI8W32j2skodmG5zkyTiyKLcFAxUUqmz1zw8dbcrDHbjIVnYoBlzARfDrbsptXyE6Cal18VnN8JmLp9va0B29K/SIt7bUVstJT9wutOlWMF0CJEKbcGFIL3ae0xGKZZAdks3mvumudMxb3LvKaDDEv7KLsMOznBt+aeC7vbkrx+bXCeFzhUbpASgJb7r+gXLrbk6eyy6taLWUcH3VbPxl9HSWfuTJudQoyx9Y9wghb9uL1f8/BrwNN6GOcNjThqRzDFtvwb3nsjir+LwAA//+R4FlO" + return 
"eJzsXV9z27ayf8+n2OlT0nF07ml77kMe7oxjee7xjNM6ljt9ZCFgReEYBGj8saxMP/wd/CFFSZQlSqScztw8tBlRAn6/3cVisbtgPsIjLj8BWZh3AJZbgZ/gB7IwP7wDYGio5qXlSn6C/3kHAPAnWZg/oVDMCQSqhEBqDVz+MYFCSW6V5jKHAq3m1MBMqyI8uxLKsQWxdD56B6BRIDH4CXLyDmDGUTDzKYz+ESQp8BNQ//0RoVQ5aUf+s/AYwC5L/OQRL5Rm6bMWlP7PwxzjOJDGCWOD0kAEJwacQQZWAWcoLZ8tgfHZDDVKC/4Dy9EAl0CgcMLyjxYlCY+euVayQGlHa5CjAFcYc61c+RrCJu/mQJbkZvRj/XE1npr+B6ltfBw/yNoksvE4K0hZcpmn7/7w4w+N7+2QXpAgyf3A8EyEQygJ10mlZGFAo1FOUzSjLQbm59HU0Udc09wu7e3B8GvQ2QwITH6GNOrWhIwXKA1X8jsR3Jdg/01Ym6Yy+nGUFsnoxxpzC941rEy5qcDtJzth7oD4JS1POycWNFqnJbKo2dVChcu7G3hyqJfb8p5yIbjMt0TdtPk9IvozjfEnUCUt4dLDQUBjeUEsMqBzonM0MFMalsrp4Eeqlczlhkup/tSuZYqWND7ftdhoPcpJZFbDbPApmqJeoEYwVJOyEnftG/8IIl/MOZ2vBmjxqMa7p2nTV3kepiRrC3HTxe6SQlMS9ThrT3ev2T0igeSB62HBlEj5jCODxRxlNK2G/IGUvGVlLyUpFJuepJ1qkDPpxv9wHKYcfz7FNnFqTqKNU3NGxtefJ90NsKZKfzqNKv3pnFSvfjptrdHSjayyRIzKtZ1pRd5QIpBlM6HI5hcOWHQlaorSkjxunUIoGnzq9dVPQFVROovgJLdJPEQjUKe9OxFL71udQVAyyJFLY4mkONpJhGpk3GbOkLzddwi1tlUcyEG6Yora47+6+x3iJMY7kaiHJrawR/hvOcsF/0b8sHvxTonwvx0EMZKwozaBR0HLFeY5MX470w4ZGO4/4RYWxIAgTtI5Mh+pGku0RbabjHG6FM5kZyCVplpnNCfPCFNEudIMkeCk4AX3FlfTDT7f/+zq7verMMLniDVFl9zAN9TqUKYmi/HB5o7UE9XApZWwXytSWR8NM2BqIT3lbX1fAJEsuRU7d/4kQZ32siGMcY+CiBTitFOWaBdKP464HJXER71mEKZpbNBIkT97o5PeX1TTA5cW9cxHF5uL7lDYWYk6M0gHhV+iBoNUSRYdtXK2LybK2bNoYEDcf3cVcDmaLi0eLP+Z0gWxn6DtR524hQGGWBth4EHVEqE3lNI3C29fb6mVIdbLG6ilLxqMm0euRhoJO5taPqflQVJQ7fHXG76xSiM8K+EKNECeCRdkKhCsOoZNz0ppIG/oYhgSC80tnlknfk6L0uMcks8gWqmwNxQzBI1gW6rsa1O/UkUp0Ie8wapUiTqcQ8zxVhWzz6u0SYmaK+a9iOXFYeR6VtBukn2sohP4RpscQpth5CbTI22xD3KDaXOL5Olr7xi+xhLrzIjOkT5mM8JFb8e7eyyVtsafQu0c9TpSfxIviTHIYKrsfP1hxAQBUzjT+admaSwW6894zJcIYiwUXDp7OMksjndmrkMQqeZ5AyrtGjuUTL1lUKX9f5xsz8v5sCxHfXI6S+lU2ti/X9VPeUFyHPH2NXF0gv5mHBalh+HHr8uiMQ3VBd8qazryOuixkHAjGackBAfJEhjaYHHNVC03gNL7oh35shpoqfkzsThi0mQbFcoeBJpGh/Gvk1RxjuLdiuwPRMnLdkvc/LgDtJu751+AMKbRGCDGKMpDfnjBk/vrjNVNBadDCTQMviXPA60yQetRipXgEo5r71w4hZu7+sl7L+APMFUubqDHiDQsoRFVrF2aRzuiMO6mDC+AGCDwz//+OOUWnDQ8lyF7GyY5CGn/em9FCu9LlMwv979AOynj38zcWctl/jFkZP8Ci7rgMtj0Xz5iCeXx6q/IPuxhZOc+vo3xlnfVQ20FaZ4QblXbwnYNFMVp5U8U56x8Xt+2Fz0PqgMKUkwZOYltHOKMhG/DhKcUejU7rdCr2TkLvffjIwq90Ff1s0qNpBLnEbvJWm20a9Xw/6uc565yMmLJlBjMqJISaTifDkKnmggaE6Vi+A5k0/q8MyJa9rf5XRbkm5JwnzrsQg/c+8v7Xz8EE0BC595l7AdFBTHtsjoK1lXTwzQjsaqlwB+PCyyUXgIlJaHcLiFgqL44/rwvN9dAn/oy+dYW2wcF4tWqPxpXloIjWyl/NesIHubcND7wBwzPwkn+5DB0RgZ7r7/hh+1EMR5V+6M3SemWiDO1dDTjKG5qprtTTtmTQ4cZw9LOW7Ed2ZWyWmrKWS+BEMbd/GbgvQ+D/hGzUBqfHBprPsCCcB/ThQwUpT6u9qw8wnbsVTLlSWQG9TPqjOQobfYfNR3GY8QJYfL1FiZhQrj0E4KfEJgLO+hB2YeZRvQH1yyunrPW1UgROirVrJHL00QyVVRST6B2Is+MVZrk56tx7IKdcEDoN2zHW5AXXrgicwZZZjWRhgRPn3HWp42kaaAxA9yMq5YZEztmPIYRXAYPFPLKd8rYXOPk6207eCUYGptpLAWnYf/PjFA2EyQfFdMe4QuS5954Df9WO/k0a/0shJnKhF5cf9wKTv6Py9vgYOpqcyd+3gtkXO1NdR/pf8gzBvNobPncPMZSxs0/fmvPf+9CGoQRJN8hIV/ZPHNxolPM3vICgcC9R3+fdNPYfLyevJ3NeZWzjrFEc39q6ubLcvL19gK+EM3J+HNsX1rpa22aHZGHWZAyxsdv5Ag8gLj2YxIzdTCuMQ5bejzV+O1cKtvwHz66WjnzdpZNnyFUbrIcJbZq85QFGAyzQcUfBRquxE/caWWFrfX8Syvu6B3X1pNDzQ83n6PQpTkAX5A6i2wvKIaECUUfh4VVz1LVLeqwdB++WI8L29pbrb60+Vb2Gu9sOK30ml+68NTCZK8EFEJRIt44rKj4rLsHi0WpNNFLsP4zE/ykX477eAmVcxnKZE4PbNwpLA0zArEest2/7OxcK5fPS2dHVBUFb8/M9OYf4hxd/EIDIEOBO+pL/TmwMEftKbqgY2JYaOPxbX1M6gSsGBgYlwa1NRfgSkYspi7oKMlOSONA5wB7jIJTKadXeLXfqepEq/liV0Ldvxn7a3xU5yOCglsbC7ZUcJTWxH50Ol/rxPB7SfLFIdDzHpkKZyzqleM6QgRZQtWnKLikqvAnjPf3cfAPK5loMptx2hLZeRZUuJBRCOKizlhVoF5todWPveiqDNt4Un8c9i3v4hvpbxL6ZuvT1sFSqTTTp1iUs7kKYnlIo/995OKPWEMs5o3TkCWPqW8wRJ9rcedejAYF7ihH9OZy4hzHuJzoUIdFF+c4Bl2I84cFN93saw0q3odREIuSLrtGNH2e0xOEsIS2gp7gfAsuBI8sdgWPiUaXyGIoDiG9w3DGJY9nUSJz53X1fjy+/VDHJV2ZdQhNhmL2avTSkU/HAGZYStWS7sihk9
fugUFfTr3C39GjD6WDdaffUQcd/f5QHNa3ho4cuu0O35EhhWJYymjykO57o1xEIx2oKHUljymWKZdEL0P6oQr9CuJj+u3Mbsxy61cTuA26myWGfssLLdnNxoTgJ4QZF9gtx9mAv5mkHRz+ScnZxo/NyP9/x3mqr/xQVRduzpsyoT64VxKIrE6Lq7p4dZrcGxY22UyFoo+9XtLeprNGYzNvWt/ZTkj2J3ob5Xk2zdIhORuiGeHI9oIEqbpbQYkQ0Umnw9sq55q+uZ+oVqLH3s3xZ/ADGhD8EeGP+5uH63tQGu6vL8fX9xd9AkeZc4k99xxfEzpfK6VpJ5Ps43wXkdlmyaxRLvOBI1ra9mqsjBEullVy+90m4i59f5uDbTQBhmd1HnvIlsDJz6d1BKb3ehn+7a2y/eFUUyfCwh0nFq8wbb5zrIk7OqFMzbL4Nq4+8/qhQ7LZMxNnaMEW1hARolZ16LlqfzFbls5np9pdGqZhcemT79rOqsNpbD8dUFn/fni4W6XoC8LC/Ti/rUZXV7/L7gI05kQzUV0oWJY7Oldq7Dm2n2lO7WYKmP/3+mEDtzeuyva4bOOwB2/pBsR793vveF8pLvUCeXx9e/1w3Tfq+a7Dei+Y/319OT7InvfZgjJDGsNvk01rOArlK4mDU3GukEyub6+vHuC3oPTQmOodXc9WEZlkhhIpz9wZsJm6rTbZhCXeNTtYHKewr95w+V3Qr1+3eQb+gg+52mpsYaf3c6Vm8ADdxJe2voaTqYUUirC30UxUywpDWGyHbdmLuQ9rYlOkKZUMx2MqHAtHtKliO5plXfnWdCsEUWcp7AJSB28e+0V3z4laK21Gv7y8DGduv7y8pBJ3nK6+560YHqS3uOJIet+cmgHycJX4v/zZ9J+vEvvXkMT+9fICsV/9jMSq3OyMa2MzbxyjorNFHp+iLVF/rGwuJGfD2aC6SxIab2qTRH8aqLsf4gsGt0RgVXzD4NqiDPeKQgpuirXjfV0eIZCvTjdnFQkKUpqYoNohmqCrsJBX4kiXi8MNg/AkHJf2rd36PChPu3ho5DkvHk5+bb94eOAtS/N0Itmns5L9euIty3RZoEBjSI4ZyfHMneplqdVLeJk3pMuWXmYRFkglP8aDFoMEscrdhttHO25vxG+GMxpZ9pdXXvfK1SxrgFaZ5DR3yPFt949rJKHZhhcFMk4sih3BQM1FKps9c8OnO3Kwp24yNZ2aAZcwEzyf79jNa2RnQbUpPqs5PhOxcnsH2oM3pWGRVvbaCVnlqYeFVp8qpkugRAhTbQyphe5LWmKxTLIHstm+V923zhmLexd5TYZYlHZZdRgOc4NvQzyXdzeV+PxaYTyu8ChdIBWBHfdfUK7c7dlT2dVVrY4yjo/6rZ9Mvk6Sz1wbtz4FmVPrHmGEHXvx5r9i4deBJvQxThua8FSBYYtt+RdM9kcV/xcAAP//TCejCQ==" } diff --git a/x-pack/metricbeat/module/aws/lambda/_meta/data.json b/x-pack/metricbeat/module/aws/lambda/_meta/data.json new file mode 100644 index 000000000000..726a613616b3 --- /dev/null +++ b/x-pack/metricbeat/module/aws/lambda/_meta/data.json @@ -0,0 +1,48 @@ +{ + "@timestamp": "2017-10-12T08:05:34.853Z", + "aws": { + "cloudwatch": { + "namespace": "AWS/Lambda" + }, + "dimensions": { + "FunctionName": "ec2-owner-tagger-serverless", + "Resource": "ec2-owner-tagger-serverless" + }, + "lambda": { + "metrics": { + "Duration": { + "avg": 8218.073333333334 + }, + "Errors": { + "avg": 1 + }, + "Invocations": { + "avg": 1 + }, + "Throttles": { + "avg": 0 + } + } + } + }, + "cloud": { + "account": { + "id": "627959692251", + "name": "elastic-test" + }, + "provider": "aws", + "region": "us-west-2" + }, + "event": { + "dataset": "aws.lambda", + "duration": 115000, + "module": "aws" + }, + "metricset": { + "name": "lambda", + "period": 10000 + }, + "service": { + "type": "aws" + } +} \ No newline at end of file diff --git a/x-pack/metricbeat/module/aws/lambda/_meta/docs.asciidoc b/x-pack/metricbeat/module/aws/lambda/_meta/docs.asciidoc new file mode 100644 index 000000000000..1d1f8e22a7ef --- /dev/null +++ b/x-pack/metricbeat/module/aws/lambda/_meta/docs.asciidoc @@ -0,0 +1,55 @@ +AWS Lambda monitors functions and sends metrics to Amazon CloudWatch. These +metrics include total invocations, errors, duration, throttles, dead-letter +queue errors, and iterator age for stream-based invocations. + +[float] +=== AWS Permissions +Some specific AWS permissions are required for IAM user to collect AWS EBS metrics. +---- +ec2:DescribeRegions +cloudwatch:GetMetricData +cloudwatch:ListMetrics +tag:getResources +sts:GetCallerIdentity +iam:ListAccountAliases +---- + +[float] +=== Dashboard + +The aws lambda metricset comes with a predefined dashboard. 
For example: + +image::./images/metricbeat-aws-lambda-overview.png[] + +[float] +=== Configuration example +[source,yaml] +---- +- module: aws + period: 300s + metricsets: + - lambda + # This module uses the aws cloudwatch metricset, all + # the options for this metricset are also available here. +---- + +[float] +=== Metrics +Please see more details for each metric in +https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html[lambda-cloudwatch-metric]. + +|=== +|Metric Name|Statistic Method +|Invocations | Average +|Errors | Average +|DeadLetterErrors | Average +|Duration | Average +|Throttles | Average +|IteratorAge | Average +|ConcurrentExecutions | Average +|UnreservedConcurrentExecutions | Average +|ProvisionedConcurrentExecutions | Maximum +|ProvisionedConcurrencyInvocations | Sum +|ProvisionedConcurrencySpilloverInvocations | Sum +|ProvisionedConcurrencyUtilization | Maximum +|=== diff --git a/x-pack/metricbeat/module/aws/lambda/_meta/fields.yml b/x-pack/metricbeat/module/aws/lambda/_meta/fields.yml new file mode 100644 index 000000000000..1cebeff318f2 --- /dev/null +++ b/x-pack/metricbeat/module/aws/lambda/_meta/fields.yml @@ -0,0 +1,6 @@ +- name: lambda + type: group + description: > + `lambda` contains the metrics that were scraped from AWS CloudWatch which contains monitoring metrics sent by AWS Lambda. + release: beta + fields: diff --git a/x-pack/metricbeat/module/aws/lambda/lambda_integration_test.go b/x-pack/metricbeat/module/aws/lambda/lambda_integration_test.go new file mode 100644 index 000000000000..247228254b68 --- /dev/null +++ b/x-pack/metricbeat/module/aws/lambda/lambda_integration_test.go @@ -0,0 +1,24 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// +build integration + +package lambda + +import ( + "testing" + + mbtest "github.com/elastic/beats/metricbeat/mb/testing" + "github.com/elastic/beats/x-pack/metricbeat/module/aws/mtest" +) + +func TestData(t *testing.T) { + config, info := mtest.GetConfigForTest("lambda", "300s") + if info != "" { + t.Skip("Skipping TestData: " + info) + } + + metricSet := mbtest.NewFetcher(t, config) + metricSet.WriteEvents(t, "/") +} diff --git a/x-pack/metricbeat/module/aws/lambda/lambda_test.go b/x-pack/metricbeat/module/aws/lambda/lambda_test.go new file mode 100644 index 000000000000..6b0c7baf2a94 --- /dev/null +++ b/x-pack/metricbeat/module/aws/lambda/lambda_test.go @@ -0,0 +1,21 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. 
+ +package lambda + +import ( + "os" + + "github.com/elastic/beats/metricbeat/mb" + + // Register input module and metricset + _ "github.com/elastic/beats/x-pack/metricbeat/module/aws" + _ "github.com/elastic/beats/x-pack/metricbeat/module/aws/cloudwatch" +) + +func init() { + // To be moved to some kind of helper + os.Setenv("BEAT_STRICT_PERMS", "false") + mb.Registry.SetSecondarySource(mb.NewLightModulesSource("../../../module")) +} diff --git a/x-pack/metricbeat/module/aws/lambda/manifest.yml b/x-pack/metricbeat/module/aws/lambda/manifest.yml new file mode 100644 index 000000000000..71fd0b7ef1cd --- /dev/null +++ b/x-pack/metricbeat/module/aws/lambda/manifest.yml @@ -0,0 +1,20 @@ +default: true +input: + module: aws + metricset: cloudwatch + defaults: + metrics: + - namespace: AWS/Lambda + statistic: ["Average"] + name: ["Invocations", "Errors", "DeadLetterErrors", "Duration", + "Throttles", "IteratorAge", "ConcurrentExecutions", + "UnreservedConcurrentExecutions"] + tags.resource_type_filter: lambda + - namespace: AWS/Lambda + statistic: ["Maximum"] + name: ["ProvisionedConcurrentExecutions", "ProvisionedConcurrencyUtilization"] + tags.resource_type_filter: lambda + - namespace: AWS/Lambda + statistic: ["Sum"] + name: ["ProvisionedConcurrencyInvocations", "ProvisionedConcurrencySpilloverInvocations"] + tags.resource_type_filter: lambda diff --git a/x-pack/metricbeat/module/aws/module.yml b/x-pack/metricbeat/module/aws/module.yml index 5e3d60759d2f..d750f2bedd5a 100644 --- a/x-pack/metricbeat/module/aws/module.yml +++ b/x-pack/metricbeat/module/aws/module.yml @@ -5,4 +5,5 @@ metricsets: - usage - billing - sns + - lambda - dynamodb diff --git a/x-pack/metricbeat/module/ibmmq/_meta/Dockerfile b/x-pack/metricbeat/module/ibmmq/_meta/Dockerfile new file mode 100644 index 000000000000..f95d8e8c0d8c --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/Dockerfile @@ -0,0 +1,11 @@ +ARG IBMMQ_VERSION + +FROM ibmcom/mq:${IBMMQ_VERSION} + +ENV IBMMQ_METRICS_REST_PORT=9157 + +ENV LICENSE=accept +ENV MQ_QMGR_NAME=QM1 +ENV MQ_ENABLE_METRICS=true + +HEALTHCHECK --interval=1s --retries=90 CMD curl -s --fail http://127.0.0.1:${IBMMQ_METRICS_REST_PORT}/metrics | grep -q "ibmmq_qmgr_commit_total" diff --git a/x-pack/metricbeat/module/ibmmq/_meta/config.yml b/x-pack/metricbeat/module/ibmmq/_meta/config.yml new file mode 100644 index 000000000000..2f8973d97302 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/config.yml @@ -0,0 +1,28 @@ +- module: ibmmq + metricsets: ['qmgr'] + period: 10s + hosts: ['localhost:9157'] + + # This module uses the Prometheus collector metricset, all + # the options for this metricset are also available here. + metrics_path: /metrics + + # The custom processor is responsible for filtering Prometheus metrics + # not stricly related to the IBM MQ domain, e.g. system load, process, + # metrics HTTP server. + processors: + - script: + lang: javascript + source: > + function process(event) { + var metrics = event.Get("prometheus.metrics"); + Object.keys(metrics).forEach(function(key) { + if (!(key.match(/^ibmmq_.*$/))) { + event.Delete("prometheus.metrics." 
+ key); + } + }); + metrics = event.Get("prometheus.metrics"); + if (Object.keys(metrics).length == 0) { + event.Cancel(); + } + } diff --git a/x-pack/metricbeat/module/ibmmq/_meta/docs.asciidoc b/x-pack/metricbeat/module/ibmmq/_meta/docs.asciidoc new file mode 100644 index 000000000000..a031924fbaf8 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/docs.asciidoc @@ -0,0 +1,28 @@ +This module periodically fetches metrics from a containerized distribution of IBM MQ. + +[float] +=== Compatibility + +The ibmmq `qmgr` metricset is compatible with a containerized distribution of IBM MQ (since version 9.1.0). +The Docker image starts the `runmqserver` process, which spawns the HTTP server exposing metrics in Prometheus +format ([source code](https://github.com/ibm-messaging/mq-container/blob/9.1.0/internal/metrics/metrics.go)). + +The Docker container lifecycle, including metrics collection, has been described in the [Internals](https://github.com/ibm-messaging/mq-container/blob/9.1.0/docs/internals.md) +document. + +The image provides an option to easily enable metrics exporter using an environment +variable: + +`MQ_ENABLE_METRICS` - Set this to `true` to generate Prometheus metrics for the Queue Manager. + +[float] +=== Dashboard + +The ibmmq module includes predefined dashboards with overview information +of the monitored Queue Manager, including subscriptions, calls and messages. + +image::./images/metricbeat-ibmmq-calls.png[] + +image::./images/metricbeat-ibmmq-messages.png[] + +image::./images/metricbeat-ibmmq-subscriptions.png[] diff --git a/x-pack/metricbeat/module/ibmmq/_meta/fields.yml b/x-pack/metricbeat/module/ibmmq/_meta/fields.yml new file mode 100644 index 000000000000..c19c63bcdb04 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/fields.yml @@ -0,0 +1,10 @@ +- key: ibmmq + title: 'IBM MQ' + release: beta + description: > + IBM MQ module + settings: ["http"] + fields: + - name: ibmmq + type: group + fields: diff --git a/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-calls-overview.json b/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-calls-overview.json new file mode 100644 index 000000000000..68dc16e30626 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-calls-overview.json @@ -0,0 +1,1219 @@ +{ + "objects": [ + { + "attributes": { + "description": "The dashboard presents metric data describing IBM MQ calls, collected by a queue manager.", + "hits": 0, + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "optionsJSON": { + "hidePanelTitles": false, + "useMargins": true + }, + "panelsJSON": [ + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "68140594-23bf-4e1e-a062-19b21e557e1a", + "w": 16, + "x": 0, + "y": 0 + }, + "panelIndex": "68140594-23bf-4e1e-a062-19b21e557e1a", + "panelRefName": "panel_0", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "2bb94f86-2fa8-4e3e-b91d-9838a29b9674", + "w": 16, + "x": 16, + "y": 0 + }, + "panelIndex": "2bb94f86-2fa8-4e3e-b91d-9838a29b9674", + "panelRefName": "panel_1", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "0b68733f-6f86-4686-9580-1354f5d6bc4d", + "w": 16, + "x": 32, + "y": 0 + }, + "panelIndex": "0b68733f-6f86-4686-9580-1354f5d6bc4d", + "panelRefName": "panel_2", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": 
"0423a3f2-8f1f-4402-842b-9423008ac5c1", + "w": 16, + "x": 0, + "y": 12 + }, + "panelIndex": "0423a3f2-8f1f-4402-842b-9423008ac5c1", + "panelRefName": "panel_3", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "6936c053-8168-4eb9-9964-fc0e892b9130", + "w": 16, + "x": 16, + "y": 12 + }, + "panelIndex": "6936c053-8168-4eb9-9964-fc0e892b9130", + "panelRefName": "panel_4", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "084602cd-6b17-4f8f-97a8-c33ac2bafb14", + "w": 16, + "x": 32, + "y": 12 + }, + "panelIndex": "084602cd-6b17-4f8f-97a8-c33ac2bafb14", + "panelRefName": "panel_5", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "50a75e9d-e345-45c7-93fb-54e29d0863f2", + "w": 16, + "x": 0, + "y": 24 + }, + "panelIndex": "50a75e9d-e345-45c7-93fb-54e29d0863f2", + "panelRefName": "panel_6", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "9cae147d-66d9-4bff-b916-f3b82adc07be", + "w": 16, + "x": 16, + "y": 24 + }, + "panelIndex": "9cae147d-66d9-4bff-b916-f3b82adc07be", + "panelRefName": "panel_7", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "fc84cd97-80a9-406d-ab2b-c1d9ce5dca72", + "w": 16, + "x": 32, + "y": 24 + }, + "panelIndex": "fc84cd97-80a9-406d-ab2b-c1d9ce5dca72", + "panelRefName": "panel_8", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "d8c19a6d-a25b-4950-9ef4-6a15a894f725", + "w": 16, + "x": 0, + "y": 36 + }, + "panelIndex": "d8c19a6d-a25b-4950-9ef4-6a15a894f725", + "panelRefName": "panel_9", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "d76eb9f9-2198-475b-a058-7204244d5597", + "w": 16, + "x": 16, + "y": 36 + }, + "panelIndex": "d76eb9f9-2198-475b-a058-7204244d5597", + "panelRefName": "panel_10", + "version": "7.4.0" + } + ], + "timeRestore": false, + "title": "[Metricbeat IBM MQ] Calls Overview", + "version": 1 + }, + "id": "fc5512c0-36d1-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "dashboard": "7.3.0" + }, + "references": [ + { + "id": "07262080-36d3-11ea-9f7a-097fe7ab3ddd", + "name": "panel_0", + "type": "visualization" + }, + { + "id": "1dba2700-36de-11ea-9f7a-097fe7ab3ddd", + "name": "panel_1", + "type": "visualization" + }, + { + "id": "2fcbdab0-36de-11ea-9f7a-097fe7ab3ddd", + "name": "panel_2", + "type": "visualization" + }, + { + "id": "d781db00-36df-11ea-9f7a-097fe7ab3ddd", + "name": "panel_3", + "type": "visualization" + }, + { + "id": "fd0e16a0-36de-11ea-9f7a-097fe7ab3ddd", + "name": "panel_4", + "type": "visualization" + }, + { + "id": "aa90ec20-36e0-11ea-9f7a-097fe7ab3ddd", + "name": "panel_5", + "type": "visualization" + }, + { + "id": "fd0e16a0-36de-11ea-9f7a-097fe7ab3ddd", + "name": "panel_6", + "type": "visualization" + }, + { + "id": "56b63f60-36e0-11ea-9f7a-097fe7ab3ddd", + "name": "panel_7", + "type": "visualization" + }, + { + "id": "74874de0-36e0-11ea-9f7a-097fe7ab3ddd", + "name": "panel_8", + "type": "visualization" + }, + { + "id": "92bf3480-36e0-11ea-9f7a-097fe7ab3ddd", + "name": "panel_9", + "type": "visualization" + }, + { + "id": "c4be1ff0-36e0-11ea-9f7a-097fe7ab3ddd", + "name": "panel_10", + "type": "visualization" + } + ], + "type": "dashboard", + "updated_at": "2020-01-14T18:46:51.094Z", + "version": "WzYyLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": 
"kuery", + "query": "" + } + } + }, + "title": "MQCB calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqcb_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_mqcb_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQCB calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "07262080-36d3-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T13:37:37.416Z", + "version": "WzI1LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQCLOSE calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqclose_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": 
"timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_mqclose_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQCLOSE calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "1dba2700-36de-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T14:56:59.760Z", + "version": "WzI2LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQCONN/MQCONNX calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqconn_mqconnx_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqconn_mqconnx_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQCONN/MQCONNX calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "2fcbdab0-36de-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T14:57:30.075Z", + "version": "WzI3LDJd" + }, + { + "attributes": { + "description": "", 
+ "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQDISC calls succeeded [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqdisc_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQDISC calls succeeded [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "d781db00-36df-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T15:09:20.944Z", + "version": "WzI5LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQCTL calls succeeded [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqctl_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQCTL calls succeeded [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "fd0e16a0-36de-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T15:03:14.442Z", + "version": "WzI4LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + 
"title": "MQSTAT calls succeeded [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqstat_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQSTAT calls succeeded [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "aa90ec20-36e0-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T18:46:20.735Z", + "version": "WzYxLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQOPEN calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqopen_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_mqopen_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 
1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQOPEN calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "56b63f60-36e0-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T15:13:24.521Z", + "version": "WzMxLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQINQ calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqinq_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_mqinq_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQINQ calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "74874de0-36e0-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T15:14:01.330Z", + "version": "WzMzLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQSET calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": 
"61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqset_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_mqset_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQSET calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "92bf3480-36e0-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T15:14:35.080Z", + "version": "WzM0LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "MQSUBRQ calls succeeded/failed [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_mqsubrq_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "92c00030-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + }, + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "a8f2add0-36d2-11ea-8b7d-bfeb3bd2cf33", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_mqsubrq_total", + "id": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "max" + }, + { + "field": "a8f2add1-36d2-11ea-8b7d-bfeb3bd2cf33", + "id": "bb30c8b0-36d2-11ea-8b7d-bfeb3bd2cf33", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": 
"prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "MQSUBRQ calls succeeded/failed [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "c4be1ff0-36e0-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T15:15:58.959Z", + "version": "WzM2LDJd" + } + ], + "version": "7.4.0" +} diff --git a/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-messages-overview.json b/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-messages-overview.json new file mode 100644 index 000000000000..a57ad13dc88c --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-messages-overview.json @@ -0,0 +1,1250 @@ +{ + "objects": [ + { + "attributes": { + "description": "The dashboard presents metric data describing IBM MQ persistent and non-persistent messages. Metric data are collected by a queue manager.", + "hits": 0, + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "optionsJSON": { + "hidePanelTitles": false, + "useMargins": true + }, + "panelsJSON": [ + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "31635dc4-663e-4ad1-adae-eb96687c7810", + "w": 16, + "x": 0, + "y": 0 + }, + "panelIndex": "31635dc4-663e-4ad1-adae-eb96687c7810", + "panelRefName": "panel_0", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "5452998b-5149-4ac6-93df-b3fccab74f58", + "w": 16, + "x": 16, + "y": 0 + }, + "panelIndex": "5452998b-5149-4ac6-93df-b3fccab74f58", + "panelRefName": "panel_1", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "0e58849b-8742-4ed4-aae2-33ca19553ac2", + "w": 16, + "x": 32, + "y": 0 + }, + "panelIndex": "0e58849b-8742-4ed4-aae2-33ca19553ac2", + "panelRefName": "panel_2", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "45cd1f23-ef32-4785-b8c0-dcd4cf4c0c1f", + "w": 16, + "x": 0, + "y": 12 + }, + "panelIndex": "45cd1f23-ef32-4785-b8c0-dcd4cf4c0c1f", + "panelRefName": "panel_3", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "2fbdb686-f624-4b2d-a26d-4e7f70e8d902", + "w": 16, + "x": 16, + "y": 12 + }, + "panelIndex": "2fbdb686-f624-4b2d-a26d-4e7f70e8d902", + "panelRefName": "panel_4", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "355b12f6-56cb-4b8c-8498-b379d3e7d8b0", + "w": 16, + "x": 32, + "y": 12 + }, + "panelIndex": "355b12f6-56cb-4b8c-8498-b379d3e7d8b0", + "panelRefName": "panel_5", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "c1eed75c-610c-4741-b384-de866f30b79b", + "w": 16, + "x": 0, + "y": 24 + }, + "panelIndex": "c1eed75c-610c-4741-b384-de866f30b79b", + "panelRefName": "panel_6", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "78bd7680-0f3f-4d3f-994b-eeb58ef0a340", + "w": 16, + "x": 16, + "y": 24 + }, + "panelIndex": "78bd7680-0f3f-4d3f-994b-eeb58ef0a340", + "panelRefName": "panel_7", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "6edef0c3-4c5f-4d0a-8e58-076cb5249ca2", + "w": 16, + "x": 32, + "y": 24 + }, + "panelIndex": "6edef0c3-4c5f-4d0a-8e58-076cb5249ca2", + "panelRefName": 
"panel_8", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "0ecb7983-d4f9-453d-ade4-d02dfa6b6c72", + "w": 16, + "x": 0, + "y": 36 + }, + "panelIndex": "0ecb7983-d4f9-453d-ade4-d02dfa6b6c72", + "panelRefName": "panel_9", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "1c8071e7-c89a-45b1-aae6-31471939b73c", + "w": 16, + "x": 32, + "y": 36 + }, + "panelIndex": "1c8071e7-c89a-45b1-aae6-31471939b73c", + "panelRefName": "panel_10", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "e27955d6-ce96-48b9-b9d0-04f4d61a757f", + "w": 16, + "x": 16, + "y": 36 + }, + "panelIndex": "e27955d6-ce96-48b9-b9d0-04f4d61a757f", + "panelRefName": "panel_11", + "version": "7.4.0" + } + ], + "timeRestore": false, + "title": "[Metricbeat IBM MQ] Messages Overview", + "version": 1 + }, + "id": "d2112e90-36ea-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "dashboard": "7.3.0" + }, + "references": [ + { + "id": "49abed00-36eb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_0", + "type": "visualization" + }, + { + "id": "0abb72e0-36ec-11ea-9f7a-097fe7ab3ddd", + "name": "panel_1", + "type": "visualization" + }, + { + "id": "195b5860-36ec-11ea-9f7a-097fe7ab3ddd", + "name": "panel_2", + "type": "visualization" + }, + { + "id": "60b5a440-36ec-11ea-9f7a-097fe7ab3ddd", + "name": "panel_3", + "type": "visualization" + }, + { + "id": "e98d7660-36ee-11ea-9f7a-097fe7ab3ddd", + "name": "panel_4", + "type": "visualization" + }, + { + "id": "d82919b0-36ee-11ea-9f7a-097fe7ab3ddd", + "name": "panel_5", + "type": "visualization" + }, + { + "id": "23c5f140-36ef-11ea-9f7a-097fe7ab3ddd", + "name": "panel_6", + "type": "visualization" + }, + { + "id": "3ed28890-36ef-11ea-9f7a-097fe7ab3ddd", + "name": "panel_7", + "type": "visualization" + }, + { + "id": "58abd000-36ef-11ea-9f7a-097fe7ab3ddd", + "name": "panel_8", + "type": "visualization" + }, + { + "id": "67eeac40-36ef-11ea-9f7a-097fe7ab3ddd", + "name": "panel_9", + "type": "visualization" + }, + { + "id": "96d27500-36ef-11ea-9f7a-097fe7ab3ddd", + "name": "panel_10", + "type": "visualization" + }, + { + "id": "855debb0-36ef-11ea-9f7a-097fe7ab3ddd", + "name": "panel_11", + "type": "visualization" + } + ], + "type": "dashboard", + "updated_at": "2020-01-14T17:09:00.139Z", + "version": "WzYwLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Message commits [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_commit_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + 
"type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Message commits [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "49abed00-36eb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:36:10.861Z", + "version": "WzQwLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Expired messages [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_expired_message_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Expired messages [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "0abb72e0-36ec-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:36:40.846Z", + "version": "WzQxLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Purged queue [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_purged_queue_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": 
"3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Purged queue [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "195b5860-36ec-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:37:05.382Z", + "version": "WzQyLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Failed browse count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_browse_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Failed browse count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "60b5a440-36ec-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T17:02:57.717Z", + "version": "WzU5LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Non-persistent message MQPUT1 [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_non_persistent_message_mqput1_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": 
"max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Non-persistent message MQPUT1 [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "e98d7660-36ee-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:58:23.452Z", + "version": "WzQ5LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Non-persistent message MQPUT [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_non_persistent_message_mqput_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Non-persistent message MQPUT [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "d82919b0-36ee-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:57:47.695Z", + "version": "WzQ3LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Non-persistent message browse count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": 
"prometheus.metrics.ibmmq_qmgr_non_persistent_message_browse_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Non-persistent message browse count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "23c5f140-36ef-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:58:51.348Z", + "version": "WzUwLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Non-persistent message destructive get count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_non_persistent_message_destructive_get_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Non-persistent message destructive get count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "3ed28890-36ef-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T16:59:36.729Z", + "version": "WzUxLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Persistent message MQPUT count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": 
"#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_persistent_message_mqput_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Persistent message MQPUT count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "58abd000-36ef-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T17:01:06.699Z", + "version": "WzU1LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Persistent message MQPUT1 count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_persistent_message_mqput1_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Persistent message MQPUT1 count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "67eeac40-36ef-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T17:00:53.118Z", + "version": "WzU0LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Persistent message destructive get count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + 
"isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_persistent_message_destructive_get_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Persistent message destructive get count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "96d27500-36ef-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T17:02:14.424Z", + "version": "WzU4LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Persistent message browse count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_persistent_message_browse_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Persistent message browse count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "855debb0-36ef-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T17:01:35.083Z", + "version": "WzU2LDJd" + } + ], + "version": "7.4.0" +} diff --git a/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-subscriptions-overview.json b/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-subscriptions-overview.json new file mode 100644 index 000000000000..2698136c631c --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/_meta/kibana/7/dashboard/Metricbeat-ibmmq-subscriptions-overview.json @@ -0,0 +1,752 @@ +{ + "objects": [ + { + "attributes": { + "description": "The dashboard presents metric data describing IBM MQ 
subscriptions. Metrics show statistics of actions performed on durable and non-durable subscriptions, collected by a queue manager.", + "hits": 0, + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "optionsJSON": { + "hidePanelTitles": false, + "useMargins": true + }, + "panelsJSON": [ + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "e17294e6-0911-47dc-b28b-de87507924b5", + "w": 16, + "x": 0, + "y": 0 + }, + "panelIndex": "e17294e6-0911-47dc-b28b-de87507924b5", + "panelRefName": "panel_0", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "040d5750-fa77-45c6-82c1-26fc6f3859a6", + "w": 16, + "x": 16, + "y": 0 + }, + "panelIndex": "040d5750-fa77-45c6-82c1-26fc6f3859a6", + "panelRefName": "panel_1", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "fe5933aa-17b4-455e-8ab4-88d1f50ba73a", + "w": 16, + "x": 32, + "y": 0 + }, + "panelIndex": "fe5933aa-17b4-455e-8ab4-88d1f50ba73a", + "panelRefName": "panel_2", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "87a5c31a-6456-4839-a9ec-24802f51889d", + "w": 16, + "x": 0, + "y": 12 + }, + "panelIndex": "87a5c31a-6456-4839-a9ec-24802f51889d", + "panelRefName": "panel_3", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "1af1ab03-5cfd-4495-9d50-7dd77f43f1a4", + "w": 16, + "x": 16, + "y": 12 + }, + "panelIndex": "1af1ab03-5cfd-4495-9d50-7dd77f43f1a4", + "panelRefName": "panel_4", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "a9a53a87-592f-480f-997d-73fcb1843167", + "w": 16, + "x": 32, + "y": 12 + }, + "panelIndex": "a9a53a87-592f-480f-997d-73fcb1843167", + "panelRefName": "panel_5", + "version": "7.4.0" + }, + { + "embeddableConfig": {}, + "gridData": { + "h": 12, + "i": "38525462-b0f6-4cc9-a052-6e5f66f1cba3", + "w": 16, + "x": 0, + "y": 24 + }, + "panelIndex": "38525462-b0f6-4cc9-a052-6e5f66f1cba3", + "panelRefName": "panel_6", + "version": "7.4.0" + } + ], + "timeRestore": false, + "title": "[Metricbeat IBM MQ] Subscriptions Overview", + "version": 1 + }, + "id": "8f788c70-36c9-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "dashboard": "7.3.0" + }, + "references": [ + { + "id": "b455bc00-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_0", + "type": "visualization" + }, + { + "id": "bdf17380-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_1", + "type": "visualization" + }, + { + "id": "9939e270-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_2", + "type": "visualization" + }, + { + "id": "89984460-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_3", + "type": "visualization" + }, + { + "id": "908afbf0-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_4", + "type": "visualization" + }, + { + "id": "d8dbdcd0-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_5", + "type": "visualization" + }, + { + "id": "3901ed30-36cb-11ea-9f7a-097fe7ab3ddd", + "name": "panel_6", + "type": "visualization" + } + ], + "type": "dashboard", + "updated_at": "2020-01-14T12:58:08.915Z", + "version": "WzIzLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Create non-durable subscription [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + 
"axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_non_durable_subscription_create_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "b5619140-36cc-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Create non-durable subscription [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "b455bc00-36cb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:52:51.700Z", + "version": "WzE2LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Delete non-durable subscription [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_non_durable_subscription_delete_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "cd9fed60-36cc-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Delete non-durable subscription [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "bdf17380-36cb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:53:21.649Z", + "version": "WzE3LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Resume durable subscription [Metricbeat IBM MQ]", + 
"uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_durable_subscription_resume_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "e0ece030-36cc-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Resume durable subscription [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "9939e270-36cb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:53:56.259Z", + "version": "WzE4LDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Create durable subscription [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_durable_subscription_create_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "alpha": 0.3, + "beta": 0.1, + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "gamma": 0.3, + "id": "f9af6070-36cc-11ea-b7bc-e7f346d59677", + "model_type": "simple", + "multiplicative": true, + "period": 1, + "type": "derivative", + "unit": "", + "window": 5 + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Create durable subscription [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "89984460-36cb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:54:34.978Z", + "version": "WzE5LDJd" + }, + { + 
"attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Delete durable subscription [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_durable_subscription_delete_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "0a276150-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Delete durable subscription [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "908afbf0-36cb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:55:01.483Z", + "version": "WzIwLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Failed create/alter/resume subscription count [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "rgba(211,49,21,1)", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_failed_subscription_create_alter_resume_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "2809d4f0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Failed create/alter/resume subscription count [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "d8dbdcd0-36cb-11ea-9f7a-097fe7ab3ddd", + 
"migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:55:50.813Z", + "version": "WzIxLDJd" + }, + { + "attributes": { + "description": "", + "kibanaSavedObjectMeta": { + "searchSourceJSON": { + "filter": [], + "query": { + "language": "kuery", + "query": "" + } + } + }, + "title": "Alter durable subscription [Metricbeat IBM MQ]", + "uiStateJSON": {}, + "version": 1, + "visState": { + "aggs": [], + "params": { + "axis_formatter": "number", + "axis_min": "0", + "axis_position": "left", + "axis_scale": "normal", + "background_color_rules": [ + { + "id": "6fa6af70-36ca-11ea-b7bc-e7f346d59677" + } + ], + "default_index_pattern": "metricbeat-*", + "default_timefield": "@timestamp", + "id": "61ca57f0-469d-11e7-af02-69e470af7417", + "index_pattern": "", + "interval": "", + "isModelInvalid": false, + "legend_position": "bottom", + "series": [ + { + "axis_position": "right", + "chart_type": "line", + "color": "#68BC00", + "fill": 0.5, + "formatter": "number", + "id": "61ca57f1-469d-11e7-af02-69e470af7417", + "label": "", + "line_width": 1, + "metrics": [ + { + "field": "prometheus.metrics.ibmmq_qmgr_durable_subscription_alter_total", + "id": "61ca57f2-469d-11e7-af02-69e470af7417", + "type": "max" + }, + { + "field": "61ca57f2-469d-11e7-af02-69e470af7417", + "id": "3b91ade0-36cd-11ea-b7bc-e7f346d59677", + "type": "derivative", + "unit": "" + } + ], + "point_size": 1, + "separate_axis": 0, + "split_mode": "terms", + "stacked": "none", + "terms_field": "prometheus.labels.qmgr", + "type": "timeseries" + } + ], + "show_grid": 1, + "show_legend": 1, + "time_field": "", + "type": "timeseries" + }, + "title": "Alter durable subscription [Metricbeat IBM MQ]", + "type": "metrics" + } + }, + "id": "3901ed30-36cb-11ea-9f7a-097fe7ab3ddd", + "migrationVersion": { + "visualization": "7.3.1" + }, + "references": [], + "type": "visualization", + "updated_at": "2020-01-14T12:56:29.734Z", + "version": "WzIyLDJd" + } + ], + "version": "7.4.0" +} diff --git a/x-pack/metricbeat/module/ibmmq/docker-compose.yml b/x-pack/metricbeat/module/ibmmq/docker-compose.yml new file mode 100644 index 000000000000..0a0230dc629e --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/docker-compose.yml @@ -0,0 +1,11 @@ +version: '2.3' + +services: + ibmmq: + image: docker.elastic.co/integrations-ci/beats-ibmmq:${IBMMQ_VERSION:-9.1.4.0-r1-amd64}-1 + build: + context: ./_meta + args: + IBMMQ_VERSION: ${IBMMQ_VERSION:-9.1.4.0-r1-amd64} + ports: + - 9157 diff --git a/x-pack/metricbeat/module/ibmmq/fields.go b/x-pack/metricbeat/module/ibmmq/fields.go new file mode 100644 index 000000000000..734332594b92 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/fields.go @@ -0,0 +1,23 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// Code generated by beats/dev-tools/cmd/asset/asset.go - DO NOT EDIT. + +package ibmmq + +import ( + "github.com/elastic/beats/libbeat/asset" +) + +func init() { + if err := asset.SetFields("metricbeat", "ibmmq", asset.ModuleFieldsPri, AssetIbmmq); err != nil { + panic(err) + } +} + +// AssetIbmmq returns asset data. +// This is the base64 encoded gzipped contents of module/ibmmq. 
+func AssetIbmmq() string { + return "eJxUzjEOgkAQheF+T/GHhooLTGFhZ0FhbSxARty4C+vuUHB7o5ioU07el/ca7roKvo/x4cC8BRXqw76lPdYOsgbtigq9Wudg0HLJPpmfJ2HnALYwcR6WoA6KmvlpLMKpupml6uzg6jUMRd6gYeqifltfZ2tSYczzkj6fX7Gp/y3PAAAA//8gwjWY" +} diff --git a/x-pack/metricbeat/module/ibmmq/module.yml b/x-pack/metricbeat/module/ibmmq/module.yml new file mode 100644 index 000000000000..96c8a4d62336 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/module.yml @@ -0,0 +1,3 @@ +name: ibmmq +metricsets: +- qmgr diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/_meta/data.json b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/data.json new file mode 100644 index 000000000000..58cdc88f30cd --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/data.json @@ -0,0 +1,114 @@ +{ + "@timestamp": "2020-01-13T12:58:04.412Z", + "@metadata": { + "beat": "metricbeat", + "type": "_doc", + "version": "8.0.0" + }, + "service": { + "address": "localhost:9157", + "type": "ibmmq" + }, + "agent": { + "version": "8.0.0", + "type": "metricbeat", + "ephemeral_id": "752a92a2-5881-49b6-a6c8-a2848dd2a074", + "hostname": "MacBook-Elastic.local", + "id": "d662b58a-49c9-4241-aaaf-c5489341138c" + }, + "ecs": { + "version": "1.4.0" + }, + "host": { + "hostname": "MacBook-Elastic.local", + "architecture": "x86_64", + "os": { + "platform": "darwin", + "version": "10.14.6", + "family": "darwin", + "name": "Mac OS X", + "kernel": "18.7.0", + "build": "18G95" + }, + "name": "MacBook-Elastic.local", + "id": "24F065F8-4274-521D-8DD5-5D27557E15B4" + }, + "prometheus": { + "labels": { + "qmgr": "QM1", + "instance": "localhost:9157", + "job": "ibmmq" + }, + "metrics": { + "ibmmq_qmgr_mqctl_total": 0, + "ibmmq_qmgr_mqcb_total": 0, + "ibmmq_qmgr_durable_subscription_delete_total": 0, + "ibmmq_qmgr_non_persistent_message_browse_bytes_total": 0, + "ibmmq_qmgr_failed_mqset_total": 0, + "ibmmq_qmgr_failed_subscription_delete_total": 0, + "ibmmq_qmgr_destructive_get_total": 1772, + "ibmmq_qmgr_mqput_mqput1_bytes_total": 659084, + "ibmmq_qmgr_failed_mqsubrq_total": 0, + "ibmmq_qmgr_topic_put_bytes_total": 473772, + "ibmmq_qmgr_topic_mqput_mqput1_total": 1761, + "ibmmq_qmgr_persistent_topic_mqput_mqput1_total": 0, + "ibmmq_qmgr_failed_mqput_total": 0, + "ibmmq_qmgr_non_durable_subscription_delete_total": 0, + "ibmmq_qmgr_non_persistent_message_browse_total": 0, + "ibmmq_qmgr_expired_message_total": 0, + "ibmmq_qmgr_non_durable_subscription_create_total": 0, + "ibmmq_qmgr_non_persistent_message_destructive_get_total": 1772, + "ibmmq_qmgr_failed_mqcb_total": 0, + "ibmmq_qmgr_commit_total": 0, + "ibmmq_qmgr_mqset_total": 0, + "ibmmq_qmgr_mqsubrq_total": 0, + "ibmmq_qmgr_mqopen_total": 0, + "ibmmq_qmgr_durable_subscription_resume_total": 0, + "ibmmq_qmgr_non_persistent_message_mqput1_total": 0, + "ibmmq_qmgr_failed_browse_total": 0, + "ibmmq_qmgr_non_persistent_topic_mqput_mqput1_total": 1761, + "ibmmq_qmgr_mqconn_mqconnx_total": 0, + "ibmmq_qmgr_mqstat_total": 0, + "ibmmq_qmgr_log_logical_written_bytes_total": 0, + "ibmmq_qmgr_log_physical_written_bytes_total": 0, + "ibmmq_qmgr_failed_mqput1_total": 0, + "ibmmq_qmgr_published_to_subscribers_bytes_total": 473772, + "ibmmq_qmgr_failed_mqclose_total": 0, + "ibmmq_qmgr_failed_mqget_total": 1481, + "ibmmq_qmgr_rollback_total": 0, + "ibmmq_qmgr_failed_subscription_create_alter_resume_total": 0, + "ibmmq_qmgr_failed_topic_mqput_mqput1_total": 0, + "ibmmq_qmgr_mqdisc_total": 0, + "ibmmq_qmgr_failed_mqinq_total": 0, + "ibmmq_qmgr_non_persistent_message_mqput_total": 1761, + "ibmmq_qmgr_durable_subscription_create_total": 0, 
+ "ibmmq_qmgr_failed_mqopen_total": 0, + "ibmmq_qmgr_failed_mqconn_mqconnx_total": 0, + "ibmmq_qmgr_persistent_message_put_bytes_total": 0, + "ibmmq_qmgr_non_persistent_message_get_bytes_total": 663212, + "ibmmq_qmgr_durable_subscription_alter_total": 0, + "ibmmq_qmgr_persistent_message_browse_total": 0, + "ibmmq_qmgr_mqput_mqput1_total": 1761, + "ibmmq_qmgr_published_to_subscribers_message_total": 1761, + "ibmmq_qmgr_destructive_get_bytes_total": 663212, + "ibmmq_qmgr_mqclose_total": 2, + "ibmmq_qmgr_persistent_message_browse_bytes_total": 0, + "ibmmq_qmgr_persistent_message_get_bytes_total": 0, + "ibmmq_qmgr_persistent_message_mqput1_total": 0, + "ibmmq_qmgr_purged_queue_total": 0, + "ibmmq_qmgr_persistent_message_destructive_get_total": 0, + "ibmmq_qmgr_persistent_message_mqput_total": 0, + "ibmmq_qmgr_mqinq_total": 634, + "ibmmq_qmgr_non_persistent_message_put_bytes_total": 659084 + } + }, + "event": { + "dataset": "ibmmq.qmgr", + "module": "ibmmq", + "duration": 4421890 + }, + "metricset": { + "name": "qmgr", + "period": 10000 + } +} diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/_meta/docs.asciidoc b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/docs.asciidoc new file mode 100644 index 000000000000..c78215fc43b8 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/docs.asciidoc @@ -0,0 +1,3 @@ +This is the `qmgr` metricset of the IBM MQ module. It collects status information for the Queue Manager. +The manager is a system program that is responsible for maintaining the queues and ensuring that the messages +in the queues reach their destination. diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/_meta/fields.yml b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/fields.yml new file mode 100644 index 000000000000..8033a27f5ac5 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/fields.yml @@ -0,0 +1 @@ +- release: beta diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/config.yml b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/config.yml new file mode 100644 index 000000000000..0301667e9402 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/config.yml @@ -0,0 +1,4 @@ +type: http +url: "/metrics" +suffix: plain +remove_fields_from_comparison: ["prometheus.labels.instance"] diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/ibmmq-status.9.1.4.0-r1-amd64.plain b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/ibmmq-status.9.1.4.0-r1-amd64.plain new file mode 100644 index 000000000000..ca8bc1c9f0b6 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/ibmmq-status.9.1.4.0-r1-amd64.plain @@ -0,0 +1,180 @@ +# HELP ibmmq_qmgr_commit_total Commit count +# TYPE ibmmq_qmgr_commit_total counter +ibmmq_qmgr_commit_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_destructive_get_bytes_total Interval total destructive get - byte count +# TYPE ibmmq_qmgr_destructive_get_bytes_total counter +ibmmq_qmgr_destructive_get_bytes_total{qmgr="QM1"} 7812 +# HELP ibmmq_qmgr_destructive_get_total Interval total destructive get- count +# TYPE ibmmq_qmgr_destructive_get_total counter +ibmmq_qmgr_destructive_get_total{qmgr="QM1"} 23 +# HELP ibmmq_qmgr_durable_subscription_alter_total Alter durable subscription count +# TYPE ibmmq_qmgr_durable_subscription_alter_total counter +ibmmq_qmgr_durable_subscription_alter_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_durable_subscription_create_total Create durable subscription count +# TYPE ibmmq_qmgr_durable_subscription_create_total counter +ibmmq_qmgr_durable_subscription_create_total{qmgr="QM1"} 0 +# HELP 
ibmmq_qmgr_durable_subscription_delete_total Delete durable subscription count +# TYPE ibmmq_qmgr_durable_subscription_delete_total counter +ibmmq_qmgr_durable_subscription_delete_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_durable_subscription_resume_total Resume durable subscription count +# TYPE ibmmq_qmgr_durable_subscription_resume_total counter +ibmmq_qmgr_durable_subscription_resume_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_expired_message_total Expired message count +# TYPE ibmmq_qmgr_expired_message_total counter +ibmmq_qmgr_expired_message_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_browse_total Failed browse count +# TYPE ibmmq_qmgr_failed_browse_total counter +ibmmq_qmgr_failed_browse_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqcb_total Failed MQCB count +# TYPE ibmmq_qmgr_failed_mqcb_total counter +ibmmq_qmgr_failed_mqcb_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqclose_total Failed MQCLOSE count +# TYPE ibmmq_qmgr_failed_mqclose_total counter +ibmmq_qmgr_failed_mqclose_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqconn_mqconnx_total Failed MQCONN/MQCONNX count +# TYPE ibmmq_qmgr_failed_mqconn_mqconnx_total counter +ibmmq_qmgr_failed_mqconn_mqconnx_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqget_total Failed MQGET - count +# TYPE ibmmq_qmgr_failed_mqget_total counter +ibmmq_qmgr_failed_mqget_total{qmgr="QM1"} 16 +# HELP ibmmq_qmgr_failed_mqinq_total Failed MQINQ count +# TYPE ibmmq_qmgr_failed_mqinq_total counter +ibmmq_qmgr_failed_mqinq_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqopen_total Failed MQOPEN count +# TYPE ibmmq_qmgr_failed_mqopen_total counter +ibmmq_qmgr_failed_mqopen_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqput1_total Failed MQPUT1 count +# TYPE ibmmq_qmgr_failed_mqput1_total counter +ibmmq_qmgr_failed_mqput1_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqput_total Failed MQPUT count +# TYPE ibmmq_qmgr_failed_mqput_total counter +ibmmq_qmgr_failed_mqput_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqset_total Failed MQSET count +# TYPE ibmmq_qmgr_failed_mqset_total counter +ibmmq_qmgr_failed_mqset_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_mqsubrq_total Failed MQSUBRQ count +# TYPE ibmmq_qmgr_failed_mqsubrq_total counter +ibmmq_qmgr_failed_mqsubrq_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_subscription_create_alter_resume_total Failed create/alter/resume subscription count +# TYPE ibmmq_qmgr_failed_subscription_create_alter_resume_total counter +ibmmq_qmgr_failed_subscription_create_alter_resume_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_subscription_delete_total Subscription delete failure count +# TYPE ibmmq_qmgr_failed_subscription_delete_total counter +ibmmq_qmgr_failed_subscription_delete_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_failed_topic_mqput_mqput1_total Failed topic MQPUT/MQPUT1 count +# TYPE ibmmq_qmgr_failed_topic_mqput_mqput1_total counter +ibmmq_qmgr_failed_topic_mqput_mqput1_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_log_logical_written_bytes_total Log - logical bytes written +# TYPE ibmmq_qmgr_log_logical_written_bytes_total counter +ibmmq_qmgr_log_logical_written_bytes_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_log_physical_written_bytes_total Log - physical bytes written +# TYPE ibmmq_qmgr_log_physical_written_bytes_total counter +ibmmq_qmgr_log_physical_written_bytes_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqcb_total MQCB count +# TYPE ibmmq_qmgr_mqcb_total counter +ibmmq_qmgr_mqcb_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqclose_total MQCLOSE count +# TYPE ibmmq_qmgr_mqclose_total counter 
+ibmmq_qmgr_mqclose_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqconn_mqconnx_total MQCONN/MQCONNX count +# TYPE ibmmq_qmgr_mqconn_mqconnx_total counter +ibmmq_qmgr_mqconn_mqconnx_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqctl_total MQCTL count +# TYPE ibmmq_qmgr_mqctl_total counter +ibmmq_qmgr_mqctl_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqdisc_total MQDISC count +# TYPE ibmmq_qmgr_mqdisc_total counter +ibmmq_qmgr_mqdisc_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqinq_total MQINQ count +# TYPE ibmmq_qmgr_mqinq_total counter +ibmmq_qmgr_mqinq_total{qmgr="QM1"} 4 +# HELP ibmmq_qmgr_mqopen_total MQOPEN count +# TYPE ibmmq_qmgr_mqopen_total counter +ibmmq_qmgr_mqopen_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqput_mqput1_bytes_total Interval total MQPUT/MQPUT1 byte count +# TYPE ibmmq_qmgr_mqput_mqput1_bytes_total counter +ibmmq_qmgr_mqput_mqput1_bytes_total{qmgr="QM1"} 1860 +# HELP ibmmq_qmgr_mqput_mqput1_total Interval total MQPUT/MQPUT1 count +# TYPE ibmmq_qmgr_mqput_mqput1_total counter +ibmmq_qmgr_mqput_mqput1_total{qmgr="QM1"} 6 +# HELP ibmmq_qmgr_mqset_total MQSET count +# TYPE ibmmq_qmgr_mqset_total counter +ibmmq_qmgr_mqset_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqstat_total MQSTAT count +# TYPE ibmmq_qmgr_mqstat_total counter +ibmmq_qmgr_mqstat_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_mqsubrq_total MQSUBRQ count +# TYPE ibmmq_qmgr_mqsubrq_total counter +ibmmq_qmgr_mqsubrq_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_non_durable_subscription_create_total Create non-durable subscription count +# TYPE ibmmq_qmgr_non_durable_subscription_create_total counter +ibmmq_qmgr_non_durable_subscription_create_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_non_durable_subscription_delete_total Delete non-durable subscription count +# TYPE ibmmq_qmgr_non_durable_subscription_delete_total counter +ibmmq_qmgr_non_durable_subscription_delete_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_non_persistent_message_browse_bytes_total Non-persistent message browse - byte count +# TYPE ibmmq_qmgr_non_persistent_message_browse_bytes_total counter +ibmmq_qmgr_non_persistent_message_browse_bytes_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_non_persistent_message_browse_total Non-persistent message browse - count +# TYPE ibmmq_qmgr_non_persistent_message_browse_total counter +ibmmq_qmgr_non_persistent_message_browse_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_non_persistent_message_destructive_get_total Non-persistent message destructive get - count +# TYPE ibmmq_qmgr_non_persistent_message_destructive_get_total counter +ibmmq_qmgr_non_persistent_message_destructive_get_total{qmgr="QM1"} 23 +# HELP ibmmq_qmgr_non_persistent_message_get_bytes_total Got non-persistent messages - byte count +# TYPE ibmmq_qmgr_non_persistent_message_get_bytes_total counter +ibmmq_qmgr_non_persistent_message_get_bytes_total{qmgr="QM1"} 7812 +# HELP ibmmq_qmgr_non_persistent_message_mqput1_total Non-persistent message MQPUT1 count +# TYPE ibmmq_qmgr_non_persistent_message_mqput1_total counter +ibmmq_qmgr_non_persistent_message_mqput1_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_non_persistent_message_mqput_total Non-persistent message MQPUT count +# TYPE ibmmq_qmgr_non_persistent_message_mqput_total counter +ibmmq_qmgr_non_persistent_message_mqput_total{qmgr="QM1"} 6 +# HELP ibmmq_qmgr_non_persistent_message_put_bytes_total Put non-persistent messages - byte count +# TYPE ibmmq_qmgr_non_persistent_message_put_bytes_total counter +ibmmq_qmgr_non_persistent_message_put_bytes_total{qmgr="QM1"} 1860 +# HELP ibmmq_qmgr_non_persistent_topic_mqput_mqput1_total Non-persistent - topic 
MQPUT/MQPUT1 count +# TYPE ibmmq_qmgr_non_persistent_topic_mqput_mqput1_total counter +ibmmq_qmgr_non_persistent_topic_mqput_mqput1_total{qmgr="QM1"} 6 +# HELP ibmmq_qmgr_persistent_message_browse_bytes_total Persistent message browse - byte count +# TYPE ibmmq_qmgr_persistent_message_browse_bytes_total counter +ibmmq_qmgr_persistent_message_browse_bytes_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_message_browse_total Persistent message browse - count +# TYPE ibmmq_qmgr_persistent_message_browse_total counter +ibmmq_qmgr_persistent_message_browse_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_message_destructive_get_total Persistent message destructive get - count +# TYPE ibmmq_qmgr_persistent_message_destructive_get_total counter +ibmmq_qmgr_persistent_message_destructive_get_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_message_get_bytes_total Got persistent messages - byte count +# TYPE ibmmq_qmgr_persistent_message_get_bytes_total counter +ibmmq_qmgr_persistent_message_get_bytes_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_message_mqput1_total Persistent message MQPUT1 count +# TYPE ibmmq_qmgr_persistent_message_mqput1_total counter +ibmmq_qmgr_persistent_message_mqput1_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_message_mqput_total Persistent message MQPUT count +# TYPE ibmmq_qmgr_persistent_message_mqput_total counter +ibmmq_qmgr_persistent_message_mqput_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_message_put_bytes_total Put persistent messages - byte count +# TYPE ibmmq_qmgr_persistent_message_put_bytes_total counter +ibmmq_qmgr_persistent_message_put_bytes_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_persistent_topic_mqput_mqput1_total Persistent - topic MQPUT/MQPUT1 count +# TYPE ibmmq_qmgr_persistent_topic_mqput_mqput1_total counter +ibmmq_qmgr_persistent_topic_mqput_mqput1_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_published_to_subscribers_bytes_total Published to subscribers - byte count +# TYPE ibmmq_qmgr_published_to_subscribers_bytes_total counter +ibmmq_qmgr_published_to_subscribers_bytes_total{qmgr="QM1"} 1224 +# HELP ibmmq_qmgr_published_to_subscribers_message_total Published to subscribers - message count +# TYPE ibmmq_qmgr_published_to_subscribers_message_total counter +ibmmq_qmgr_published_to_subscribers_message_total{qmgr="QM1"} 6 +# HELP ibmmq_qmgr_purged_queue_total Purged queue count +# TYPE ibmmq_qmgr_purged_queue_total counter +ibmmq_qmgr_purged_queue_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_rollback_total Rollback count +# TYPE ibmmq_qmgr_rollback_total counter +ibmmq_qmgr_rollback_total{qmgr="QM1"} 0 +# HELP ibmmq_qmgr_topic_mqput_mqput1_total Topic MQPUT/MQPUT1 interval total +# TYPE ibmmq_qmgr_topic_mqput_mqput1_total counter +ibmmq_qmgr_topic_mqput_mqput1_total{qmgr="QM1"} 6 +# HELP ibmmq_qmgr_topic_put_bytes_total Interval total topic bytes put +# TYPE ibmmq_qmgr_topic_put_bytes_total counter +ibmmq_qmgr_topic_put_bytes_total{qmgr="QM1"} 1224 diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/ibmmq-status.9.1.4.0-r1-amd64.plain-expected.json b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/ibmmq-status.9.1.4.0-r1-amd64.plain-expected.json new file mode 100644 index 000000000000..ade0022e09e3 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/_meta/testdata/ibmmq-status.9.1.4.0-r1-amd64.plain-expected.json @@ -0,0 +1,86 @@ +[ + { + "event": { + "dataset": "ibmmq.qmgr", + "duration": 115000, + "module": "ibmmq" + }, + "metricset": { + "name": "qmgr", + "period": 10000 + }, + "prometheus": { + "labels": { + "instance": 
"127.0.0.1:55911", + "job": "ibmmq", + "qmgr": "QM1" + }, + "metrics": { + "ibmmq_qmgr_commit_total": 0, + "ibmmq_qmgr_destructive_get_bytes_total": 7812, + "ibmmq_qmgr_destructive_get_total": 23, + "ibmmq_qmgr_durable_subscription_alter_total": 0, + "ibmmq_qmgr_durable_subscription_create_total": 0, + "ibmmq_qmgr_durable_subscription_delete_total": 0, + "ibmmq_qmgr_durable_subscription_resume_total": 0, + "ibmmq_qmgr_expired_message_total": 0, + "ibmmq_qmgr_failed_browse_total": 0, + "ibmmq_qmgr_failed_mqcb_total": 0, + "ibmmq_qmgr_failed_mqclose_total": 0, + "ibmmq_qmgr_failed_mqconn_mqconnx_total": 0, + "ibmmq_qmgr_failed_mqget_total": 16, + "ibmmq_qmgr_failed_mqinq_total": 0, + "ibmmq_qmgr_failed_mqopen_total": 0, + "ibmmq_qmgr_failed_mqput1_total": 0, + "ibmmq_qmgr_failed_mqput_total": 0, + "ibmmq_qmgr_failed_mqset_total": 0, + "ibmmq_qmgr_failed_mqsubrq_total": 0, + "ibmmq_qmgr_failed_subscription_create_alter_resume_total": 0, + "ibmmq_qmgr_failed_subscription_delete_total": 0, + "ibmmq_qmgr_failed_topic_mqput_mqput1_total": 0, + "ibmmq_qmgr_log_logical_written_bytes_total": 0, + "ibmmq_qmgr_log_physical_written_bytes_total": 0, + "ibmmq_qmgr_mqcb_total": 0, + "ibmmq_qmgr_mqclose_total": 0, + "ibmmq_qmgr_mqconn_mqconnx_total": 0, + "ibmmq_qmgr_mqctl_total": 0, + "ibmmq_qmgr_mqdisc_total": 0, + "ibmmq_qmgr_mqinq_total": 4, + "ibmmq_qmgr_mqopen_total": 0, + "ibmmq_qmgr_mqput_mqput1_bytes_total": 1860, + "ibmmq_qmgr_mqput_mqput1_total": 6, + "ibmmq_qmgr_mqset_total": 0, + "ibmmq_qmgr_mqstat_total": 0, + "ibmmq_qmgr_mqsubrq_total": 0, + "ibmmq_qmgr_non_durable_subscription_create_total": 0, + "ibmmq_qmgr_non_durable_subscription_delete_total": 0, + "ibmmq_qmgr_non_persistent_message_browse_bytes_total": 0, + "ibmmq_qmgr_non_persistent_message_browse_total": 0, + "ibmmq_qmgr_non_persistent_message_destructive_get_total": 23, + "ibmmq_qmgr_non_persistent_message_get_bytes_total": 7812, + "ibmmq_qmgr_non_persistent_message_mqput1_total": 0, + "ibmmq_qmgr_non_persistent_message_mqput_total": 6, + "ibmmq_qmgr_non_persistent_message_put_bytes_total": 1860, + "ibmmq_qmgr_non_persistent_topic_mqput_mqput1_total": 6, + "ibmmq_qmgr_persistent_message_browse_bytes_total": 0, + "ibmmq_qmgr_persistent_message_browse_total": 0, + "ibmmq_qmgr_persistent_message_destructive_get_total": 0, + "ibmmq_qmgr_persistent_message_get_bytes_total": 0, + "ibmmq_qmgr_persistent_message_mqput1_total": 0, + "ibmmq_qmgr_persistent_message_mqput_total": 0, + "ibmmq_qmgr_persistent_message_put_bytes_total": 0, + "ibmmq_qmgr_persistent_topic_mqput_mqput1_total": 0, + "ibmmq_qmgr_published_to_subscribers_bytes_total": 1224, + "ibmmq_qmgr_published_to_subscribers_message_total": 6, + "ibmmq_qmgr_purged_queue_total": 0, + "ibmmq_qmgr_rollback_total": 0, + "ibmmq_qmgr_topic_mqput_mqput1_total": 6, + "ibmmq_qmgr_topic_put_bytes_total": 1224 + } + }, + "service": { + "address": "127.0.0.1:55555", + "type": "ibmmq" + } + } +] diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/manifest.yml b/x-pack/metricbeat/module/ibmmq/qmgr/manifest.yml new file mode 100644 index 000000000000..ec802f1ca1b3 --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/manifest.yml @@ -0,0 +1,6 @@ +default: true +input: + module: prometheus + metricset: collector + defaults: + metrics_path: /metrics diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/qmgr_integration_test.go b/x-pack/metricbeat/module/ibmmq/qmgr/qmgr_integration_test.go new file mode 100644 index 000000000000..4261a9fbf89f --- /dev/null +++ 
b/x-pack/metricbeat/module/ibmmq/qmgr/qmgr_integration_test.go @@ -0,0 +1,48 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// +build integration + +package qmgr + +import ( + "os" + "testing" + + "github.com/stretchr/testify/assert" + + "github.com/elastic/beats/libbeat/tests/compose" + "github.com/elastic/beats/metricbeat/mb" + mbtest "github.com/elastic/beats/metricbeat/mb/testing" + + // Register input module and metricset + _ "github.com/elastic/beats/metricbeat/module/prometheus" + _ "github.com/elastic/beats/metricbeat/module/prometheus/collector" +) + +func init() { + // To be moved to some kind of helper + os.Setenv("BEAT_STRICT_PERMS", "false") + mb.Registry.SetSecondarySource(mb.NewLightModulesSource("../../../module")) +} + +func TestFetch(t *testing.T) { + service := compose.EnsureUp(t, "ibmmq") + + f := mbtest.NewFetcher(t, getConfig(service.Host())) + events, errs := f.FetchEvents() + if len(errs) > 0 { + t.Fatalf("Expected 0 error, had %d. %v\n", len(errs), errs) + } + assert.NotEmpty(t, events) + t.Logf("%s/%s event: %+v", f.Module().Name(), f.Name(), events[0]) +} + +func getConfig(host string) map[string]interface{} { + return map[string]interface{}{ + "module": "ibmmq", + "metricsets": []string{"qmgr"}, + "hosts": []string{host}, + } +} diff --git a/x-pack/metricbeat/module/ibmmq/qmgr/qmgr_test.go b/x-pack/metricbeat/module/ibmmq/qmgr/qmgr_test.go new file mode 100644 index 000000000000..458bd7e5dc0a --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/qmgr/qmgr_test.go @@ -0,0 +1,32 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License.
+ +// +build !integration + +package qmgr + +import ( + "os" + "testing" + + "github.com/elastic/beats/libbeat/logp" + "github.com/elastic/beats/metricbeat/mb" + mbtest "github.com/elastic/beats/metricbeat/mb/testing" + + // Register input module and metricset + _ "github.com/elastic/beats/metricbeat/module/prometheus" + _ "github.com/elastic/beats/metricbeat/module/prometheus/collector" +) + +func init() { + // To be moved to some kind of helper + os.Setenv("BEAT_STRICT_PERMS", "false") + mb.Registry.SetSecondarySource(mb.NewLightModulesSource("../../../module")) +} + +func TestEventMapping(t *testing.T) { + logp.TestingSetup() + + mbtest.TestDataFiles(t, "ibmmq", "qmgr") +} diff --git a/x-pack/metricbeat/module/ibmmq/test_ibmmq.py b/x-pack/metricbeat/module/ibmmq/test_ibmmq.py new file mode 100644 index 000000000000..c882860d6bda --- /dev/null +++ b/x-pack/metricbeat/module/ibmmq/test_ibmmq.py @@ -0,0 +1,35 @@ +import os +import sys +import unittest + +sys.path.append(os.path.join(os.path.dirname(__file__), '../../tests/system')) +from xpack_metricbeat import XPackTest, metricbeat + + +class Test(XPackTest): + + COMPOSE_SERVICES = ['ibmmq'] + + @unittest.skipUnless(metricbeat.INTEGRATION_TESTS, "integration test") + def test_qmgr(self): + """ + ibmmq qmgr test + """ + self.render_config_template(modules=[{ + "name": "ibmmq", + "metricsets": ["qmgr"], + "hosts": self.get_hosts(), + "period": "5s", + }]) + proc = self.start_beat(home=self.beat_path) + self.wait_until(lambda: self.output_lines() > 0) + proc.check_kill_and_wait() + self.assert_no_logged_warnings() + + output = self.read_output_json() + self.assertGreater(len(output), 0) + + for evt in output: + self.assert_fields_are_documented(evt) + self.assertIn("prometheus", evt.keys(), evt) + self.assertIn("metrics", evt["prometheus"].keys(), evt) diff --git a/x-pack/metricbeat/module/istio/_meta/Dockerfile b/x-pack/metricbeat/module/istio/_meta/Dockerfile new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/x-pack/metricbeat/module/istio/_meta/config.reference.yml b/x-pack/metricbeat/module/istio/_meta/config.reference.yml new file mode 100644 index 000000000000..8fe23bdf2add --- /dev/null +++ b/x-pack/metricbeat/module/istio/_meta/config.reference.yml @@ -0,0 +1,4 @@ +- module: istio + metricsets: ["mesh"] + period: 10s + hosts: ["localhost:42422"] diff --git a/x-pack/metricbeat/module/istio/_meta/config.yml b/x-pack/metricbeat/module/istio/_meta/config.yml new file mode 100644 index 000000000000..8fe23bdf2add --- /dev/null +++ b/x-pack/metricbeat/module/istio/_meta/config.yml @@ -0,0 +1,4 @@ +- module: istio + metricsets: ["mesh"] + period: 10s + hosts: ["localhost:42422"] diff --git a/x-pack/metricbeat/module/istio/_meta/docs.asciidoc b/x-pack/metricbeat/module/istio/_meta/docs.asciidoc new file mode 100644 index 000000000000..24b262314e5a --- /dev/null +++ b/x-pack/metricbeat/module/istio/_meta/docs.asciidoc @@ -0,0 +1,9 @@ +This is the Istio module. It collects metrics from the +Istio https://istio.io/docs/tasks/observability/metrics/querying-metrics/#about-the-prometheus-add-on[Prometheus exporter endpoints]. + +The default metricset is `mesh`.
+ +[float] +=== Compatibility + +The Istio module is tested with Istio 1.4 diff --git a/x-pack/metricbeat/module/istio/_meta/fields.yml b/x-pack/metricbeat/module/istio/_meta/fields.yml new file mode 100644 index 000000000000..62608c294b14 --- /dev/null +++ b/x-pack/metricbeat/module/istio/_meta/fields.yml @@ -0,0 +1,11 @@ +- key: istio + title: "istio" + description: > + istio Module + release: beta + fields: + - name: istio + type: group + description: > + `istio` contains statistics that were read from Istio + fields: diff --git a/x-pack/metricbeat/module/istio/doc.go b/x-pack/metricbeat/module/istio/doc.go new file mode 100644 index 000000000000..facfbe7beb14 --- /dev/null +++ b/x-pack/metricbeat/module/istio/doc.go @@ -0,0 +1,6 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// Package istio is a Metricbeat module that contains MetricSets. +package istio diff --git a/x-pack/metricbeat/module/istio/docker-compose.yml b/x-pack/metricbeat/module/istio/docker-compose.yml new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/x-pack/metricbeat/module/istio/fields.go b/x-pack/metricbeat/module/istio/fields.go new file mode 100644 index 000000000000..a6cfe7f83177 --- /dev/null +++ b/x-pack/metricbeat/module/istio/fields.go @@ -0,0 +1,23 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// Code generated by beats/dev-tools/cmd/asset/asset.go - DO NOT EDIT. + +package istio + +import ( + "github.com/elastic/beats/libbeat/asset" +) + +func init() { + if err := asset.SetFields("metricbeat", "istio", asset.ModuleFieldsPri, AssetIstio); err != nil { + panic(err) + } +} + +// AssetIstio returns asset data. +// This is the base64 encoded gzipped contents of module/istio. 
+func AssetIstio() string { + return "eJzUmM9y27gPx+9+CkyPv2n1AD78Znb2sjnsTKebe0uRcMSaIrgEGNd9+h3qj6PIjOI49rqrQyamRHw/AEgQ0ifY4n4NlsXSCkCsOFzDh+73hxWAQdbRBrHk1/D/FQD0z8KfZJLDFUBEh4pxDTWKWgFsLDrD6+7RT+BVi0/m8yX7gGt4iJTCMFLQyNe3btY30ORFWc/AoiSPaQZplMAOI0JEZWATqYW7icgUYgrSIjeHwRLLAk++fi+wRHRK0IAQSIM9RicEjPHRapxYmAdrvOa8z4LnWZR/ZuaJXfCHzG4s4OfrvkEIkVqUBhOXrY/S36kuqm5xv6No3iU8tz1qRvw7IQsXhR35hzeqkih3MAqN8sahgXoPyg+5CpF+7JdYKpOiygJVy1Wd9Bal+l+Rj+rvqOcJ6Qe/nuvBlx4CRghoLAs9RNVCz5KTCK11zjJq8oZP9oVTe2qYNxRbJesDxVk+8GH6R+DUAm0OA+c7oSn5ecjfGespp09tjTGjFlfmnIztT6zqveANV8pf9icer5JTsd+wKM7jmQsssFwht3Oa5xIvgHEgz/jfS+4x9w2zewzzq6Q3UBSMlzvqvgwWwRr0YjcWuesNRqVcTfrf/YqHOwHLwCi5izDIYn1f6+1mmJTvd22O6toKjNPDC5Q3wJSixuIM7Sx6mc6oipHoTVQ7iltHylR59JINgOV5RLJCjsYAPyrDrrG66Vq/SK5/dIA7GZyDeqFpuih9JzMmdObGImyI1msblLsuY0CMcNAaQSWqzcbqkWWyAHcN+n6SStJkU3pYigyJcdknFcJ1vRkCrEKAWjEaIN/9cKpGd04WHjHycTtzYepB5FW+IuikHNxiZ06r0XJEXwS9xU48D/uX2JNToHdvzKmxq+/OadCXt2gxPa97cIvdurySXkUePgJUDXG51bkI9xRyUISsOLZdtnaY3+HyW6/1mlrrHw7tx8k+XL/qlPzIqm9j/LcLToH6lRUyhj5EEtJ07XozqCz3nb99vnt60m7y/4/WoPkIJA3GneXDTKCYmzOPuvN5nFXO0qHz12TKWTnjs07By1EHss6Rq92MvgZZhhCRc0tM3u1zgfrj/v4ztCjRai578eRuxahTtLKvAjmr91fueIZdMCv6vfRyQtskSbmv4rg/PPr+fzguugfUNkerbZMf7Xa+YfdCMXuNeOFYEoLkt552vheZzRp6HbY+/x0CN9Jr5T0J1N0KDRhdvhNS9z21Wv0TAAD///m4gQY=" +} diff --git a/x-pack/metricbeat/module/istio/mesh/_meta/data.json b/x-pack/metricbeat/module/istio/mesh/_meta/data.json new file mode 100644 index 000000000000..fd43fc1aacab --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/_meta/data.json @@ -0,0 +1,112 @@ +{ + "@timestamp": "2019-03-01T08:05:34.853Z", + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "unknown" + } + }, + "destination": { + "app": "reviews", + "principal": "unknown", + "service": { + "host": "details.default.svc.cluster.local", + "name": "details", + "namespace": "default" + }, + "version": "v1", + "workload": { + "name": "reviews-v1", + "namespace": "default" + } + }, + "reporter": "source", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "25": 1, + "250": 1, + "2500": 1, + "5": 0, + "50": 1, + "500": 1, + "5000": 1 + }, + "count": 1, + "sum": 5.815905 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 178 + } + } + }, + "source": { + "app": "productpage", + "principal": "unknown", + "version": "v1", + "workload": { + "name": "productpage-v1", + "namespace": "default" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } +} \ No newline at end of file diff --git a/x-pack/metricbeat/module/istio/mesh/_meta/docs.asciidoc b/x-pack/metricbeat/module/istio/mesh/_meta/docs.asciidoc new file mode 100644 index 000000000000..19457b9f51e5 --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/_meta/docs.asciidoc @@ -0,0 +1 @@ +This is the mesh metricset of the module istio. This metricset collects all Mixer-generated metrics. 
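For reference, a minimal sketch of how the `mesh` metricset can be enabled in a standalone `metricbeat.yml`, mirroring the defaults added in `_meta/config.yml` above; the `localhost:42422` host is the Mixer Prometheus endpoint assumed by that default config and will differ per deployment:

metricbeat.modules:
# Istio mesh metrics scraped from Mixer's Prometheus endpoint (default port assumed from _meta/config.yml)
- module: istio
  metricsets: ["mesh"]
  period: 10s
  hosts: ["localhost:42422"]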
diff --git a/x-pack/metricbeat/module/istio/mesh/_meta/fields.yml b/x-pack/metricbeat/module/istio/mesh/_meta/fields.yml new file mode 100644 index 000000000000..3e7534b2c988 --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/_meta/fields.yml @@ -0,0 +1,131 @@ +- name: mesh + type: group + description: > + Contains statistics related to the Istio mesh service + release: beta + fields: + - name: instance + type: text + description: > + The Prometheus instance + - name: job + type: keyword + description: > + The Prometheus job + - name: requests + type: long + description: > + Total requests handled by an Istio proxy + - name: request.duration.ms.bucket.* + type: object + object_type: long + description: > + Request duration histogram buckets in milliseconds + - name: request.duration.ms.sum + type: long + format: duration + description: > + Requests duration, sum of durations in milliseconds + - name: request.duration.ms.count + type: long + description: > + Requests duration, number of requests + - name: request.size.bytes.bucket.* + type: object + object_type: long + description: > + Request Size histogram buckets + - name: request.size.bytes.sum + type: long + description: > + Request Size histogram sum + - name: request.size.bytes.count + type: long + description: > + Request Size histogram count + + - name: response.size.bytes.bucket.* + type: object + object_type: long + description: > + Response Size histogram buckets + - name: response.size.bytes.sum + type: long + description: > + Response Size histogram sum + - name: response.size.bytes.count + type: long + description: > + Response Size histogram count + + - name: reporter + type: keyword + description: > + Reporter identifies the reporter of the request. It is set to destination if report is from a server Istio proxy and source if report is from a client Istio proxy. + - name: source.workload.name + type: keyword + description: > + This identifies the name of the source workload which controls the source. + - name: source.workload.namespace + type: keyword + description: > + This identifies the namespace of the source workload. + - name: source.principal + type: keyword + description: > + This identifies the peer principal of the traffic source. It is set when peer authentication is used. + - name: source.app + type: keyword + description: > + This identifies the source app based on the app label of the source workload. + - name: source.version + type: keyword + description: > + This identifies the version of the source workload. + + - name: destination.workload.name + type: keyword + description: > + This identifies the name of the destination workload. + - name: destination.workload.namespace + type: keyword + description: > + This identifies the namespace of the destination workload. + - name: destination.principal + type: keyword + description: > + This identifies the peer principal of the traffic destination. It is set when peer authentication is used. + - name: destination.app + type: keyword + description: > + This identifies the destination app based on the app label of the destination workload. + - name: destination.version + type: keyword + description: > + This identifies the version of the destination workload. + + - name: destination.service.host + type: keyword + description: > + This identifies the destination service host responsible for an incoming request. + - name: destination.service.name + type: keyword + description: > + This identifies the destination service name.
+ - name: destination.service.namespace + type: keyword + description: > + This identifies the namespace of destination service. + + - name: request.protocol + type: keyword + description: > + This identifies the protocol of the request. It is set to API protocol if provided, otherwise request or connection protocol. + - name: response.code + type: long + description: > + This identifies the response code of the request. This label is present only on HTTP metrics. + - name: connection.security.policy + type: keyword + description: > + This identifies the service authentication policy of the request. It is set to mutual_tls when Istio is used to make communication secure and report is from destination. It is set to unknown when report is from source since security policy cannot be properly populated. diff --git a/x-pack/metricbeat/module/istio/mesh/_meta/testdata/config.yml b/x-pack/metricbeat/module/istio/mesh/_meta/testdata/config.yml new file mode 100644 index 000000000000..ab6bf2416543 --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/_meta/testdata/config.yml @@ -0,0 +1,3 @@ +type: http +url: "/metrics" +suffix: plain diff --git a/x-pack/metricbeat/module/istio/mesh/_meta/testdata/docs.plain b/x-pack/metricbeat/module/istio/mesh/_meta/testdata/docs.plain new file mode 100644 index 000000000000..c7e0ee303a88 --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/_meta/testdata/docs.plain @@ -0,0 +1,281 @@ +# HELP istio_request_bytes request_bytes +# TYPE istio_request_bytes histogram +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 1 
+istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 
+istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_bytes_sum{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0 +istio_request_bytes_count{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1"} 1 
+istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100000"} 1 
+istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+06"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+07"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+08"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="+Inf"} 1 +istio_request_bytes_sum{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 0 
+istio_request_bytes_count{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 
+istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 +istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 
+istio_request_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_bytes_sum{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0 +istio_request_bytes_count{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_bytes_sum{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0 +istio_request_bytes_count{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10000"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+06"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+07"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+08"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="+Inf"} 1 
+istio_request_bytes_sum{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 0 +istio_request_bytes_count{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_bytes_sum{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0 +istio_request_bytes_count{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="10"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="100"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="10000"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="100000"} 1 
+istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1e+06"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1e+07"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1e+08"} 1 +istio_request_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="+Inf"} 1 +istio_request_bytes_sum{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 0 
+istio_request_bytes_count{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 1 +# HELP istio_request_duration_seconds request_duration_seconds +# TYPE istio_request_duration_seconds histogram +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.005"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.01"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.025"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.05"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.25"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="2.5"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_duration_seconds_sum{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0.001650216 +istio_request_duration_seconds_count{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.005"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.01"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.025"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.05"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.1"} 0 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.25"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.5"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="2.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="5"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="+Inf"} 1 +istio_request_duration_seconds_sum{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 0.656875796 +istio_request_duration_seconds_count{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.005"} 0 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.01"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.025"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.05"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.1"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.25"} 0 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.5"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="2.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_duration_seconds_sum{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0.604445345 +istio_request_duration_seconds_count{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.005"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.01"} 0 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.025"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.05"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.1"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.25"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.5"} 0 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="2.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 
+istio_request_duration_seconds_sum{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0.605792291 +istio_request_duration_seconds_count{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.005"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.01"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.025"} 0 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.05"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.1"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.25"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="0.5"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="2.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="+Inf"} 1 +istio_request_duration_seconds_sum{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 0.657740289 
+istio_request_duration_seconds_count{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.005"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.01"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.025"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.05"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.25"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="0.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="2.5"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_request_duration_seconds_sum{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 0.005815905 +istio_request_duration_seconds_count{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.005"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.01"} 0 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.025"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.05"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.1"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.25"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="0.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="2.5"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="5"} 1 
+istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="10"} 1 +istio_request_duration_seconds_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="+Inf"} 1 +istio_request_duration_seconds_sum{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 0.016062491 +istio_request_duration_seconds_count{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 1 +# HELP istio_requests_total requests_total +# TYPE istio_requests_total counter +istio_requests_total{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 
+istio_requests_total{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_requests_total{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_requests_total{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_requests_total{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_requests_total{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 
+istio_requests_total{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 1 +# HELP istio_response_bytes response_bytes +# TYPE istio_response_bytes histogram +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 
+istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 
+istio_response_bytes_bucket{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_response_bytes_sum{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 178 +istio_response_bytes_count{connection_security_policy="none",destination_app="details",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10"} 0 
+istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1000"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10000"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100000"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+06"} 1 
+istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+07"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+08"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="+Inf"} 1 +istio_response_bytes_sum{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 5183 +istio_response_bytes_count{connection_security_policy="none",destination_app="productpage",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 
+istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 0 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 
+istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 +istio_response_bytes_bucket{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 
+istio_response_bytes_sum{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 379 +istio_response_bytes_count{connection_security_policy="none",destination_app="reviews",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v2",destination_workload="reviews-v2",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 0 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_response_bytes_sum{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 379 +istio_response_bytes_count{connection_security_policy="unknown",destination_app="details",destination_principal="unknown",destination_service="reviews.default.svc.cluster.local",destination_service_name="reviews",destination_service_namespace="default",destination_version="v1",destination_workload="details-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1"} 0 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1000"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="10000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="100000"} 1 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+06"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+07"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="1e+08"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system",le="+Inf"} 1 +istio_response_bytes_sum{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 5183 
+istio_response_bytes_count{connection_security_policy="unknown",destination_app="ratings",destination_principal="unknown",destination_service="productpage.default.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="default",destination_version="v1",destination_workload="ratings-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="istio-ingressgateway",source_principal="unknown",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1000"} 1 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="10000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="100000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+06"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+07"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="1e+08"} 1 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default",le="+Inf"} 1 +istio_response_bytes_sum{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 178 +istio_response_bytes_count{connection_security_policy="unknown",destination_app="reviews",destination_principal="unknown",destination_service="details.default.svc.cluster.local",destination_service_name="details",destination_service_namespace="default",destination_version="v1",destination_workload="reviews-v1",destination_workload_namespace="default",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="productpage",source_principal="unknown",source_version="v1",source_workload="productpage-v1",source_workload_namespace="default"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1"} 0 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="10"} 0 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="100"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="10000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="100000"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1e+06"} 1 
+istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1e+07"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="1e+08"} 1 +istio_response_bytes_bucket{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default",le="+Inf"} 1 +istio_response_bytes_sum{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 48 +istio_response_bytes_count{connection_security_policy="unknown",destination_app="sidecarInjectorWebhook",destination_principal="unknown",destination_service="ratings.default.svc.cluster.local",destination_service_name="ratings",destination_service_namespace="default",destination_version="unknown",destination_workload="istio-sidecar-injector",destination_workload_namespace="istio-system",permissive_response_code="none",permissive_response_policyid="none",reporter="source",request_protocol="http",response_code="200",response_flags="-",source_app="reviews",source_principal="unknown",source_version="v2",source_workload="reviews-v2",source_workload_namespace="default"} 1 diff --git a/x-pack/metricbeat/module/istio/mesh/_meta/testdata/docs.plain-expected.json 
b/x-pack/metricbeat/module/istio/mesh/_meta/testdata/docs.plain-expected.json new file mode 100644 index 000000000000..4b9c124c1205 --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/_meta/testdata/docs.plain-expected.json @@ -0,0 +1,779 @@ +[ + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "unknown" + } + }, + "destination": { + "app": "reviews", + "principal": "unknown", + "service": { + "host": "details.default.svc.cluster.local", + "name": "details", + "namespace": "default" + }, + "version": "v1", + "workload": { + "name": "reviews-v1", + "namespace": "default" + } + }, + "reporter": "source", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "25": 1, + "250": 1, + "2500": 1, + "5": 0, + "50": 1, + "500": 1, + "5000": 1 + }, + "count": 1, + "sum": 5.815905 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 178 + } + } + }, + "source": { + "app": "productpage", + "principal": "unknown", + "version": "v1", + "workload": { + "name": "productpage-v1", + "namespace": "default" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + }, + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "none" + } + }, + "destination": { + "app": "productpage", + "principal": "unknown", + "service": { + "host": "productpage.default.svc.cluster.local", + "name": "productpage", + "namespace": "default" + }, + "version": "v1", + "workload": { + "name": "productpage-v1", + "namespace": "default" + } + }, + "reporter": "destination", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "25": 0, + "250": 0, + "2500": 1, + "5": 0, + "50": 0, + "500": 0, + "5000": 1 + }, + "count": 1, + "sum": 656.875796 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 0, + "1000": 0, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 5183 + } + } + }, + "source": { + "app": "istio-ingressgateway", + "principal": "unknown", + "version": "unknown", + "workload": { + "name": "istio-ingressgateway", + "namespace": "istio-system" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + }, + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "unknown" + } + }, + 
"destination": { + "app": "sidecarInjectorWebhook", + "principal": "unknown", + "service": { + "host": "ratings.default.svc.cluster.local", + "name": "ratings", + "namespace": "default" + }, + "version": "unknown", + "workload": { + "name": "istio-sidecar-injector", + "namespace": "istio-system" + } + }, + "reporter": "source", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 0, + "100": 1, + "1000": 1, + "10000": 1, + "25": 1, + "250": 1, + "2500": 1, + "5": 0, + "50": 1, + "500": 1, + "5000": 1 + }, + "count": 1, + "sum": 16.062491 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 48 + } + } + }, + "source": { + "app": "reviews", + "principal": "unknown", + "version": "v2", + "workload": { + "name": "reviews-v2", + "namespace": "default" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + }, + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "none" + } + }, + "destination": { + "app": "details", + "principal": "unknown", + "service": { + "host": "details.default.svc.cluster.local", + "name": "details", + "namespace": "default" + }, + "version": "v1", + "workload": { + "name": "details-v1", + "namespace": "default" + } + }, + "reporter": "destination", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "25": 1, + "250": 1, + "2500": 1, + "5": 1, + "50": 1, + "500": 1, + "5000": 1 + }, + "count": 1, + "sum": 1.650216 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 178 + } + } + }, + "source": { + "app": "productpage", + "principal": "unknown", + "version": "v1", + "workload": { + "name": "productpage-v1", + "namespace": "default" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + }, + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "none" + } + }, + "destination": { + "app": "reviews", + "principal": "unknown", + "service": { + "host": "reviews.default.svc.cluster.local", + "name": "reviews", + "namespace": "default" + }, + "version": "v2", + "workload": { + "name": "reviews-v2", + "namespace": "default" + } + }, + "reporter": "destination", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "25": 0, + "250": 0, + "2500": 1, + 
"5": 0, + "50": 0, + "500": 0, + "5000": 1 + }, + "count": 1, + "sum": 604.445345 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 379 + } + } + }, + "source": { + "app": "productpage", + "principal": "unknown", + "version": "v1", + "workload": { + "name": "productpage-v1", + "namespace": "default" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + }, + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "unknown" + } + }, + "destination": { + "app": "details", + "principal": "unknown", + "service": { + "host": "reviews.default.svc.cluster.local", + "name": "reviews", + "namespace": "default" + }, + "version": "v1", + "workload": { + "name": "details-v1", + "namespace": "default" + } + }, + "reporter": "source", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "25": 0, + "250": 0, + "2500": 1, + "5": 0, + "50": 0, + "500": 0, + "5000": 1 + }, + "count": 1, + "sum": 605.792291 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 379 + } + } + }, + "source": { + "app": "productpage", + "principal": "unknown", + "version": "v1", + "workload": { + "name": "productpage-v1", + "namespace": "default" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + }, + { + "event": { + "dataset": "istio.mesh", + "duration": 115000, + "module": "istio" + }, + "istio": { + "mesh": { + "connection": { + "security": { + "policy": "unknown" + } + }, + "destination": { + "app": "ratings", + "principal": "unknown", + "service": { + "host": "productpage.default.svc.cluster.local", + "name": "productpage", + "namespace": "default" + }, + "version": "v1", + "workload": { + "name": "ratings-v1", + "namespace": "default" + } + }, + "reporter": "source", + "request": { + "duration": { + "ms": { + "bucket": { + "+Inf": 1, + "10": 0, + "100": 0, + "1000": 1, + "10000": 1, + "25": 0, + "250": 0, + "2500": 1, + "5": 0, + "50": 0, + "500": 0, + "5000": 1 + }, + "count": 1, + "sum": 657.7402890000001 + } + }, + "protocol": "http", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 1, + "10": 1, + "100": 1, + "1000": 1, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 0 + } + } + }, + "requests": 1, + "response": { + "code": "200", + "size": { + "bytes": { + "bucket": { + "+Inf": 1, + "1": 0, + "10": 0, + 
"100": 0, + "1000": 0, + "10000": 1, + "100000": 1, + "1000000": 1, + "10000000": 1, + "100000000": 1 + }, + "count": 1, + "sum": 5183 + } + } + }, + "source": { + "app": "istio-ingressgateway", + "principal": "unknown", + "version": "unknown", + "workload": { + "name": "istio-ingressgateway", + "namespace": "istio-system" + } + } + } + }, + "metricset": { + "name": "mesh", + "period": 10000 + }, + "service": { + "address": "127.0.0.1:55555", + "type": "istio" + } + } +] \ No newline at end of file diff --git a/x-pack/metricbeat/module/istio/mesh/mesh.go b/x-pack/metricbeat/module/istio/mesh/mesh.go new file mode 100644 index 000000000000..e375d5ce7282 --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/mesh.go @@ -0,0 +1,60 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +package mesh + +import ( + "github.com/elastic/beats/metricbeat/helper/prometheus" + "github.com/elastic/beats/metricbeat/mb" + "github.com/elastic/beats/metricbeat/mb/parse" +) + +const ( + defaultScheme = "http" + defaultPath = "/metrics" +) + +var ( + hostParser = parse.URLHostParserBuilder{ + DefaultScheme: defaultScheme, + DefaultPath: defaultPath, + }.Build() +) + +var mapping = &prometheus.MetricsMapping{ + Metrics: map[string]prometheus.MetricMap{ + "istio_requests_total": prometheus.Metric("requests"), + "istio_request_duration_seconds": prometheus.Metric("request.duration.ms", prometheus.OpMultiplyBuckets(1000)), + "istio_request_bytes": prometheus.Metric("request.size.bytes"), + "istio_response_bytes": prometheus.Metric("response.size.bytes"), + }, + + Labels: map[string]prometheus.LabelMap{ + "instance": prometheus.KeyLabel("instance"), + "job": prometheus.KeyLabel("job"), + "source_workload": prometheus.KeyLabel("source.workload.name"), + "source_workload_namespace": prometheus.KeyLabel("source.workload.namespace"), + "source_principal": prometheus.KeyLabel("source.principal"), + "source_app": prometheus.KeyLabel("source.app"), + "source_version": prometheus.KeyLabel("source.version"), + "destination_workload": prometheus.KeyLabel("destination.workload.name"), + "destination_workload_namespace": prometheus.KeyLabel("destination.workload.namespace"), + "destination_principal": prometheus.KeyLabel("destination.principal"), + "destination_app": prometheus.KeyLabel("destination.app"), + "destination_version": prometheus.KeyLabel("destination.version"), + "destination_service": prometheus.KeyLabel("destination.service.host"), + "destination_service_name": prometheus.KeyLabel("destination.service.name"), + "destination_service_namespace": prometheus.KeyLabel("destination.service.namespace"), + "reporter": prometheus.KeyLabel("reporter"), + "request_protocol": prometheus.KeyLabel("request.protocol"), + "response_code": prometheus.KeyLabel("response.code"), + "connection_security_policy": prometheus.KeyLabel("connection.security.policy"), + }, +} + +func init() { + mb.Registry.MustAddMetricSet("istio", "mesh", + prometheus.MetricSetBuilder(mapping), + mb.WithHostParser(hostParser)) +} diff --git a/x-pack/metricbeat/module/istio/mesh/mesh_test.go b/x-pack/metricbeat/module/istio/mesh/mesh_test.go new file mode 100644 index 000000000000..801a2330992a --- /dev/null +++ b/x-pack/metricbeat/module/istio/mesh/mesh_test.go @@ -0,0 +1,19 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. 
under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// +build !integration + +package mesh + +import ( + "testing" + + mbtest "github.com/elastic/beats/metricbeat/mb/testing" + + _ "github.com/elastic/beats/x-pack/metricbeat/module/istio" +) + +func TestData(t *testing.T) { + mbtest.TestDataFiles(t, "istio", "mesh") +} diff --git a/x-pack/metricbeat/module/istio/module.yaml b/x-pack/metricbeat/module/istio/module.yaml new file mode 100644 index 000000000000..8b137891791f --- /dev/null +++ b/x-pack/metricbeat/module/istio/module.yaml @@ -0,0 +1 @@ + diff --git a/x-pack/metricbeat/module/oracle/connection.go b/x-pack/metricbeat/module/oracle/connection.go index b418c4ac5f88..96add9ced083 100644 --- a/x-pack/metricbeat/module/oracle/connection.go +++ b/x-pack/metricbeat/module/oracle/connection.go @@ -7,7 +7,7 @@ package oracle import ( "database/sql" - "gopkg.in/goracle.v2" + "github.com/godror/godror" "github.com/elastic/beats/metricbeat/mb" "github.com/elastic/beats/metricbeat/mb/parse" @@ -37,7 +37,7 @@ func init() { // NewConnection returns a connection already established with Oracle func NewConnection(c *ConnectionDetails) (*sql.DB, error) { - params, err := goracle.ParseConnString(c.Hosts[0]) + params, err := godror.ParseConnString(c.Hosts[0]) if err != nil { return nil, errors.Wrap(err, "error trying to parse connection string in field 'hosts'") } @@ -54,7 +54,7 @@ func NewConnection(c *ConnectionDetails) (*sql.DB, error) { return nil, errors.New("a user with DBA permissions are required, check your connection details on field `hosts`") } - db, err := sql.Open("goracle", params.StringWithPassword()) + db, err := sql.Open("godror", params.StringWithPassword()) if err != nil { return nil, errors.Wrap(err, "could not open database") } diff --git a/x-pack/metricbeat/module/oracle/performance/metricset_test.go b/x-pack/metricbeat/module/oracle/performance/metricset_test.go index b6182b5b41d8..80d16a4e495e 100644 --- a/x-pack/metricbeat/module/oracle/performance/metricset_test.go +++ b/x-pack/metricbeat/module/oracle/performance/metricset_test.go @@ -9,7 +9,7 @@ package performance import ( "testing" - _ "gopkg.in/goracle.v2" + _ "github.com/godror/godror" "github.com/elastic/beats/libbeat/common" mbtest "github.com/elastic/beats/metricbeat/mb/testing" diff --git a/x-pack/metricbeat/module/oracle/tablespace/metricset_test.go b/x-pack/metricbeat/module/oracle/tablespace/metricset_test.go index a7934647826e..f0bb4b672d8a 100644 --- a/x-pack/metricbeat/module/oracle/tablespace/metricset_test.go +++ b/x-pack/metricbeat/module/oracle/tablespace/metricset_test.go @@ -9,7 +9,7 @@ package tablespace import ( "testing" - _ "gopkg.in/goracle.v2" + _ "github.com/godror/godror" "github.com/elastic/beats/libbeat/tests/compose" mbtest "github.com/elastic/beats/metricbeat/mb/testing" diff --git a/x-pack/metricbeat/module/oracle/testing.go b/x-pack/metricbeat/module/oracle/testing.go index 06c87b10b562..5ffe9cd83f44 100644 --- a/x-pack/metricbeat/module/oracle/testing.go +++ b/x-pack/metricbeat/module/oracle/testing.go @@ -8,12 +8,12 @@ import ( "fmt" "os" - "gopkg.in/goracle.v2" + "github.com/godror/godror" ) // GetOracleConnectionDetails return a valid SID to use for testing func GetOracleConnectionDetails(host string) string { - params := goracle.ConnectionParams{ + params := godror.ConnectionParams{ SID: fmt.Sprintf("%s/%s", host, GetOracleEnvServiceName()), Username: 
GetOracleEnvUsername(), Password: GetOracleEnvPassword(), diff --git a/x-pack/metricbeat/module/sql/_meta/config.yml b/x-pack/metricbeat/module/sql/_meta/config.yml index 92b5f3882210..5c6a419e77ba 100644 --- a/x-pack/metricbeat/module/sql/_meta/config.yml +++ b/x-pack/metricbeat/module/sql/_meta/config.yml @@ -2,9 +2,8 @@ metricsets: - query period: 10s - hosts: ["localhost"] + hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"] driver: "postgres" - datasource: "user=myuser password=mypassword dbname=mydb sslmode=disable" sql_query: "select now()" diff --git a/x-pack/metricbeat/module/sql/_meta/docs.asciidoc b/x-pack/metricbeat/module/sql/_meta/docs.asciidoc index d7bb818a58b8..f22edf1fa200 100644 --- a/x-pack/metricbeat/module/sql/_meta/docs.asciidoc +++ b/x-pack/metricbeat/module/sql/_meta/docs.asciidoc @@ -1,3 +1,3 @@ -This is the sql module that fetches metrics from a SQL database. You can define driver, datasource and SQL query. +This is the sql module that fetches metrics from a SQL database. You can define driver and SQL query. diff --git a/x-pack/metricbeat/module/sql/docker-compose.yml b/x-pack/metricbeat/module/sql/docker-compose.yml new file mode 100644 index 000000000000..a053c322d304 --- /dev/null +++ b/x-pack/metricbeat/module/sql/docker-compose.yml @@ -0,0 +1,12 @@ +version: '2.3' + +services: + mysql: + extends: + file: ../../../../metricbeat/module/mysql/docker-compose.yml + service: mysql + + postgresql: + extends: + file: ../../../../metricbeat/module/postgresql/docker-compose.yml + service: postgresql diff --git a/x-pack/metricbeat/module/sql/query/_meta/data.json b/x-pack/metricbeat/module/sql/query/_meta/data.json index 1a92415be34c..799c66fe7bc9 100644 --- a/x-pack/metricbeat/module/sql/query/_meta/data.json +++ b/x-pack/metricbeat/module/sql/query/_meta/data.json @@ -1,26 +1,30 @@ { - "@timestamp":"2016-05-23T08:05:34.853Z", - "beat":{ - "hostname":"beathost", - "name":"beathost" + "@timestamp": "2017-10-12T08:05:34.853Z", + "event": { + "dataset": "sql.query", + "duration": 115000, + "module": "sql" }, - "metricset":{ - "host":"localhost", - "module":"sql", - "name":"query", - "rtt":44269 + "metricset": { + "name": "query", + "period": 10000 }, - "sql":{ - "metrics":{ - "numeric":{ - "mynumericfield":1 - }, - "string":{ - "mystringfield":"abc" - } + "service": { + "address": "172.22.0.3:3306", + "type": "sql" + }, + "sql": { + "driver": "mysql", + "metrics": { + "numeric": { + "table_rows": 6 + }, + "string": { + "engine": "InnoDB", + "table_name": "sys_config", + "table_schema": "sys" + } }, - "driver":"postgres", - "query":"select * from mytable" - }, - "type":"metricsets" + "query": "select table_schema, table_name, engine, table_rows from information_schema.tables where table_rows \u003e 0;" + } } \ No newline at end of file diff --git a/x-pack/metricbeat/module/sql/query/_meta/data_postgres.json b/x-pack/metricbeat/module/sql/query/_meta/data_postgres.json new file mode 100644 index 000000000000..0f2db40c5b12 --- /dev/null +++ b/x-pack/metricbeat/module/sql/query/_meta/data_postgres.json @@ -0,0 +1,45 @@ +{ + "@timestamp": "2017-10-12T08:05:34.853Z", + "event": { + "dataset": "sql.query", + "duration": 115000, + "module": "sql" + }, + "metricset": { + "name": "query", + "period": 10000 + }, + "service": { + "address": "172.22.0.2:5432", + "type": "sql" + }, + "sql": { + "driver": "postgres", + "metrics": { + "numeric": { + "blk_read_time": 0, + "blk_write_time": 0, + "blks_hit": 1923, + "blks_read": 111, + "conflicts": 0, + "datid": 12379, 
+ "deadlocks": 0, + "numbackends": 1, + "temp_bytes": 0, + "temp_files": 0, + "tup_deleted": 0, + "tup_fetched": 1249, + "tup_inserted": 0, + "tup_returned": 1356, + "tup_updated": 0, + "xact_commit": 18, + "xact_rollback": 0 + }, + "string": { + "datname": "postgres", + "stats_reset": "2020-01-21 11:23:56.53" + } + }, + "query": "select * from pg_stat_database" + } +} \ No newline at end of file diff --git a/x-pack/metricbeat/module/sql/query/dsn.go b/x-pack/metricbeat/module/sql/query/dsn.go new file mode 100644 index 000000000000..2a472c3fbe5b --- /dev/null +++ b/x-pack/metricbeat/module/sql/query/dsn.go @@ -0,0 +1,42 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +package query + +import ( + "net/url" + + "github.com/go-sql-driver/mysql" + + "github.com/elastic/beats/metricbeat/mb" +) + +// ParseDSN tries to parse the host +func ParseDSN(mod mb.Module, host string) (mb.HostData, error) { + // TODO: Add support for `username` and `password` as module options + + sanitized := sanitize(host) + + return mb.HostData{ + URI: host, + SanitizedURI: sanitized, + Host: sanitized, + }, nil +} + +func sanitize(host string) string { + // Host is a standard URL + if url, err := url.Parse(host); err == nil && len(url.Host) > 0 { + return url.Host + } + + // Host is a MySQL DSN + if config, err := mysql.ParseDSN(host); err == nil { + return config.Addr + } + + // TODO: Add support for PostgreSQL connection strings and other formats + + return "(redacted)" +} diff --git a/x-pack/metricbeat/module/sql/query/query.go b/x-pack/metricbeat/module/sql/query/query.go index 884df1b9df91..3322144d993d 100644 --- a/x-pack/metricbeat/module/sql/query/query.go +++ b/x-pack/metricbeat/module/sql/query/query.go @@ -10,13 +10,12 @@ import ( "strings" "time" + "github.com/jmoiron/sqlx" "github.com/pkg/errors" "github.com/elastic/beats/libbeat/common" "github.com/elastic/beats/libbeat/common/cfgwarn" "github.com/elastic/beats/metricbeat/mb" - - "github.com/jmoiron/sqlx" ) // init registers the MetricSet with the central registry as soon as the program @@ -24,7 +23,9 @@ import ( // the MetricSet for each host defined in the module's configuration. After the // MetricSet has been created then Fetch will begin to be called periodically. func init() { - mb.Registry.MustAddMetricSet("sql", "query", New) + mb.Registry.MustAddMetricSet("sql", "query", New, + mb.WithHostParser(ParseDSN), + ) } // MetricSet holds any configuration or state information. It must implement @@ -33,9 +34,8 @@ func init() { // interface methods except for Fetch. type MetricSet struct { mb.BaseMetricSet - Driver string - Datasource string - Query string + Driver string + Query string } // New creates a new instance of the MetricSet. 
New is responsible for unpacking @@ -44,9 +44,8 @@ func New(base mb.BaseMetricSet) (mb.MetricSet, error) { cfgwarn.Beta("The sql query metricset is beta.") config := struct { - Driver string `config:"driver"` - Datasource string `config:"datasource"` - Query string `config:"sql_query"` + Driver string `config:"driver"` + Query string `config:"sql_query"` }{} if err := base.Module().UnpackConfig(&config); err != nil { @@ -56,7 +55,6 @@ func New(base mb.BaseMetricSet) (mb.MetricSet, error) { return &MetricSet{ BaseMetricSet: base, Driver: config.Driver, - Datasource: config.Datasource, Query: config.Query, }, nil } @@ -65,7 +63,7 @@ func New(base mb.BaseMetricSet) (mb.MetricSet, error) { // format. It publishes the event which is then forwarded to the output. In case // of an error set the Error field of mb.Event or simply call report.Error(). func (m *MetricSet) Fetch(report mb.ReporterV2) error { - db, err := sqlx.Open(m.Driver, m.Datasource) + db, err := sqlx.Open(m.Driver, m.HostData().URI) if err != nil { return errors.Wrap(err, "error opening connection") } diff --git a/x-pack/metricbeat/module/sql/query/query_integration_test.go b/x-pack/metricbeat/module/sql/query/query_integration_test.go new file mode 100644 index 000000000000..d7e04f23c406 --- /dev/null +++ b/x-pack/metricbeat/module/sql/query/query_integration_test.go @@ -0,0 +1,126 @@ +// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one +// or more contributor license agreements. Licensed under the Elastic License; +// you may not use this file except in compliance with the Elastic License. + +// +build integration + +package query + +import ( + "fmt" + "net" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + // Drivers + _ "github.com/go-sql-driver/mysql" + _ "github.com/lib/pq" + + "github.com/elastic/beats/libbeat/beat" + "github.com/elastic/beats/libbeat/tests/compose" + "github.com/elastic/beats/metricbeat/mb" + mbtest "github.com/elastic/beats/metricbeat/mb/testing" + "github.com/elastic/beats/metricbeat/module/mysql" + "github.com/elastic/beats/metricbeat/module/postgresql" +) + +type testFetchConfig struct { + Driver string + Query string + Host string + + Assertion func(t *testing.T, event beat.Event) +} + +func TestMySQL(t *testing.T) { + service := compose.EnsureUp(t, "mysql") + config := testFetchConfig{ + Driver: "mysql", + Query: "select table_schema, table_name, engine, table_rows from information_schema.tables where table_rows > 0;", + Host: mysql.GetMySQLEnvDSN(service.Host()), + Assertion: assertFieldNotContains("service.address", ":test@"), + } + + t.Run("fetch", func(t *testing.T) { + testFetch(t, config) + }) + + t.Run("data", func(t *testing.T) { + testData(t, config, "") + }) +} + +func TestPostgreSQL(t *testing.T) { + service := compose.EnsureUp(t, "postgresql") + host, port, err := net.SplitHostPort(service.Host()) + require.NoError(t, err) + + user := postgresql.GetEnvUsername() + password := postgresql.GetEnvPassword() + + config := testFetchConfig{ + Driver: "postgres", + Query: "select * from pg_stat_database", + Host: fmt.Sprintf("user=%s password=%s sslmode=disable host=%s port=%s", user, password, host, port), + Assertion: assertFieldNotContains("service.address", "password="+password), + } + + t.Run("fetch", func(t *testing.T) { + testFetch(t, config) + }) + + config = testFetchConfig{ + Driver: "postgres", + Query: "select * from pg_stat_database", + Host: fmt.Sprintf("postgres://%s:%s@%s:%s/?sslmode=disable", user, 
password, host, port), + Assertion: assertFieldNotContains("service.address", ":"+password+"@"), + } + + t.Run("fetch with URL", func(t *testing.T) { + testFetch(t, config) + }) + + t.Run("data", func(t *testing.T) { + testData(t, config, "./_meta/data_postgres.json") + }) +} + +func testFetch(t *testing.T, cfg testFetchConfig) { + m := mbtest.NewFetcher(t, getConfig(cfg)) + events, errs := m.FetchEvents() + require.Empty(t, errs) + require.NotEmpty(t, events) + t.Logf("%s/%s event: %+v", m.Module().Name(), m.Name(), events[0]) + + if cfg.Assertion != nil { + for _, event := range events { + cfg.Assertion(t, m.StandardizeEvent(event, mb.AddMetricSetInfo)) + } + } +} + +func testData(t *testing.T, cfg testFetchConfig, postfix string) { + m := mbtest.NewFetcher(t, getConfig(cfg)) + m.WriteEvents(t, postfix) +} + +func getConfig(cfg testFetchConfig) map[string]interface{} { + return map[string]interface{}{ + "module": "sql", + "metricsets": []string{"query"}, + "hosts": []string{cfg.Host}, + "driver": cfg.Driver, + "sql_query": cfg.Query, + } +} + +func assertFieldNotContains(field, s string) func(t *testing.T, event beat.Event) { + return func(t *testing.T, event beat.Event) { + value, err := event.GetValue(field) + assert.NoError(t, err) + require.NotEmpty(t, value.(string)) + require.NotContains(t, value.(string), s) + } +} diff --git a/x-pack/metricbeat/modules.d/aws.yml.disabled b/x-pack/metricbeat/modules.d/aws.yml.disabled index e3f727b15c5d..bfcb9d5a7865 100644 --- a/x-pack/metricbeat/modules.d/aws.yml.disabled +++ b/x-pack/metricbeat/modules.d/aws.yml.disabled @@ -20,12 +20,14 @@ - module: aws period: 5m metricsets: + - dynamodb - ebs - ec2 - elb + - lambda + - rds - sns - sqs - - rds - module: aws period: 12h metricsets: diff --git a/x-pack/metricbeat/modules.d/ibmmq.yml.disabled b/x-pack/metricbeat/modules.d/ibmmq.yml.disabled new file mode 100644 index 000000000000..93d77367a972 --- /dev/null +++ b/x-pack/metricbeat/modules.d/ibmmq.yml.disabled @@ -0,0 +1,31 @@ +# Module: ibmmq
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/master/metricbeat-module-ibmmq.html + +- module: ibmmq + metricsets: ['qmgr'] + period: 10s + hosts: ['localhost:9157'] + + # This module uses the Prometheus collector metricset; all + # the options for this metricset are also available here. + metrics_path: /metrics + + # The custom processor is responsible for filtering Prometheus metrics + # not strictly related to the IBM MQ domain, e.g. system load, process, + # metrics HTTP server. + processors: + - script: + lang: javascript + source: > + function process(event) { + var metrics = event.Get("prometheus.metrics"); + Object.keys(metrics).forEach(function(key) { + if (!(key.match(/^ibmmq_.*$/))) { + event.Delete("prometheus.metrics."
+ key); + } + }); + metrics = event.Get("prometheus.metrics"); + if (Object.keys(metrics).length == 0) { + event.Cancel(); + } + } diff --git a/x-pack/metricbeat/modules.d/istio.yml.disabled b/x-pack/metricbeat/modules.d/istio.yml.disabled new file mode 100644 index 000000000000..feeefdffe2bc --- /dev/null +++ b/x-pack/metricbeat/modules.d/istio.yml.disabled @@ -0,0 +1,7 @@ +# Module: istio +# Docs: https://www.elastic.co/guide/en/beats/metricbeat/master/metricbeat-module-istio.html + +- module: istio + metricsets: ["mesh"] + period: 10s + hosts: ["localhost:42422"] diff --git a/x-pack/metricbeat/modules.d/sql.yml.disabled b/x-pack/metricbeat/modules.d/sql.yml.disabled index ab839d64f490..31f15547123a 100644 --- a/x-pack/metricbeat/modules.d/sql.yml.disabled +++ b/x-pack/metricbeat/modules.d/sql.yml.disabled @@ -5,9 +5,8 @@ metricsets: - query period: 10s - hosts: ["localhost"] + hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"] driver: "postgres" - datasource: "user=myuser password=mypassword dbname=mydb sslmode=disable" sql_query: "select now()" diff --git a/x-pack/winlogbeat/winlogbeat.reference.yml b/x-pack/winlogbeat/winlogbeat.reference.yml index 767e5bd03d57..1ba54b7408d1 100644 --- a/x-pack/winlogbeat/winlogbeat.reference.yml +++ b/x-pack/winlogbeat/winlogbeat.reference.yml @@ -1263,6 +1263,14 @@ logging.files: #metrics.period: 10s #state.period: 1m +# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts` +# setting. You can find the value for this setting in the Elastic Cloud web UI. +#monitoring.cloud.id: + +# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username` +# and `monitoring.elasticsearch.password` settings. The format is `<user>:<pass>`. +#monitoring.cloud.auth: + +#================================ HTTP Endpoint ====================================== # Each beat can expose internal metrics through a HTTP endpoint. For security # reasons the endpoint is disabled by default. This feature is currently experimental.
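The sql module changes above drop the `datasource` option and pass the connection string directly in `hosts`; the new host parser in x-pack/metricbeat/module/sql/query/dsn.go then sanitizes that string before it is surfaced as `service.address`, which is what the integration tests assert on. The standalone sketch below illustrates the same sanitization idea; the helper name `sanitizeHost` and the sample connection strings are illustrative only and not part of the patch, while the two parsers it leans on (`net/url` and `github.com/go-sql-driver/mysql`) are the same ones used by the new file.

package main

import (
	"fmt"
	"net/url"

	"github.com/go-sql-driver/mysql"
)

// sanitizeHost mirrors the idea behind dsn.go's sanitize(): keep only the
// address part of a connection string so credentials never end up in
// service.address.
func sanitizeHost(host string) string {
	// URL-style DSNs (e.g. postgres://user:pass@host:5432/db): url.URL.Host
	// already excludes the userinfo section.
	if u, err := url.Parse(host); err == nil && u.Host != "" {
		return u.Host
	}
	// MySQL-style DSNs (user:pass@tcp(host:3306)/db): ParseDSN exposes the
	// address without credentials.
	if cfg, err := mysql.ParseDSN(host); err == nil {
		return cfg.Addr
	}
	// Unknown formats are fully redacted rather than risking a credential leak.
	return "(redacted)"
}

func main() {
	fmt.Println(sanitizeHost("postgres://myuser:mypassword@localhost:5432/?sslmode=disable")) // localhost:5432
	fmt.Println(sanitizeHost("myuser:mypassword@tcp(localhost:3306)/mydb"))                   // localhost:3306
	fmt.Println(sanitizeHost("user=myuser password=mypassword dbname=mydb sslmode=disable"))  // (redacted)
}

As the TODO in dsn.go notes, key=value PostgreSQL connection strings are not parsed yet, so they fall through to the fully redacted form instead of exposing the host address.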