
Add mongobeat to list of community beats #3156

Merged
merged 1 commit into from
Dec 9, 2016

Conversation

scottcrespo

@scottcrespo scottcrespo commented Dec 9, 2016

mongobeat

Mongobeat discovers instances in a mongo cluster and can be configured to ship multiple document types - from the commands db.stats() and db.serverStatus()

Still an early work in progress as I'm learning more about the beats platform - so far I really enjoy it! I've done my best to follow the contributor guide and conventions. Please let me know if you have any feedback or would like me to make some changes.

@elasticsearch-release

Jenkins standing by to test this. If you aren't a maintainer, you can ignore this comment. Someone with commit access, please review this and clear it for Jenkins to run.

1 similar comment from @elasticmachine

@ruflin ruflin merged commit 77ef379 into elastic:master Dec 9, 2016
@ruflin
Contributor

ruflin commented Dec 9, 2016

@scottcrespo Thanks a lot for creating a beat. I'm curious how it compares to our mongodb module in Metricbeat? It seems you have some additional db stats? Perhaps it could make sense to merge the two? https://github.com/elastic/beats/tree/master/metricbeat/module/mongodb

@scottcrespo
Author

scottcrespo commented Dec 9, 2016

Hey @ruflin,

Thanks for merging!

I think the most sensible things to merge from mongobeat are the additional documents returned by MongoDB. For starters, I'd be happy to open a new issue/proposal and submit code to metricbeat to collect db stats.

There are also some other differences between the two, which I'll briefly describe. If any are of interest to you, please let me know, and I can submit more detailed proposals for you to consider.


DB Stats

As you've mentioned, the data difference (at this time) is that mongobeat also collects db stats.

For each database in a mongo instance, an event is published after calling db.stats(). Below is an example document.

{
    "db_stats": {
      "avg_obj_size": 0,
      "collections": 4,
      "data_size": 1168,
      "db": "admin",
      "file_size": 67108864,
      "index_size": 24528,
      "indexes": 3,
      "num_extents": 4,
      "objects": 11,
      "ok": 1,
      "storage_size": 28672
    },
    "type": "mongobeat.db_stats"
}
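The event above can be produced by reshaping the raw db.stats() reply. Here is a minimal sketch of that reshaping (the helper name and structure are mine, not mongobeat's actual code); the camelCase source keys are the field names MongoDB itself returns from db.stats(), and the snake_case target keys follow the example document:

```go
package main

import "fmt"

// buildDBStatsEvent reshapes a raw db.stats() reply (as a driver would return
// it, decoded into a map) into the flat event shown above.
func buildDBStatsEvent(raw map[string]interface{}) map[string]interface{} {
	return map[string]interface{}{
		"db_stats": map[string]interface{}{
			"avg_obj_size": raw["avgObjSize"],
			"collections":  raw["collections"],
			"data_size":    raw["dataSize"],
			"db":           raw["db"],
			"file_size":    raw["fileSize"],
			"index_size":   raw["indexSize"],
			"indexes":      raw["indexes"],
			"objects":      raw["objects"],
			"ok":           raw["ok"],
			"storage_size": raw["storageSize"],
		},
		"type": "mongobeat.db_stats",
	}
}

func main() {
	raw := map[string]interface{}{"db": "admin", "collections": 4, "dataSize": 1168}
	fmt.Println(buildDBStatsEvent(raw)["type"]) // prints "mongobeat.db_stats"
}
```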

Additional Mongo Document Types

I plan to add some additional metrics that can be optionally collected, such as collStats and shardConnPoolStats.

So, I can propose and submit for the other mongo document types as they're implemented.

Concurrency Patterns

I haven't looked too deeply into the concurrency patterns used by Metricbeat. But essentially, mongobeat monitors each document type (i.e. serverStatus and dbStats) on a separate goroutine, so one call can't block the other.

Centralized Mode

mongobeat can optionally run as a centralized monitoring service. It connects to "seed instances," discovers additional nodes in the cluster, and establishes direct connections with each instance. Then, each node in the cluster is monitored on its own goroutine so a non-responsive/latent instance doesn't block the reporting of others.

Soon, I'm going to implement periodic discovery, so every n seconds mongobeat will search for additional nodes in the cluster.
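A rough shape of that design, with the discovery step stubbed out so the sketch runs anywhere (discover, monitor, and the addresses are illustrative stand-ins, not mongobeat's real API):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// discover stands in for cluster discovery: starting from the seed addresses,
// the real code would ask each seed about its replica-set members and return
// every node it finds. Stubbed here with a fixed answer.
func discover(seeds []string) []string {
	return append(seeds, "mongo3:27017") // pretend one extra member was found
}

// monitor polls a single node once; the real beat would loop on a ticker.
func monitor(node string, events chan<- string) {
	events <- "polled " + node
}

func main() {
	nodes := discover([]string{"mongo1:27017", "mongo2:27017"})
	events := make(chan string, len(nodes))
	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		// one goroutine per node: an unresponsive node can't block the others
		go func(node string) {
			defer wg.Done()
			monitor(node, events)
		}(n)
	}
	wg.Wait()
	close(events)
	var got []string
	for e := range events {
		got = append(got, e)
	}
	sort.Strings(got)
	fmt.Println(got)
}
```

Periodic discovery would wrap the discover call in a time.Ticker loop so new members are picked up every n seconds.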

Minor-Version Compatibility

mongobeat is attempting to support minor-version compatibility across Mongo 3.x (i.e. 3.0, 3.2, 3.4). Thus, each minor version has the same document schema in Elasticsearch, and Kibana users can compare data from different mongo versions without data discrepancies. It will also allow for easier sharing of visualizations and dashboards.

Mongo 2.0 Support

I'll probably add support for Mongo 2.x data schemas at some point. It would be an additional benefit if 2.x data could be transformed into a common format with 3.x documents for complete backwards compatibility.

Major-Version Compatibility

I've created an explicit modeling layer with Golang structs instead of using the schema package. My intuition is that once 3.x and 2.x are fully implemented, we may be able to use struct composition to create a cross-major-version compatible data schema.

@scottcrespo scottcrespo deleted the mongobeat-listing branch December 9, 2016 17:32
@ruflin
Contributor

ruflin commented Dec 12, 2016

@scottcrespo Thanks for the detailed elaboration of your thoughts. Here some answers:

  • DB Stats: It would be awesome if you could contribute the db_stats module. We would be more than happy to have it as part of Metricbeat.
  • collStats and shardConnPoolStats seem definitely worth two additional metricsets for the MongoDB module.
  • Concurrency: There is one goroutine per metricset + host, so they will not conflict with each other: https://github.com/elastic/beats/blob/master/metricbeat/beater/module.go#L131
  • Centralized mode: We recommend installing Metricbeat directly on the edge nodes, alongside MongoDB for example, so each instance of a MongoDB cluster is monitored by its own Metricbeat instance. Centralized monitoring would be possible by defining multiple hosts, but it does not do a dynamic lookup.
  • Version compatibility: We also intend to make the schema compatible across versions. One tricky thing we have encountered so far in MongoDB is that the schema changes not only with the version but also with the storage engine used: Metricbeat MongoDB module improvements #2999
  • 2.x Support: I'm not aware of the user bases of the different MongoDB versions, but as usual I'm quite sure there are lots of people still on 2.x. So having support for it would be nice.
  • Major-version compatibility: Can you elaborate on / link to the way you used the modeling? Sounds very interesting.

dedemorton pushed a commit to dedemorton/beats that referenced this pull request Dec 13, 2016
ruflin pushed a commit that referenced this pull request Dec 13, 2016
@scottcrespo
Author

@ruflin

Thanks for all the feedback!

DB Stats

I'll open an issue and start working on DB Stats for metricbeat.

collStats and shardConnPoolStats

These will follow after DbStats.

Concurrency

Sweet! We implemented the same pattern! Great validation of our approach.

Centralized Mode:

Completely agree the proper way to use any agent is to collocate the agent with the thing you want to monitor. The challenge comes with managed hosting services where you may not have direct access to the host(s) to install a beat, nor know what nodes are in the cluster without discovery. Especially for databases, more deployments are moving to managed platforms.

But, whether Beats wants to go after this user segment is another question. I'm also new to the community and not really up to speed on the overall strategy.

Worth exploring further? I haven't used a lot of Mongo as a Service platforms, but here are a few:

MongoDB Atlas
Rackspace/Object Rocket

Version Compatibility

According to the docs, there are 3 storage engines. I'm not sure whether the in-memory engine is reflected in serverStatus output, but it's worth looking into. I think output associated with the different storage engines will result in inevitable data discrepancies, so there isn't much to be done there.

2.x

I still have to use 2.x for some projects, sadly. So I'm one of those users.

Major Version Compatibility

Mongobeat Model Layer

It's just intuition at this point, but as we try to transform documents into a standardized schema for Elasticsearch, an explicit modeling layer might help make the transformations via interfaces, struct composition, and instance methods. These may or may not be mutually exclusive with the schema package.

For example, I could have an interface Transformer and a parent struct ServerStatusV3 with an anonymous struct member AssertsV3. The actual beat "controller" would marshal Mongo's output to either ServerStatusV3 for Mongo 3.x instances or ServerStatusV2 for Mongo 2.x instances.

Model Layer

type Transformer interface {
  GetStandardSchema() map[string]interface{}
}

// ServerStatusV3 implements the Transformer interface
type ServerStatusV3 struct {
  // AssertsV3 is an anonymous (embedded) struct member
  AssertsV3
  // ... additional sections embedded the same way
}

func (s *ServerStatusV3) GetStandardSchema() map[string]interface{} {
  result := map[string]interface{}{
    "asserts": s.AssertsV3.GetStandardSchema(),
    // ... same for additional fields
  }
  return result
}

// AssertsV3 implements the Transformer interface
type AssertsV3 struct {
  field1 uint
  field2 uint
  // ...
}

func (a *AssertsV3) GetStandardSchema() map[string]interface{} {
  // transformation logic goes here
  return nil
}


"Controller" Layer

func (bt *Mongobeat) Run(b *beat.Beat) error {
  // ...
  mongoOutputRaw := makeDBCall()

  var results Transformer
  if versionMajor == "2" {
    // marshal mongoOutputRaw into a ServerStatusV2
  } else {
    // marshal mongoOutputRaw into a ServerStatusV3
  }

  resultsMapStrIface := results.GetStandardSchema()

  sendEvent(resultsMapStrIface)
}

@ruflin
Contributor

ruflin commented Dec 15, 2016

  • Contributions: 🎉 🎈
  • Hosted solutions: Metricbeat does already support them; it is just not the recommended way. If someone provides a hosted solution, I would hope they also provide the monitoring with it, so normally the client has no need for it, but sometimes it is still good to have ;-)
  • Schema: Our schema implementation applies in the generic cases. There are still cases it can't handle. We will have to add logic for versions pretty soon, but we did not add it in the first version as it adds complexity. Good to see you are also thinking about this problem and already have some solutions.

@scottcrespo
Author

Contributions

Forum: https://discuss.elastic.co/t/metricbeat-mongodb-report-db-stats/69218

Issue: #3205

=)

Hosted Solutions

I agree with what you're saying: third-party hosted solutions typically provide analytics for MongoDB. And an agent should be collocated with the system it monitors.

Schema

Agreed. It makes a lot of sense for an early version not to take on too much complexity.

To implement db.stats() for Metricbeat's MongoDB module, I plan on using the schema package and focusing on the generic case (to maintain consistency). Backwards compatibility (i.e. Mongo 2.x) could follow in a subsequent release.
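To illustrate the generic case, here is a self-contained sketch of that declarative mapping style. It only imitates the idea behind the Metricbeat schema package with plain structs; conv, apply, toInt, and the field list are my stand-ins, not the package's real API.

```go
package main

import "fmt"

// conv describes one field mapping: the snake_case target key, the source key
// in the raw db.stats() reply, and a converter for the value.
type conv struct {
	key, src string
	fn       func(interface{}) interface{}
}

// toInt normalizes the numeric types a BSON decoder may produce.
func toInt(v interface{}) interface{} {
	switch n := v.(type) {
	case int:
		return int64(n)
	case float64:
		return int64(n)
	}
	return int64(0)
}

// dbStatsSchema declares the mapping once; apply executes it generically.
var dbStatsSchema = []conv{
	{"collections", "collections", toInt},
	{"data_size", "dataSize", toInt},
	{"index_size", "indexSize", toInt},
}

func apply(schema []conv, raw map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for _, c := range schema {
		if v, ok := raw[c.src]; ok { // missing source keys are simply skipped
			out[c.key] = c.fn(v)
		}
	}
	return out
}

func main() {
	raw := map[string]interface{}{"collections": 4, "dataSize": 1168.0, "indexSize": 24528}
	fmt.Println(apply(dbStatsSchema, raw))
}
```

The appeal of this style is that adding a field is a one-line change to the declaration rather than new transformation code.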

Conclusion

Looking forward to speaking with you in the new issues I've opened for db.stats!

suraj-soni pushed a commit to suraj-soni/beats that referenced this pull request Dec 15, 2016
monicasarbu pushed a commit that referenced this pull request Dec 19, 2016
* Rewrite elasticsearch connection URL (#3058)
* Fix metricbeat service times-out at startup (#3056)
* remove init collecting of processes
* add changelog entry

* Clarify that json.message_key is optional in Filebeat (#3055)

I reordered the options based on importance (I put the optional config setting at the end).

And I changed the wording to further clarify that the `json.message_key` setting is optional.

Fixes #2864

* Document add_cloud_metadata processor (#3054)

Fixes #2791

* Remove process.GetProcStatsEvents as not needed anymore (#3066)

* Fix testing for 2x releases (#3057)

* Update docker files to the last major with the most recent minor and bugfix version
* Renamed files to Dockerfile-2x to not have to be renamed every time a new bugfix is released
* Remove scripts and config files which are not needed anymore

To run testsuite for 2x releases, run: `TESTING_ENVIRONMENT=2x make testsuite`

* Remove old release notes files from packetbeat docs (#3067)

* Update go-ucfg (#3045)

- Update go-ucfg
- add support for parsing lists/dictionaries from environment variables and via
  `-E` flag

* Parse elasticsearch URL before logging it (#3075)

* Fix the total CPU time in the Docker dashboard (#3085) (#3086)

Part of #2629. The name of the field was changed, but not in the dashboard.
(cherry picked from commit e271d9f)

* Switch partition metricset from client to broker (#3029)

Update kafka broker query

- Switch partition metricset from client to broker
- on connect try to find the broker id (address must match advertised host).
- check broker is leader before querying offsets
- query offsets for all replicas
- remove 'isr' from event, and replace with boolean flag `insync_replica`
- replace `replicas` from event with per event `replica`-id
- update sarama to get offset per replica id

* Make error fields optional in partition event (#3089)

* Update data.json

* Make it clear in the docs that publish_async is still experimental (#3096)

Remove example for publish_async from the docs

* Remove metadata prefix from config as not needed (#3095)

* Remove left over string in template test (#3102)

* Fix typo in Dockerfile comment (#3105)

* Document batch_read_size is experimental in Winlogbeat

* Add benchmark test for batch_read_size in Winlogbeat (#3107)

* Fix ES 2.x integration test (#3115)

There was a test that was loading a mock template, and this template
was assuming 5.x.

* Pass `--always-copy` to virtualenv (#3082)

virtualenv creates symlinks so `make setup` fails when ran on a network mounted
fs. `--always-copy` copies files to the destination dir rather than symlinking.

* Add project prefix for composer environment (#3116)

This prefix is needed to run tests with different environments in parallel so one does not affect the other. This way, 2x and snapshot builds should be able to coexist.

* Reduce allocations in UTF16 conversion (#3113)

When decoding a UTF16 string contained in a buffer larger than just the string, more space was allocated than required.

```
BenchmarkUTF16BytesToString/simple_string-4         	 2000000	       846 ns/op	     384 B/op	       3 allocs/op
BenchmarkUTF16BytesToString/larger_buffer-4         	 2000000	       874 ns/op	     384 B/op	       3 allocs/op
BenchmarkUTF16BytesToString_Original/simple_string-4         	 2000000	       840 ns/op	     384 B/op	       3 allocs/op
BenchmarkUTF16BytesToString_Original/larger_buffer-4         	 1000000	      3055 ns/op	    8720 B/op	       3 allocs/op
```

```
PS C:\Gopath\src\github.com\elastic\beats\winlogbeat> go test -v github.com/elastic/beats/winlogbeat/eventlog -run ^TestBenchmarkBatchReadSize$ -benchmem -benchtime 10s -benchtest
=== RUN   TestBenchmarkBatchReadSize
--- PASS: TestBenchmarkBatchReadSize (68.04s)
        bench_test.go:100: batch_size=10, total_events=20000, batch_time=5.682627ms, events_per_sec=1759.7494961397256, bytes_alloced_per_event=44 kB, total_allocs=4923840
        bench_test.go:100: batch_size=100, total_events=30000, batch_time=53.850879ms, events_per_sec=1856.9799018508127, bytes_alloced_per_event=44 kB, total_allocs=7354285
        bench_test.go:100: batch_size=500, total_events=25000, batch_time=271.118774ms, events_per_sec=1844.2101689350366, bytes_alloced_per_event=43 kB, total_allocs=6125665
        bench_test.go:100: batch_size=1000, total_events=30000, batch_time=558.03918ms, events_per_sec=1791.9888707455987, bytes_alloced_per_event=43 kB, total_allocs=7350324
PASS
ok      github.com/elastic/beats/winlogbeat/eventlog    68.095s

PS C:\Gopath\src\github.com\elastic\beats\winlogbeat> go test -v github.com/elastic/beats/winlogbeat/eventlog -run ^TestBenchmarkBatchReadSize$ -benchmem -benchtime 10s -benchtest
=== RUN   TestBenchmarkBatchReadSize
--- PASS: TestBenchmarkBatchReadSize (71.85s)
        bench_test.go:100: batch_size=10, total_events=30000, batch_time=5.713873ms, events_per_sec=1750.1264028794478, bytes_alloced_per_event=25 kB, total_allocs=7385820
        bench_test.go:100: batch_size=100, total_events=30000, batch_time=52.454484ms, events_per_sec=1906.4147118480853, bytes_alloced_per_event=24 kB, total_allocs=7354318
        bench_test.go:100: batch_size=500, total_events=25000, batch_time=260.56659ms, events_per_sec=1918.8952812407758, bytes_alloced_per_event=24 kB, total_allocs=6125688
        bench_test.go:100: batch_size=1000, total_events=30000, batch_time=530.468816ms, events_per_sec=1885.124949550286, bytes_alloced_per_event=24 kB, total_allocs=7350360
PASS
ok      github.com/elastic/beats/winlogbeat/eventlog    71.908s
```

* Fix for errno 1734 when calling EvtNext (#3112)

When reading a batch of large event log records the Windows function
EvtNext returns errno 1734 (0x6C6) which is RPC_S_INVALID_BOUND ("The
array bounds are invalid."). This seems to be a bug in Windows because
there is no documentation about this behavior.

This fix handles the error by resetting the event log subscription
handle (so events are not lost) and then retries the EvtNext call
with maxHandles/2.

Fixes #3076

* Fetch container stats in parallel (#3127)

Currently fetching container stats is very slow, as each request takes up to 2 seconds. To improve the fetching time when lots of containers are around, this makes the requests in parallel. The main downside is that this opens lots of connections. This fix should only be temporary until the bulk api is available: moby/moby#25361

* Fix heartbeat not accepting `mode` parameter (#3128)

* Remove fixed container names as not needed (#3122)

Add beat name to project namespace

* This makes sure different beats environments do not affect each other, for example when Kafka is used
* It also allows to run the testsuites of all the beats in parallel

Introduce `stop-environment` command to stop all containers

* Add doc for decode_json_fields processor (#3110)

* Add doc for decode_json_fields processor
* Use changed param names
* Add example of decode_json_fields processor
* Fix intro language about processors

* Adding AmazonBeat to community beats (#3125)

I created a basic version of amazonbeat, which reads data from an amazon product periodically. This beat does not yet publish to elasticsearch.

* Reuse a byte buffer for holding XML (#3118)

Previously the data was read into a []byte encoded as UTF16. Then that
data was converted to []uint16 so that we can use utf16.Decode(). Then
the []rune slice was converted to a string which did another data copy.
The XML was unmarshalled from the string.

This PR changes the code to convert the UTF16 []byte directly to UTF8 and
puts the result into a reusable bytes.Buffer. The XML is then unmarshalled
directly from the data in buffer.

```
BenchmarkUTF16ToUTF8-4   	 2000000	      1044 ns/op        4 B/op      1 allocs/op
```

```
git checkout 6ba7700
PS > go test github.com/elastic/beats/winlogbeat/eventlog -run TestBenc -benchtest -benchtime 10s -v
=== RUN   TestBenchmarkBatchReadSize
--- PASS: TestBenchmarkBatchReadSize (67.89s)
        bench_test.go:100: batch_size=10, total_events=30000, batch_time=5.119626ms, events_per_sec=1953.2676801000696, bytes_alloced_per_event=44 kB, total_allocs=7385952
        bench_test.go:100: batch_size=100, total_events=30000, batch_time=51.366271ms, events_per_sec=1946.802795943665, bytes_alloced_per_event=44 kB, total_allocs=7354448
        bench_test.go:100: batch_size=500, total_events=25000, batch_time=250.974356ms, events_per_sec=1992.2354138842775, bytes_alloced_per_event=43 kB, total_allocs=6125812
        bench_test.go:100: batch_size=1000, total_events=30000, batch_time=514.796113ms, events_per_sec=1942.5166094834128, bytes_alloced_per_event=43 kB, total_allocs=7350550
PASS
ok      github.com/elastic/beats/winlogbeat/eventlog    67.950s

git checkout 833a806 (#3113)
PS > go test github.com/elastic/beats/winlogbeat/eventlog -run TestBenc -benchtest -benchtime 10s -v
=== RUN   TestBenchmarkBatchReadSize
--- PASS: TestBenchmarkBatchReadSize (65.69s)
        bench_test.go:100: batch_size=10, total_events=30000, batch_time=4.858277ms, events_per_sec=2058.3429063431336, bytes_alloced_per_event=25 kB, total_allocs=7385847
        bench_test.go:100: batch_size=100, total_events=30000, batch_time=51.612952ms, events_per_sec=1937.49816906423, bytes_alloced_per_event=24 kB, total_allocs=7354362
        bench_test.go:100: batch_size=500, total_events=25000, batch_time=241.713826ms, events_per_sec=2068.561853801445, bytes_alloced_per_event=24 kB, total_allocs=6125757
        bench_test.go:100: batch_size=1000, total_events=30000, batch_time=494.961643ms, events_per_sec=2020.3585755431961, bytes_alloced_per_event=24 kB, total_allocs=7350474
PASS
ok      github.com/elastic/beats/winlogbeat/eventlog    65.747s

This PR (#3118)
PS > go test github.com/elastic/beats/winlogbeat/eventlog -run TestBenc -benchtest -benchtime 10s -v
=== RUN   TestBenchmarkBatchReadSize
--- PASS: TestBenchmarkBatchReadSize (65.80s)
        bench_test.go:100: batch_size=10, total_events=30000, batch_time=4.925281ms, events_per_sec=2030.341009985014, bytes_alloced_per_event=14 kB, total_allocs=7295817
        bench_test.go:100: batch_size=100, total_events=30000, batch_time=48.976134ms, events_per_sec=2041.8108134055658, bytes_alloced_per_event=14 kB, total_allocs=7264329
        bench_test.go:100: batch_size=500, total_events=25000, batch_time=250.314316ms, events_per_sec=1997.4886294557757, bytes_alloced_per_event=14 kB, total_allocs=6050719
        bench_test.go:100: batch_size=1000, total_events=30000, batch_time=499.861923ms, events_per_sec=2000.5524605641945, bytes_alloced_per_event=14 kB, total_allocs=7260400
PASS
ok      github.com/elastic/beats/winlogbeat/eventlog    65.856s
```

* Fix make package for community beats (#3094)

gopkg.in needs to be copied from the vendor directory of libbeat in the vendor directory

* Auto generate modules list (#3131)

This is to ensure no modules are forgotten in the future

* Remove duplicated enabled entry from redis config (#3132)

* Remove --always-copy from virtualenv and make it a param (#3136)

In #3082 `--always-copy` was introduced. This caused build issues on some operating systems. This PR reverts the change but makes `VIRTUALENV_PARAMS` a variable which can be passed to the Makefile. This allows anyone to set `--always-copy` if needed.

* Adjust script to generate fields of type geo_point (#3147)

* Fix for broken dashboard dependency in Cassandra Dashboard (#3146)

The Cassandra dashboard was linking to the wrong Cassandra visualization. Some leftovers with `:` in the names were still inside.

Closes #3140

* Fix quotes (#3142)

* Fix a print statement to be python 3 compliant (#3144)

* Remove -prerelease from the repo names (#3153)

* Add mongobeat to list of community beats (#3156)

Mongobeat discovers instances in a mongo cluster and can be configured to ship multiple document types - from the commands db.stats() and db.serverStatus()

* Update to most recent latest builds (#3161)

* Merge snapshot and latest build for Logstash into 1 docker file

* Pass certificate options to import dashboards script (#3139)

* Pass certificate options to import dashboards script

-cert for client certificate
-key for client certificate key
-cacert for certificate authority

* Add -insecure flag to import_dashboards (#3163)

* Improve speed and stability of CI builds (#3162)

Loading and creating docker images takes quite a bit of time on the travis builds. Especially calls like apt-get update and install take lots of time and bandwidth and fail from time to time, as a host is not available.

Following actions were taken:

* Fake Kibana container is now based on alpine
* Redis stunnel container was also switched to alpine

* Add enabled config for prospectors (#3157)

The enabled config makes it easy to enable and disable a specific prospector. This is consistent with Metricbeat, where each module has an enabled config. By default, enabled is set to true.

* Prototype Filebeat modules implementation (#3158)

Contains the Nginx module, including the fields.yml and several
pipelines.

* Add edits for docker module docs (#3176)

* Restructure and edit processors content (#3160)

* Cleaned up Changelog in master (#3181)

Added the 5.1.0 and 5.1.1 sections, removed duplicates.

* metricbeat: enhance kafka broker matching (#3129)

- compare broker names to hostname
- try to lookup metricbeat host machine fqdn and compare to broker name
- compare all ips of local machine with resolved broker name ips

* Filebeat MySQL module (#3171)

* Contains slowlog and errors filesets
* Test files for two mysql versions (5.5 and 5.7)
* Add support for built-in variables (e.g. `builtin.hostname`)
* Contains a sample Kibana dashboard

Part of #3159.

* Fix #3167 change ownership of files in build/ (#3168)

Add a new Makefile rule: fix-permissions

fix-permissions runs a docker container that changes the ownership
of all files from root to the user that runs the Makefile

* Updating documentation to add udplogbeat (#3190)

* Packer customize package info (#3188)

* packer: Enable overriding of vendor and license
* packer: customize URL of documentation link
* packer: location of readme.md.j2 folder can be specified with PACKER_TEMPLATES_DIR

* Filebeat syslog module (#3191)

* Basic parsing of syslog fields
* Supports multiline messages if the lines after the first one start
  with a space.
* Contains a simple Kibana dashboard

* Deprecate filters option in Metricbeat (#3173)

* Add support for multiple paths per fileset (#3195)

We generally need more than one path per OS, because the logs location
is not always the same. For example, depending on the Linux distribution
and how you installed it, MySQL can have its error logs in a number of
default "paths". The solution is to configure them all, which means that
Filebeat might try to access nonexistent folders.

This also improves the python prototype to accept multiple modules and
to accept namespaced parameters. E.g.:

./filebeat.py --modules=nginx,syslog -M nginx.access.paths=...

* case insensitive hostname comparison in kafka broker matching (#3193)

- re-use common.LocalIPAddrs in partition module for resolving IPs
- add missing net.IPAddr type switch to common.LocalIPAddrs
- update matching to extract addresses early on using strings.ToLower
  => ensure case insensitive matching by lowercasing

* Adds a couchbase module for metricbeat (#3081)

* Export cpu cores (#3192)

* Fix: Request headers with split_cookies enabled (#3065)

* Add 3140 to changelog (#3207) (#3208)

(cherry picked from commit 0f4103f)
dedemorton pushed a commit to dedemorton/beats that referenced this pull request Dec 21, 2016
leweafan pushed a commit to leweafan/beats that referenced this pull request Apr 28, 2023
leweafan pushed a commit to leweafan/beats that referenced this pull request Apr 28, 2023