diff --git a/content/documentation/_index.md b/content/documentation/_index.md index 80a5887..e99a71f 100644 --- a/content/documentation/_index.md +++ b/content/documentation/_index.md @@ -11,7 +11,7 @@ aliases = [ # Documentation -
Updated to version 1.0.8 (see the Changelog).
+
Updated to version 1.1.0 (see the Changelog).
Welcome to the IPFS Cluster documentation. The different sections of the documentation explain how to set up, start, and operate a Cluster. Operating a production IPFS Cluster can be a daunting task if you are not familiar with concepts around [IPFS](https://ipfs.io) and peer-to-peer networking ([libp2p](https://libp2p.io) in particular). We aim to provide comprehensive documentation and guides, but we are always open to improvements: documentation issues can be submitted to the [ipfs-cluster-website repository](https://github.com/ipfs-cluster/ipfs-cluster-website). diff --git a/content/documentation/collaborative/setup.md b/content/documentation/collaborative/setup.md index c847191..02cee61 100644 --- a/content/documentation/collaborative/setup.md +++ b/content/documentation/collaborative/setup.md @@ -41,7 +41,7 @@ Review the resulting configuration in your cluster peers: * `trusted_peers` should be set to the list of peer IDs in the original cluster that are under your control (or someone's trusted control). * You should have generated a cluster `secret`. It is OK to distribute this secret later. * Depending on your Cluster setup, who you expect to join the cluster, and the level of trust in those follower peers, you can set `replication_factor_min/max`. For the general use case, we recommend leaving them at `-1` (everything pinned everywhere). The main use case of collaborative clusters is to ensure wide distribution and replication of content. -* You can modify the `crdt/cluster_name` value to your liking, but remember to inform your followers about its value. +* You can modify the `crdt/cluster_name` value to your liking, but remember to set it accordingly in the configuration template you distribute to your followers. As long as different secrets are used, clusters with the same name will not conflict.
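Taken together, the settings reviewed above live in a handful of `service.json` fields. A minimal sketch of the relevant fragments (the peer ID, secret, and cluster name below are placeholder values, not real ones):

```json
{
  "cluster": {
    "secret": "<64-character-hex-secret>",
    "replication_factor_min": -1,
    "replication_factor_max": -1
  },
  "consensus": {
    "crdt": {
      "cluster_name": "my-collab-cluster",
      "trusted_peers": ["<peer-ID-of-a-trusted-peer>"]
    }
  }
}
```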
In principle, followers can use exactly the same configuration as your trusted peers, but we recommend tailoring a specific follower configuration as explained in the next section. @@ -62,9 +62,10 @@ Follower peers can technically use the same configuration as trusted peers but w * Set `peer_addresses` to the addresses of your trusted peers. These must be reachable whenever any follower peer starts, so ensure there is connectivity to your cluster. * Consider removing any configurations in the `api` section (`restapi`, `ipfsproxy`): follower peers should not be told what their peers' APIs should look like. Misconfiguring the APIs might open unwanted security holes. `ipfs-cluster-follow` overrides any `api` configuration by creating a secure, local-only endpoint. -* Reset `connection_manager/high_water` and `low_water` to sensible defaults if you modified them for your trusted peers configuration. +* Reset `connection_manager` and `resource_manager` settings to sensible defaults if you modified them for your trusted peers' configuration. * Set `follower_mode` to `true`: while non-trusted peers cannot do anything to the cluster pinset, they can still modify their own view of it, which may be very confusing. This setting (which `ipfs-cluster-follow` activates automatically) ensures useful error messages are returned when trying to perform write actions. * If you are running multiple collaborative clusters, or expect your users to do so, consider modifying the addresses defined in `listen_multiaddress` by changing the default ports to something else, hopefully unused. You can use `0` as well, so that peers choose a random free port during start, but this will cause peers to change ports on every restart (how important that is depends on your setup). +* Pubsub chatter increases with the number of peers and can become significant on clusters with hundreds of them.
You can reduce chatter by increasing the `metric_ttl` values in the `informers` configurations and also the `cluster.monitor_ping_interval`, at the cost of getting up-to-date metrics from followers less often. After all these changes, you will have a `service.json` file that is ready to be distributed to followers. Test it first: diff --git a/content/documentation/reference/configuration.md b/content/documentation/reference/configuration.md index b064a0e..dfe2e8f 100644 --- a/content/documentation/reference/configuration.md +++ b/content/documentation/reference/configuration.md @@ -41,7 +41,7 @@ The `service.json` file holds all the configurable options for the cluster peer
If present, the `CLUSTER_SECRET` environment variable is used when running `ipfs-cluster-service init` to set the cluster `secret` value.
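For example, a fresh secret of the right shape (32 random bytes, encoded as 64 hex characters, like the `secret` in the example configuration below) could be generated with a short Python one-off; this is just one illustrative way to produce it:

```python
import secrets

# Generate a random 32-byte value as a 64-character lowercase hex string,
# suitable for use as the cluster secret.
cluster_secret = secrets.token_hex(32)
print(cluster_secret)
```

You could then export it before initializing, e.g. `export CLUSTER_SECRET=$(python3 -c 'import secrets; print(secrets.token_hex(32))')` followed by `ipfs-cluster-service init`.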
-As an example, [this is a default `service.json` configuration file](/1.0.6_service.json). +As an example, [this is a default `service.json` configuration file](/1.1.0_service.json). The file looks like: @@ -130,6 +130,11 @@ The `leave_on_shutdown` option allows a peer to remove itself from the *peerset* |     `low_water` | `100` | The libp2p host will try to keep at least this many connections to other peers. | |     `grace_period` | `"2m0s"` | New connections will not be dropped for at least this period. | | `}` ||| +| `resource_manager {` | | A libp2p resource manager configuration object. Limits are scaled based on the given options; connections/streams are dropped when limits are reached, and such events are logged. | +|     `enabled` | `true` | Controls whether resource limitations are enabled or fully disabled. | +|     `memory_limit_bytes` | `0` | Controls the maximum amount of memory that the libp2p host may use. When set to `0`, the limit is set to 25% of the machine's memory, with a minimum of 1GiB. Note that this affects only libp2p resources, not the overall memory usage of the cluster peer. | +|     `file_descriptors_limit` | `0` | Controls the maximum number of file descriptors to use. When set to `0`, the limit is set to 50% of the file descriptors available to the process. | +| `}` ||| |`dial_peer_timeout` | `"3s"` | How long to wait when dialing a cluster peer before giving up. | |`state_sync_interval`| `"10m0s"` | Interval between automatic triggers of [`StateSync`](https://godoc.org/github.com/ipfs-cluster/ipfs-cluster#Cluster.StateSync). | |`pin_recover_interval`| `"1h0m0s"` | Interval between automatic triggers of [`RecoverAllLocal`](https://godoc.org/github.com/ipfs-cluster/ipfs-cluster#Cluster.RecoverAllLocal). This will automatically re-try pin and unpin operations that failed.
| diff --git a/static/1.1.0_service.json b/static/1.1.0_service.json new file mode 100644 index 0000000..72b15ed --- /dev/null +++ b/static/1.1.0_service.json @@ -0,0 +1,268 @@ +{ + "cluster": { + "peername": "peername", + "secret": "044081064dfc37b20b44d7b362e0eafe916d681abac420e041a8db605af11e5c", + "leave_on_shutdown": false, + "listen_multiaddress": [ + "/ip4/0.0.0.0/tcp/9096", + "/ip4/0.0.0.0/udp/9096/quic" + ], + "announce_multiaddress": [], + "no_announce_multiaddress": [], + "enable_relay_hop": true, + "connection_manager": { + "high_water": 400, + "low_water": 100, + "grace_period": "2m0s" + }, + "resource_manager": { + "enabled": true, + "memory_limit_bytes": 0, + "file_descriptors_limit": 0 + }, + "dial_peer_timeout": "3s", + "state_sync_interval": "5m0s", + "pin_recover_interval": "12m0s", + "replication_factor_min": -1, + "replication_factor_max": -1, + "monitor_ping_interval": "15s", + "peer_watch_interval": "5s", + "mdns_interval": "10s", + "pin_only_on_trusted_peers": false, + "pin_only_on_untrusted_peers": false, + "disable_repinning": true, + "peer_addresses": [] + }, + "consensus": { + "crdt": { + "cluster_name": "ipfs-cluster", + "trusted_peers": [ + "*" + ], + "batching": { + "max_batch_size": 0, + "max_batch_age": "0s" + }, + "repair_interval": "1h0m0s" + } + }, + "api": { + "ipfsproxy": { + "listen_multiaddress": "/ip4/127.0.0.1/tcp/9095", + "node_multiaddress": "/ip4/127.0.0.1/tcp/5001", + "log_file": "", + "read_timeout": "0s", + "read_header_timeout": "5s", + "write_timeout": "0s", + "idle_timeout": "1m0s", + "max_header_bytes": 4096 + }, + "pinsvcapi": { + "http_listen_multiaddress": "/ip4/127.0.0.1/tcp/9097", + "read_timeout": "0s", + "read_header_timeout": "5s", + "write_timeout": "0s", + "idle_timeout": "2m0s", + "max_header_bytes": 4096, + "basic_auth_credentials": null, + "http_log_file": "", + "headers": {}, + "cors_allowed_origins": [ + "*" + ], + "cors_allowed_methods": [ + "GET" + ], + "cors_allowed_headers": [], + 
"cors_exposed_headers": [ + "Content-Type", + "X-Stream-Output", + "X-Chunked-Output", + "X-Content-Length" + ], + "cors_allow_credentials": true, + "cors_max_age": "0s" + }, + "restapi": { + "http_listen_multiaddress": "/ip4/127.0.0.1/tcp/9094", + "read_timeout": "0s", + "read_header_timeout": "5s", + "write_timeout": "0s", + "idle_timeout": "2m0s", + "max_header_bytes": 4096, + "basic_auth_credentials": null, + "http_log_file": "", + "headers": {}, + "cors_allowed_origins": [ + "*" + ], + "cors_allowed_methods": [ + "GET" + ], + "cors_allowed_headers": [], + "cors_exposed_headers": [ + "Content-Type", + "X-Stream-Output", + "X-Chunked-Output", + "X-Content-Length" + ], + "cors_allow_credentials": true, + "cors_max_age": "0s" + } + }, + "ipfs_connector": { + "ipfshttp": { + "node_multiaddress": "/ip4/127.0.0.1/tcp/5001", + "connect_swarms_delay": "30s", + "ipfs_request_timeout": "5m0s", + "pin_timeout": "2m0s", + "unpin_timeout": "3h0m0s", + "repogc_timeout": "24h0m0s", + "informer_trigger_interval": 0 + } + }, + "pin_tracker": { + "stateless": { + "concurrent_pins": 10, + "priority_pin_max_age": "24h0m0s", + "priority_pin_max_retries": 5 + } + }, + "monitor": { + "pubsubmon": { + "check_interval": "15s" + } + }, + "allocator": { + "balanced": { + "allocate_by": [ + "tag:group", + "freespace" + ] + } + }, + "informer": { + "disk": { + "metric_ttl": "30s", + "metric_type": "freespace" + }, + "pinqueue": { + "metric_ttl": "30s", + "weight_bucket_size": 100000 + }, + "tags": { + "metric_ttl": "30s", + "tags": { + "group": "default" + } + } + }, + "observations": { + "metrics": { + "enable_stats": false, + "prometheus_endpoint": "/ip4/127.0.0.1/tcp/8888", + "reporting_interval": "2s" + }, + "tracing": { + "enable_tracing": false, + "jaeger_agent_endpoint": "/ip4/0.0.0.0/udp/6831", + "sampling_prob": 0.3, + "service_name": "cluster-daemon" + } + }, + "datastore": { + "pebble": { + "pebble_options": { + "cache_size_bytes": 1073741824, + "bytes_per_sync": 1048576, + 
"disable_wal": false, + "flush_delay_delete_range": 0, + "flush_delay_range_key": 0, + "flush_split_bytes": 4194304, + "format_major_version": 16, + "l0_compaction_file_threshold": 750, + "l0_compaction_threshold": 4, + "l0_stop_writes_threshold": 12, + "l_base_max_bytes": 134217728, + "max_open_files": 1000, + "max_concurrent_compactions": 5, + "mem_table_size": 67108864, + "mem_table_stop_writes_threshold": 20, + "read_only": false, + "wal_bytes_per_sync": 0, + "levels": [ + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 4194304 + }, + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 8388608 + }, + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 16777216 + }, + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 33554432 + }, + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 67108864 + }, + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 134217728 + }, + { + "block_restart_interval": 16, + "block_size": 4096, + "block_size_threshold": 90, + "compression": 2, + "filter_type": 0, + "filter_policy": 10, + "index_block_size": 4096, + "target_file_size": 268435456 + } + ] + } + } + } 
+}
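The per-level `target_file_size` values in the `pebble` options above follow a simple doubling progression across the seven levels, starting at 4 MiB and ending at 256 MiB; a quick illustrative check:

```python
# Pebble's per-level target_file_size doubles at each of the 7 levels,
# starting at 4 MiB (4194304 bytes) and ending at 256 MiB (268435456 bytes).
MIB = 1024 * 1024
sizes = [4 * MIB * 2**level for level in range(7)]
print(sizes)
```

These match the `target_file_size` entries in the `levels` array of the example configuration.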