Allow --metadata-ttl without --cache and set default with --cache to 60s (#855)

* Allow --metadata-ttl without --cache and set default with --cache to 60s

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>

* PR feedback

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>

* Show 0 TTL warning in background mode

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>

* Update docs and changelog

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>

* Colorize warning with owo_colors

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>

* Break items in the changelog

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>

---------

Signed-off-by: Alessandro Passaro <alexpax@amazon.co.uk>
passaro authored May 1, 2024
1 parent 9d26b11 commit e32f890
Showing 9 changed files with 165 additions and 62 deletions.
12 changes: 11 additions & 1 deletion Cargo.lock

Some generated files are not rendered by default.

21 changes: 12 additions & 9 deletions doc/CONFIGURATION.md
@@ -261,19 +261,22 @@ WantedBy=remote-fs.target
Mountpoint can optionally cache object metadata and content to reduce cost and improve performance for repeated reads to the same file.
Mountpoint can serve all file system requests from the cache, excluding listing of directory contents.

To enable caching, use the `--cache <CACHE_DIR>` command-line flag, specifying the directory in which to store cached object content.
This flag will also enable caching of metadata using a default time-to-live (TTL) of 1 second,
which can be extended with the `--metadata-ttl <SECONDS>` command-line argument.
Mountpoint will create a new subdirectory within the path that you specify,
and will remove any existing files or directories within that subdirectory at mount time and at exit.
By default, Mountpoint will limit the maximum size of the cache such that the free space on the file system does not fall below 5%,
and will automatically evict the least recently used content from the cache when caching new content.
You can instead manually configure the maximum size of the cache with the `--max-cache-size <MiB>` command-line argument.
The main command-line flag to enable caching is `--cache <CACHE_DIR>`, which specifies the directory in which to store cached object content. Mountpoint will create a new subdirectory within the path that you specify, and will remove any existing files or directories within that subdirectory at mount time and at exit. This flag will also enable caching of metadata using a default time-to-live (TTL) of 1 minute (60 seconds), which can be configured with the `--metadata-ttl` argument.

### Metadata Cache

The command-line flag `--metadata-ttl <SECONDS|indefinite|minimal>` controls the time-to-live (TTL) for cached metadata entries. It can be set to a positive numerical value in seconds, or to one of the pre-configured values of `minimal` (default configuration when not using `--cache`) or `indefinite` (metadata entries never expire).
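The interaction between `--metadata-ttl` and `--cache` described above can be illustrated with a small Rust sketch. This is a hypothetical helper, not Mountpoint's actual code: when `--metadata-ttl` is not given, the effective TTL falls back to `minimal`, or to 60 seconds if `--cache` is enabled.

```rust
use std::time::Duration;

#[derive(Debug, Clone, Copy, PartialEq)]
enum TimeToLive {
    Minimal,
    Indefinite,
    Duration(Duration),
}

// Hypothetical helper mirroring the documented defaults: `minimal` on its
// own, a 60-second TTL when the data cache (`--cache`) is enabled, and any
// explicitly supplied value taking precedence.
fn effective_metadata_ttl(metadata_ttl: Option<TimeToLive>, cache_enabled: bool) -> TimeToLive {
    metadata_ttl.unwrap_or(if cache_enabled {
        TimeToLive::Duration(Duration::from_secs(60))
    } else {
        TimeToLive::Minimal
    })
}

fn main() {
    // No explicit TTL and no cache: minimal, consistency-preserving caching.
    assert_eq!(effective_metadata_ttl(None, false), TimeToLive::Minimal);
    // No explicit TTL but cache enabled: 60-second default.
    assert_eq!(
        effective_metadata_ttl(None, true),
        TimeToLive::Duration(Duration::from_secs(60))
    );
    // An explicit TTL always wins over the defaults.
    assert_eq!(
        effective_metadata_ttl(Some(TimeToLive::Indefinite), true),
        TimeToLive::Indefinite
    );
    println!("ok");
}
```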

> [!WARNING]
> Caching relaxes the strong read-after-write consistency offered by Amazon S3 and Mountpoint in its default configuration.
> Caching of metadata entries relaxes the strong read-after-write consistency offered by Amazon S3 and Mountpoint in its default configuration.
> See the [consistency and concurrency section of the semantics documentation](./SEMANTICS.md#consistency-and-concurrency) for more details.

When configured with metadata caching, on its own or in conjunction with `--cache`, Mountpoint will typically perform fewer requests to S3, but will not guarantee that the information it reports is up to date with the content of the bucket. You can use the `--metadata-ttl` flag to choose the appropriate trade-off between consistency (`--metadata-ttl minimal`) and performance/cost optimization (`--metadata-ttl indefinite`), depending on the requirements of your workload. In scenarios where the content of the S3 bucket is modified by another client and you require Mountpoint to always return up-to-date information, setting `--metadata-ttl minimal` is most appropriate. A setting of `--metadata-ttl 300` would instead allow Mountpoint to perform fewer requests to S3 by delaying updates for up to 5 minutes. If your workload does not require consistency, for example because the content of the S3 bucket does not change, we recommend using `--metadata-ttl indefinite`.

### Disk Cache Size

By default, Mountpoint will limit the maximum size of the cache such that the free space on the file system does not fall below 5%, and will automatically evict the least recently used content from the cache when caching new content. You can instead manually configure the maximum size of the cache with the `--max-cache-size <MiB>` command-line argument.
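The default sizing rule above can be sketched as a toy calculation. This is a hypothetical illustration of the 5%-free-space policy, not Mountpoint's actual implementation:

```rust
// Hypothetical illustration (not Mountpoint's actual code) of the default
// sizing rule: cap the cache so that free space on the file system does
// not fall below 5% of its total size.
fn default_max_cache_size(fs_total_bytes: u64, fs_free_bytes: u64) -> u64 {
    let reserved = fs_total_bytes / 20; // keep 5% of the file system free
    fs_free_bytes.saturating_sub(reserved)
}

fn main() {
    // A 100 GiB file system with 30 GiB free: the cache may use up to 25 GiB,
    // leaving the 5 GiB (5%) reserve untouched.
    const GIB: u64 = 1 << 30;
    assert_eq!(default_max_cache_size(100 * GIB, 30 * GIB), 25 * GIB);
    println!("ok");
}
```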

> [!WARNING]
> If you enable caching, Mountpoint will persist unencrypted object content from your S3 bucket at the location provided at mount.
> In order to protect your data, we recommend you restrict access to the data cache location.
2 changes: 1 addition & 1 deletion doc/SEMANTICS.md
@@ -81,7 +81,7 @@ Mountpoint also offers optional metadata and object content caching.
See the [caching section of the configuration documentation](./CONFIGURATION.md#caching) for more information.
When opting into caching, the strong read-after-write consistency model is relaxed,
and you may see stale metadata or object data for up to the cache's metadata time-to-live (TTL),
which defaults to 1 second but can be configured higher.
which defaults to 1 minute but can be configured using the `--metadata-ttl` flag.

For example, with caching enabled, you can successfully open and read a file that has been deleted from S3 if it is already cached.
Reads to that file will either return the cached data or an error for data that is not cached,
8 changes: 6 additions & 2 deletions mountpoint-s3/CHANGELOG.md
@@ -1,10 +1,14 @@
## Unreleased

### New features
* Metadata caching can now be configured independently of data caching. When passing the `--metadata-ttl <seconds>` argument without also specifying `--cache <directory>`, Mountpoint will cache file metadata in memory for up to the given TTL, but will not cache object data. The `--metadata-ttl` argument also accepts two special values: `minimal` to enable only the minimal necessary caching, and `indefinite` to cache indefinitely. These modes can help accelerate workloads that touch many files but do not need to cache object data for re-use (for example, listing a directory and then reading each file within it once). ([#855](https://github.com/awslabs/mountpoint-s3/pull/855))

### Breaking changes
* No breaking changes.
* The `--metadata-ttl 0` setting is no longer supported and will be removed in a future release. The new `--metadata-ttl minimal` has a similar effect, but behaves better when latency for S3 requests is high. ([#855](https://github.com/awslabs/mountpoint-s3/pull/855))
* When using the `--cache` flag, the default metadata TTL is now set to 60 seconds (`--metadata-ttl 60`) instead of 1 second. ([#855](https://github.com/awslabs/mountpoint-s3/pull/855))

### Other changes
* The checksum algorithm to use for uploads to S3 can now be chosen with the `--upload-checksums <ALGORITHM>` command-line argument. The only supported values in this release are `crc32c` (the default, and the existing behavior) and `off`, which disables including checksums in uploads. The `off` value allows uploads to S3 implementations that do not support [additional checksums](https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/). This option defaults to `off` when the bucket name is an S3 on Outposts bucket access point (either an ARN or a bucket alias). ([#849](https://github.com/awslabs/mountpoint-s3/pull/849)).
* The checksum algorithm to use for uploads to S3 can now be chosen with the `--upload-checksums <ALGORITHM>` command-line argument. The only supported values in this release are `crc32c` (the default, and the existing behavior) and `off`, which disables including checksums in uploads. The `off` value allows uploads to S3 implementations that do not support [additional checksums](https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/). This option defaults to `off` when the bucket name is an S3 on Outposts bucket access point (either an ARN or a bucket alias). ([#849](https://github.com/awslabs/mountpoint-s3/pull/849))

## v1.6.0 (April 11, 2024)

3 changes: 2 additions & 1 deletion mountpoint-s3/Cargo.toml
@@ -30,18 +30,19 @@ libc = "0.2.126"
linked-hash-map = "0.5.6"
metrics = "0.22.1"
nix = { version = "0.27.1", features = ["user"] }
owo-colors = { version = "4.0.0", features = ["supports-colors"] }
regex = "1.7.1"
serde = { version = "1.0.190", features = ["derive"] }
serde_json = "1.0.95"
sha2 = "0.10.6"
supports-color = "2.0.0"
sysinfo = "0.30.7"
syslog = "6.1.0"
thiserror = "1.0.34"
time = { version = "0.3.17", features = ["macros", "formatting"] }
tracing = { version = "0.1.35", features = ["log"] }
tracing-log = "0.2.0"
tracing-subscriber = { version = "0.3.14", features = ["env-filter"] }
sysinfo = "0.30.7"

[target.'cfg(target_os = "linux")'.dependencies]
procfs = { version = "0.16.0", default-features = false }
68 changes: 32 additions & 36 deletions mountpoint-s3/src/cli.rs
@@ -27,8 +27,7 @@ use regex::Regex;

use crate::build_info;
use crate::data_cache::{CacheLimit, DiskDataCache, DiskDataCacheConfig, ManagedCacheDir};
use crate::fs::ServerSideEncryption;
use crate::fs::{CacheConfig, S3FilesystemConfig};
use crate::fs::{CacheConfig, S3FilesystemConfig, ServerSideEncryption, TimeToLive};
use crate::fuse::session::FuseSession;
use crate::fuse::S3FuseFilesystem;
use crate::logging::{init_logging, LoggingConfig};
@@ -238,21 +237,19 @@ pub struct CliArgs {

#[clap(
long,
help = "Enable caching of object metadata and content to the given directory",
help = "Enable caching of object content to the given directory and set metadata TTL to 60 seconds",
help_heading = CACHING_OPTIONS_HEADER,
value_name = "DIRECTORY",
)]
pub cache: Option<PathBuf>,

#[clap(
long,
help = "Time-to-live (TTL) for cached metadata in seconds [default: 1s]",
value_name = "SECONDS",
value_parser = parse_ttl_seconds,
help = "Time-to-live (TTL) for cached metadata in seconds [default: minimal, or 60 seconds if --cache is set]",
value_name = "SECONDS|indefinite|minimal",
help_heading = CACHING_OPTIONS_HEADER,
requires = "cache",
)]
pub metadata_ttl: Option<Duration>,
pub metadata_ttl: Option<TimeToLive>,

#[clap(
long,
@@ -600,9 +597,9 @@ pub fn create_s3_client(args: &CliArgs) -> anyhow::Result<(S3CrtClient, EventLoo

if args.cache.is_some() {
user_agent.value("mp-cache");
if let Some(ttl) = args.metadata_ttl {
user_agent.key_value("mp-cache-ttl", &ttl.as_secs().to_string());
}
}
if let Some(ttl) = args.metadata_ttl {
user_agent.key_value("mp-cache-ttl", &ttl.to_string());
}

let mut client_config = S3ClientConfig::new()
@@ -689,15 +686,31 @@ where

let prefetcher_config = Default::default();

if let Some(path) = args.cache {
let metadata_cache_ttl = args.metadata_ttl.unwrap_or(Duration::from_secs(1));
filesystem_config.cache_config = CacheConfig {
serve_lookup_from_cache: true,
dir_ttl: metadata_cache_ttl,
file_ttl: metadata_cache_ttl,
..Default::default()
};
let mut metadata_cache_ttl = args.metadata_ttl.unwrap_or_else(|| {
if args.cache.is_some() {
// When the data cache is enabled, use 1min as metadata-ttl.
TimeToLive::Duration(Duration::from_secs(60))
} else {
TimeToLive::Minimal
}
});
if matches!(metadata_cache_ttl, TimeToLive::Duration(Duration::ZERO)) {
const ZERO_TTL_WARNING: &str = "The '--metadata-ttl 0' setting is no longer supported, is now interpreted as 'minimal', and will be removed in a future release. Use '--metadata-ttl minimal' instead";
tracing::warn!("{}", ZERO_TTL_WARNING);
if !args.foreground {
// Ensure warning is visible even when not redirecting logs to stdout.
use owo_colors::{OwoColorize, Stream::Stderr, Style};
eprintln!(
"{}: {}",
"warning".if_supports_color(Stderr, |text| text.style(Style::new().yellow().bold())),
ZERO_TTL_WARNING
);
}
metadata_cache_ttl = TimeToLive::Minimal;
}
filesystem_config.cache_config = CacheConfig::new(metadata_cache_ttl);

if let Some(path) = args.cache {
let cache_config = match args.max_cache_size {
// Fallback to no data cache.
Some(0) => None,
@@ -862,23 +875,6 @@ fn parse_bucket_name(bucket_name: &str) -> anyhow::Result<String> {
Ok(bucket_name.to_owned())
}

fn parse_ttl_seconds(seconds_str: &str) -> anyhow::Result<Duration> {
const MAXIMUM_TTL_YEARS: u64 = 100;
const MAXIMUM_TTL_SECONDS: u64 = MAXIMUM_TTL_YEARS * 365 * 24 * 60 * 60;

let seconds = seconds_str.parse()?;
if seconds > MAXIMUM_TTL_SECONDS {
return Err(anyhow!(
"TTL must not be greater than {}s (~{} years)",
MAXIMUM_TTL_SECONDS,
MAXIMUM_TTL_YEARS
));
}

let duration = Duration::from_secs(seconds);
Ok(duration)
}

fn env_region() -> Option<String> {
env::var_os("AWS_REGION").map(|val| val.to_string_lossy().into())
}
24 changes: 24 additions & 0 deletions mountpoint-s3/src/fs.rs
@@ -32,6 +32,9 @@ pub use crate::inode::InodeNo;
mod error;
pub use error::{Error, ToErrno};

mod time_to_live;
pub use time_to_live::TimeToLive;

pub const FUSE_ROOT_INODE: InodeNo = 1u64;

#[derive(Debug)]
@@ -335,6 +338,27 @@ impl Default for CacheConfig {
}
}

impl CacheConfig {
/// Construct cache configuration settings from metadata TTL.
pub fn new(metadata_ttl: TimeToLive) -> Self {
match metadata_ttl {
TimeToLive::Minimal => Default::default(),
TimeToLive::Indefinite => Self {
serve_lookup_from_cache: true,
file_ttl: TimeToLive::INDEFINITE_DURATION,
dir_ttl: TimeToLive::INDEFINITE_DURATION,
..Default::default()
},
TimeToLive::Duration(ttl) => Self {
serve_lookup_from_cache: true,
file_ttl: ttl,
dir_ttl: ttl,
..Default::default()
},
}
}
}

#[derive(Debug)]
pub struct S3FilesystemConfig {
/// Kernel cache config
69 changes: 69 additions & 0 deletions mountpoint-s3/src/fs/time_to_live.rs
@@ -0,0 +1,69 @@
use std::{fmt::Display, num::ParseIntError, str::FromStr, time::Duration};

use thiserror::Error;

/// User-configurable time-to-live (TTL) for metadata caching.
#[derive(Debug, Clone, Copy)]
pub enum TimeToLive {
Minimal,
Indefinite,
Duration(Duration),
}

#[derive(Error, Debug)]
pub enum TimeToLiveError {
#[error("TTL must be a valid number of seconds, or 'indefinite', or 'minimal'")]
InvalidInt(#[from] ParseIntError),
#[error(
"TTL must not be greater than {}s (~{} years), or be 'indefinite', or 'minimal'",
TimeToLive::MAXIMUM_TTL_SECONDS,
TimeToLive::MAXIMUM_TTL_YEARS
)]
TooLarge,
}

impl TimeToLive {
const MINIMAL: &'static str = "minimal";
const INDEFINITE: &'static str = "indefinite";

// Set an upper bound that is practically "forever", but does not cause overflow
// when added to `Instant::now()` (as `u64::MAX` would).
const MAXIMUM_TTL_YEARS: u64 = 100;
const MAXIMUM_TTL_SECONDS: u64 = Self::MAXIMUM_TTL_YEARS * 365 * 24 * 60 * 60;

pub const INDEFINITE_DURATION: Duration = Duration::from_secs(Self::MAXIMUM_TTL_SECONDS);

pub fn new_from_str(s: &str) -> Result<Self, TimeToLiveError> {
match s {
Self::MINIMAL => Ok(Self::Minimal),
Self::INDEFINITE => Ok(Self::Indefinite),
_ => {
let seconds = s.parse()?;
if seconds > Self::MAXIMUM_TTL_SECONDS {
return Err(TimeToLiveError::TooLarge);
}

let duration = Duration::from_secs(seconds);
Ok(Self::Duration(duration))
}
}
}
}

impl Display for TimeToLive {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Minimal => f.write_str(Self::MINIMAL),
Self::Indefinite => f.write_str(Self::INDEFINITE),
Self::Duration(duration) => write!(f, "{}s", duration.as_secs()),
}
}
}

impl FromStr for TimeToLive {
type Err = TimeToLiveError;

fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::new_from_str(s)
}
}
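The parsing behavior introduced by this new file can be exercised with a self-contained sketch. This is a simplified stand-in (a `String` error instead of the crate's `thiserror`-derived `TimeToLiveError`, and a renamed `Ttl` type) that mirrors the accepted inputs: `minimal`, `indefinite`, or a number of seconds capped at ~100 years.

```rust
use std::{str::FromStr, time::Duration};

#[derive(Debug, Clone, Copy, PartialEq)]
enum Ttl {
    Minimal,
    Indefinite,
    Duration(Duration),
}

// Upper bound mirrors the real implementation: ~100 years, large enough to
// be "forever" without overflowing when added to `Instant::now()`.
const MAX_TTL_SECONDS: u64 = 100 * 365 * 24 * 60 * 60;

impl FromStr for Ttl {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "minimal" => Ok(Self::Minimal),
            "indefinite" => Ok(Self::Indefinite),
            _ => {
                let seconds: u64 = s.parse().map_err(|_| {
                    "TTL must be a valid number of seconds, or 'indefinite', or 'minimal'".to_string()
                })?;
                if seconds > MAX_TTL_SECONDS {
                    return Err(format!("TTL must not be greater than {MAX_TTL_SECONDS}s"));
                }
                Ok(Self::Duration(Duration::from_secs(seconds)))
            }
        }
    }
}

fn main() {
    assert_eq!("minimal".parse::<Ttl>(), Ok(Ttl::Minimal));
    assert_eq!("indefinite".parse::<Ttl>(), Ok(Ttl::Indefinite));
    assert_eq!("300".parse::<Ttl>(), Ok(Ttl::Duration(Duration::from_secs(300))));
    // Only the exact keyword 'indefinite' is accepted.
    assert!("infinite".parse::<Ttl>().is_err());
    // Too large to fit in u64, so parsing fails cleanly.
    assert!("20000000000000000000".parse::<Ttl>().is_err());
    println!("ok");
}
```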
20 changes: 8 additions & 12 deletions mountpoint-s3/tests/cli.rs
@@ -1,6 +1,7 @@
use assert_cmd::prelude::*; // Add methods on commands
use predicates::prelude::*; // Used for writing assertions
use std::{fs, os::unix::prelude::PermissionsExt, process::Command}; // Run programs
use test_case::test_case;

/// Regular expression for something that looks mostly like a SemVer version.
/// See https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string.
@@ -214,36 +215,31 @@ fn allow_other_conflict() -> Result<(), Box<dyn std::error::Error>> {
#[test]
fn max_ttl_exceeded() -> Result<(), Box<dyn std::error::Error>> {
let dir = assert_fs::TempDir::new()?;
let cache_dir = assert_fs::TempDir::new()?;
let mut cmd = Command::cargo_bin("mount-s3")?;

const INVALID_TTL: u64 = 150 * 365 * 24 * 60 * 60;
cmd.arg("test-bucket")
.arg(dir.path())
.arg("--cache")
.arg(cache_dir.path())
.arg("--metadata-ttl")
.arg(format!("{}", INVALID_TTL));
let error_message = "'--metadata-ttl <SECONDS>': TTL must not be greater than 3153600000s (~100 years)";
let error_message =
"'--metadata-ttl <SECONDS|indefinite|minimal>': TTL must not be greater than 3153600000s (~100 years)";
cmd.assert().failure().stderr(predicate::str::contains(error_message));

Ok(())
}

#[test]
fn invalid_ttl() -> Result<(), Box<dyn std::error::Error>> {
#[test_case("20000000000000000000")]
#[test_case("infinite")]
fn invalid_ttl(invalid_ttl: &str) -> Result<(), Box<dyn std::error::Error>> {
let dir = assert_fs::TempDir::new()?;
let cache_dir = assert_fs::TempDir::new()?;
let mut cmd = Command::cargo_bin("mount-s3")?;

const INVALID_TTL_STRING: &str = "20000000000000000000";
cmd.arg("test-bucket")
.arg(dir.path())
.arg("--cache")
.arg(cache_dir.path())
.arg("--metadata-ttl")
.arg(INVALID_TTL_STRING);
let error_message = "'--metadata-ttl <SECONDS>': number too large to fit in target type";
.arg(invalid_ttl);
let error_message = "'--metadata-ttl <SECONDS|indefinite|minimal>': TTL must be a valid number of seconds, or";
cmd.assert().failure().stderr(predicate::str::contains(error_message));

Ok(())
