
v2.6.0

Released by github-actions on 20 Sep 2022 (commit 821a08a)

Caddy 2.6

This is our biggest release since Caddy 2.

Caddy 2 changed the way the world serves the Web. By providing an online config API, automatic HTTPS, unlimited extensibility, certificate automation at scale, modern protocols, sane defaults, and an unrivaled developer experience, we boldly raised the bar for web servers.

Now with Caddy 2.6, we're doing it again. Caddy 2.6 is the first general-purpose web server to seamlessly enable the newly-standardized HTTP/3 protocol for all configurations by default. We've virtualized the file system so you can serve content from anywhere or anything. New event features let you observe and control Caddy's internals with custom actions. Caddy is more useful than ever for developers with its enhanced CLI tooling and features. And it's faster than ever with non-trivial performance improvements. We think you will love this release.

UPDATE: Please use v2.6.1 for hotfixes related to unix sockets, encode, and caddy file-server.

Watch the livestream

Special dedication

This release is dedicated to the late Peter Eckersley, who passed away September 2, 2022. Peter is one of the brilliant minds behind Let's Encrypt; his work has benefited billions of people. I met Peter at the Let's Encrypt launch party in a little bar in San Francisco in 2015 and have never forgotten that occasion. He later co-authored a published research paper called Let’s Encrypt: An Automated Certificate Authority to Encrypt the Entire Web, which praised Caddy's ACME integration: "We hope to see other popular server software follow Caddy’s lead."

We look forward to when other servers do that, and we hope to honor Peter's work and influence which will live on through his memory and the encrypted Web he made possible.


Sponsors

ZeroSSL remains Caddy's executive sponsor.

We were thrilled to welcome Stripe recently as an enterprise sponsor!

Other notable sponsors include AppCove, Dukaan, Suborbital, Tailscale, plus Bubble and GitHub which both made generous one-time donations.

We have many other vital sponsors and donors on whom we also rely. Our sponsors come from all over the world and include independent professionals, startups, and small companies -- and they are the absolute best. Thank you for making a more secure Web possible!

Personal note from Matt: Recent life upgrades mean that your sponsorships now sustain a family of 5 so that I can continue to maintain Caddy. Two years ago, I don't think I would have taken this risk because I'd need to find other work to provide for a family. Thank you for coming together as a professional community to make the Caddy project possible!

We strongly recommend that companies that use or benefit from Caddy -- or whose customers do -- become sponsors to ensure ongoing maintenance, priority development, private support, and more. Sponsorship tiers can be tailored to your requirements!

Highlights

⚠️ Don't miss deprecations / breaking changes at the bottom. Notably, if you use metrics, you will now need to turn them on.

HTTP/3 is here (#4707)

Caddy now enables RFC 9114-compliant HTTP/3 by default. The experimental_http3 option has graduated and been removed. We've removed another experimental option, allow_h2c, and individual HTTP versions (h1 h2 h2c h3) can now be toggled with the new protocols setting.
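
For example, here's a rough Caddyfile sketch of that setting (assuming you want to serve only HTTP/1.1 and HTTP/2 for a moment; omit the option entirely to keep the new defaults, which include HTTP/3):

{
	servers {
		protocols h1 h2
	}
}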

Note that HTTP/3 utilizes the QUIC transport, which requires UDP. If your network or firewall configuration only allows TCP, HTTP/3 connections will fail and clients (should) fall back to HTTP/2. For servers with properly-configured UDP networks, HTTP/3 should "just work" for enabled clients.

HTTP/3 clients discover how to connect to Caddy over UDP by reading its Alt-Svc header, which is now emitted automatically by default. Other than that, no changes to existing servers are needed, as Caddy opens a separate UDP socket for HTTP/3.

Our HTTP/3 server attempts to mitigate amplification and reflection attacks by requiring address validation when the server is under load. This adds one round-trip for clients, but is only done as a defensive measure when necessary.

Serious thanks to @marten-seemann who builds and maintains the quic-go library we depend on for this. (Go has not announced any plans to officially support or implement HTTP/3.) We expect numerous QUIC and HTTP/3 improvements to come as implementations and best practices mature with more production experience.

Virtual file systems (#4909)

Caddy's file_server module now supports virtual file systems. We've replaced all hard-coded os.Open(), os.Stat(), etc. calls with Go's relatively new io/fs package, and introduced a new Caddy module namespace caddy.fs for implementations of such file systems.

Some examples of what is possible:

  • Serve content from S3 or other blob/cloud storage services
  • Serve dynamically-generated content that "feels" static
  • Embed your site directly into your caddy binary and serve it from memory
  • Serve content directly from an archive file (e.g. .zip or .tar.gz)
  • Load files from a database instead of disk

Basically, instead of serving files from the local disk, you can have Caddy serve the "files" from somewhere or something else. The default is still the local file system.

Note that this feature isn't limited to just Caddy's file_server module. Potentially any module that reads the local disk may benefit from using caddy.fs modules instead.

I wrote a module that lets you embed your site within your caddy binary -- wherever your server goes, your site goes!

We encourage the community to implement and publish new file system modules for Caddy. (From an early tweet there seems to be quite high demand.)

Events (#4912 and #4984)

Not surprisingly, many people prefer Caddy to automate certificates used with other software/services. Until now, there hasn't been a great way to know when Caddy has obtained or renewed a certificate (a feature we deferred in part because of our opinion that certificate management should be baked into the software using the certificate in the first place). Cron jobs generally work for reloading new certificates into services because certificate expiry is mostly predictable, but now there is a better way with one of our most requested features: events!

We thought about events in general for a long time and discussed questions like, "What makes an event different from a log?" "Are events synchronous?" "Do self-initiated events get emitted before or after their code (are they past-tense or future-tense) -- or both? or neither (asynchronous)?" "What do we like from existing event systems?" "What do we wish event systems did differently?"

While we think we have pretty good answers to these questions now, we won't be sure until we gather more production experience. For this reason, events are implemented as an experimental app module -- not as part of the core. (Remember, Caddy's core currently only loads config and sets up logging/storage.) This means that Caddy's core cannot emit events.[1] So even though our event implementation may change, any changes are likely to be slight and gradual; and we encourage anyone and everyone to start using events as soon as possible and to give us your feedback. We think we have the start of a great event system, but we need you to prove it!

Caddy modules can emit events when interesting things happen. For example, the reverse proxy emits healthy and unhealthy events when backends go up and down. The TLS app emits cert_obtaining, cert_obtained, and cert_failed before and after obtaining a certificate or after the operation failed, respectively; and cert_ocsp_revoked after a certificate is discovered to be revoked by OCSP. There are several more events already, with even more to be added later.

Events can have data associated with them. For example, healthy/unhealthy come with the address of the host; cert_obtained has the domain name, issuer, and storage path. You can access this data in your config with placeholders, e.g. {event.data.identifier}.

Caddy modules can subscribe to events by specifying the name(s) of events to bind to, and the Caddy module ID(s) or namespace(s) to watch. When an event is emitted, it propagates from the module that emitted it up the provisioning hierarchy. This means that an event emitted by http.handlers.reverse_proxy will fire for http.handlers and http as well, similar to the DOM in HTML/JavaScript.

Event handlers are invoked synchronously. We chose this for several reasons. First, despite how easy Go makes concurrency, there are many subtleties to concurrency in a server. Goroutines may be lightweight, but their operations might not be; and if event goroutines are starting more quickly than they are stopping, we either drop events arbitrarily or run out of memory/CPU. Also, we think one of the qualities that differentiates events from logs is the ability for an event to influence the emitting code's flow: a true "hook" in that sense. Instead of simply observing that something is happening (which is what a log tells you), you can influence its behavior. Maybe you want to run a command before a certificate is obtained to see if it should be obtained. Or maybe you want to change how a TLS handshake is completed on-the-fly. Asynchronous event handlers cannot do this. For simple behavioral changes, synchronous events can be a powerful and useful tool for customizing your server.

The new event app lets you easily configure subscriptions and event handlers. Event handling is modular, so you will need to plug in a module that does what you want: run a command, reload a service, make an HTTP request, or anything else!

Because this feature is experimental and new, we don't yet know how people will be using it, so currently, Caddy does not ship with any event handler plugins. However, we're pretty sure based on feedback over the years that many of you would like to run commands on certain events (one of our top feature requests is to trigger a daemon reload after certificate renewals). So I went ahead and implemented an exec event handler plugin that can run commands. We almost included it in Caddy's standard distribution, but out of an abundance of caution we decided to keep it a separate plugin for now until we learn more about real production use cases from experience.

Here's an example of handling events. In JSON, you configure the events app:

{
	"apps": {
		"events": {
			"subscriptions": [
				{
					"events": ["cert_obtained"],
					"handlers": [
						{
							"handler": "exec",
							"command": "systemctl",
							"args": ["reload", "mydaemon"]
						}
					]
				}
			]
		}
	}
}

or the equivalent Caddyfile global option:

{
	events {
		on cert_obtained exec systemctl reload mydaemon
	}
}

It's that simple! Just make sure you have your event handler modules plugged in.
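
Event data works in these handlers too. As a sketch reusing the exec handler from above, you could pass the affected domain to a command via a placeholder (the echo command here is just a stand-in, and the exec plugin's exact argument handling may differ):

{
	events {
		on cert_obtained exec echo "obtained certificate for {event.data.identifier}"
	}
}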

We hope you will provide feedback, report bugs, and request features related to events.

Smarter path matching and rewriting (#4948)

Is the URI path /a/b/c the same as /a/b%2Fc? What about /a/b//c? Turns out, it depends. These questions illustrate a famously frustrating problem, one that has largely gone unsolved until now. All existing solutions I investigated in other products were unsatisfactory:

  • Nginx (and Caddy until now) always does path comparisons in unescaped/normalized space. This makes it impossible to route on literal escape sequences unless you double-encode your pattern, which violates the specification.
  • Apache outright rejects valid[2] HTTP requests containing encoded slashes. This behavior can either be disabled completely (creating a security problem known as unsafe paths) or tweaked to never decode encoded slashes (creating ambiguities when comparing against route patterns).
  • Laravel, like nginx, always decodes slashes, but routing such requests mangles application data that contains slashes.

The process of decoding a URI and collapsing slashes in the path is called normalization. Normalization has to occur for safe, reliable routing (imagine //secret bypassing auth checks configured for /secret), but at the same time, raw paths are sometimes needed to preserve application data (imagine a route /bands/:name which succeeds for /bands/AC%2fDC but fails for the normalized /bands/AC/DC). And it's not just routing; servers like Caddy often rewrite/manipulate paths. Because normalization is a many-to-one mapping (there are multiple encoded forms of a single URI), it is inherently lossy: the original or intended URI cannot be reconstructed with certainty.

Other solutions with coarse on-off knobs can't balance both security and application correctness: it seems you have to trade one for the other. The crux of the problem seems to be that the server/framework/router doesn't know which parts of the path are application data and which parts are path components, so it just "plays it safe" and decodes the whole thing.

I think Caddy's solution to this is quite novel. Our solution is to interpret encoded characters and multiple slashes in a path pattern literally as a hint of the developer's intent.

For example, if you write a path matcher /a/b/c, it will still match /a/b/c and /a/b%2Fc. However, if your path matcher is /a/b%2Fc, Caddy will only match /a/b%2Fc. This extends to wildcards with our new "escape-wildcard" feature: /bands/%*/ will match /bands/AC%2fDC but /bands/*/ won't. This works for multiple slashes too. If your path matcher uses //, Caddy will require the request path to contain those slashes literally at that position.
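
In Caddyfile terms, the difference might look like this (a sketch with named path matchers; attach them to whatever directives you need):

# matches /a/b/c and also /a/b%2Fc (compared after normalization, as before)
@normalized path /a/b/c

# matches only /a/b%2Fc -- the escape sequence is taken literally
@literal path /a/b%2Fc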

We've also implemented this for prefix and suffix manipulations. For example, stripping a prefix of //prefix from //prefix/foo now works, whereas before it wouldn't because the comparison used the fully-normalized URI.
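
As a minimal sketch with the uri directive:

# keeps the double slash literal when matching the prefix; previously the
# comparison only saw the normalized path, so nothing was stripped
uri strip_prefix //prefix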

Essentially, we use the configured path pattern as a cue for whether to decode/merge a character or leave it raw when normalizing.

This is a complex and subtle change, so please be sure to read the full PR in #4948 and the linked Laravel issue. It's very informative!

HTTP 103 Early Hints (#4882 and #5006)

HTTP Early Hints (RFC 8297) is the effective successor to HTTP/2 Server Push. When a 103 response is emitted with relevant Link headers, web pages can load faster than normal. 1xx responses are precursors to the final response; clients must support receiving multiple responses to a single request (nearly all modern clients do, and it almost certainly shouldn't break any HTTP/2 clients). Early hints are a great way to speed up page loads when the main content may take a while to generate (a slow DB query, for example) but the subresources can start being loaded right away.

Caddy can both originate and proxy 103 responses.

To send early hints from Caddy, simply set the Link headers as the hints, then write the response with a 103 status code:

route /slow-pages/* {
	header  +Link "</style.css>; rel=preload; as=style"
	header  +Link "</script.js>; rel=preload; as=script"
	respond 103
}

Unlike normal responses, after writing HTTP 103, Caddy's middleware chain will continue to execute and invoke the next handlers (for example, reverse_proxy) since 103 is not the final response. Multiple 103s can be sent.
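
For example, here's a sketch that sends a hint and then proxies the final response from a backend (localhost:9000 is a hypothetical upstream):

route /app/* {
	header  +Link "</app.css>; rel=preload; as=style"
	respond 103
	reverse_proxy localhost:9000
}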

Caddy's reverse proxy also supports HTTP 103 responses, meaning that backends can send early hints and Caddy will proxy them to the client immediately, as you'd expect.

Note that browser support is still limited (only Chrome implements it at this time) and Caddy must be built with Go 1.19 (our builds use the latest Go version; but we still support Go 1.18 for now).

Thank you to @dunglas with API Platform for contributing this feature to both Go and Caddy!

Improved command line interface (#4565 and #4994)

Caddy has always used Go's standard flag package for its CLI, which has served us quite well. However, recent improvements in the Cobra library make it possible for our CLI to gain worthwhile features without incurring a heavy dependency.

The new caddy manpage command generates man pages, and the caddy completion command generates shell completions. Both are installed automatically as part of our official Linux packages, so your next apt upgrade (etc.) should take care of that. Additionally, short options (e.g. -c) are now supported. And if you typo a command, Caddy will helpfully suggest a correction (e.g. caddy adpt will suggest caddy adapt).

Note that long-form flags must now use double-hyphen syntax (e.g. --config) even though the single-hyphen syntax (-config) was previously accepted. The standard library's flag parser treats - and -- the same, but Cobra's does not. Our online documentation has always used -- for flags, so we do not consider this a breaking change, but it's good to be aware of this change if you're used to how Go's parser works.

Very many thanks to @mohammed90 for contributing these features!

New caddy respond command (#4870)

For rapid development when you need a local HTTP server, the caddy respond command might be just what you need: it serves hard-coded HTTP responses from one or more servers so that you can effortlessly stand up a custom HTTP endpoint to test with.

A plain caddy respond command will listen on a random port and reply with HTTP 200. (The port or address is printed to the terminal for you.)

You can set a custom status code like caddy respond 401 or a custom body like caddy respond "Hello world!" -- or both: caddy respond --status 401 "Hello world!"

Or you can pipe in a response body, for example serving a maintenance page:

$ cat maintenance.html | caddy respond --status 503 --header "Content-Type: text/html; charset=utf-8"

You can even spin up multiple servers at once and use basic template features to configure each server with a different response:

$ echo "I'm server {{.N}} on port {{.Port}}" | caddy respond --listen :2000-2004

Server address: [::]:2000
Server address: [::]:2001
Server address: [::]:2002
Server address: [::]:2003
Server address: [::]:2004

$ curl 127.0.0.1:2002
I'm server 2 on port 2002

You can debug HTTP clients more easily by enabling access logging with the --access-log flag. The --header flag can be used multiple times to set custom HTTP headers, and --debug enables debug mode for more verbose logging. We hope you find this feature useful!

Multiple dynamic upstream sources (5fb5b81)

In Caddy 2.5(.1) we introduced dynamic upstreams, which allow you to configure the reverse_proxy to get the list of backends on-the-fly during requests. This very popular feature's development was sponsored by Stripe, whom we are thrilled to welcome as an enterprise sponsor. Stripe uses Caddy heavily for their internal systems, and for greater redundancy they need to be able to fail over to secondary upstreams if a primary cluster is down.

This is where the new multi dynamic upstreams module comes in. Now you can configure, for example, two SRV lookups for aggregated results:

{
	"handler": "reverse_proxy",
	"dynamic_upstreams": {
		"source": "multi",
		"sources": [
			{
				"source": "srv",
				"name": "primary"
			},
			{
				"source": "srv",
				"name": "secondary"
			}
		]
	}
}

This appends the backends returned from the secondary SRV lookup to the results of the primary SRV lookup (order preserved). To implement failover, simply use the first load balancing policy, which chooses the first available upstream.
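
The first policy is not specific to dynamic upstreams, by the way. Here's a minimal Caddyfile sketch of the same failover idea with static upstreams (the addresses are hypothetical):

reverse_proxy 10.0.1.5:8080 10.0.2.5:8080 {
	# the second upstream is only used when the first is unavailable
	lb_policy first
}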

Configurable shutdown delay (#4906)

A shutdown can now be scheduled for a later time using the shutdown_delay option. This is useful for giving advance notice to health checkers that this server will be closing soon. The shutdown delay happens before the grace period, during which new connections are no longer accepted and existing ones are gracefully closed. During the delay, the server operates normally except for the value of two placeholders:

  • The {http.shutting_down} placeholder equals true.
  • The {http.time_until_shutdown} placeholder holds the duration remaining until the server closes.

This allows health check endpoints to announce that they will soon be going down so that this instance can be moved out of the rotation or a replacement instance can be spun up in the meantime. For example:

{
	shutdown_delay 10s
}

example.com {
	handle /health-check {
		@goingDown `{http.shutting_down}`
		respond @goingDown "Bye-bye in {http.time_until_shutdown}" 503
		respond 200
	}
}

By the way, the syntax of that @goingDown named matcher is new in 2.6: if a named matcher consists only of a CEL expression string, the type of matcher can be omitted; i.e. what you see above is equivalent to @goingDown expression "{http.shutting_down} == true".

(A shutdown is defined as a config unload where there is no new config to load, or the new config does not have a server configured at the same address as the current server. In other words, a shutdown of a server means a particular HTTP socket will be closed.)

Speaking of grace periods, config changes no longer block while waiting on servers' grace periods. This means faster, more responsive config reloads; just beware that, depending on the length of your grace period, your reload command or config API request may return before the old servers have completely finished shutting down.

Faster FastCGI transport (#4978)

PHP apps, rejoice! The round-trip between Caddy and php-fpm just got a lot faster. Thanks to contributions by @WeidiDeng, the FastCGI transport has been rewritten to be more efficient.

This is some of the oldest and most unique code in Caddy's code base. When Caddy was rewritten for v2 in 2019, everything was rewritten or refactored... except this, the FastCGI transport. This is the first time this part of the code has been improved since it was first implemented[3] in 2014!

During tests, profiling showed the new code spends 86% less CPU time in GC (gcDrain) thanks to significantly fewer allocations. This is due in large part to pooling buffers, which required a non-trivial refactoring to implement.

[Image: CPU profile]

A very rough benchmark using php_info() yielded a 25% increase in requests per second. Before the rewrite, Caddy almost always performed worse than nginx even with fastcgi_keep_conn off. Our new code performs competitively with nginx, and in some tests Caddy even outperformed nginx with fastcgi_keep_conn on -- and we have not implemented connection pooling/reuse into the new transport yet.

Because every setup is different, your actual results will vary. In general though, you can expect busy servers to handle PHP faster.

Faster file server (#5022)

In a patch contributed by @flga, we've reduced copying between buffers and even eliminated it altogether in some cases using sendfile(2). This has been shown to give a 25-50% performance boost. It's automatic, and no configuration is required to benefit. In some tests, Caddy's new defaults are even faster than optimized nginx.

Static files over 512 bytes being served over plaintext HTTP sockets may now be served directly by the Linux kernel, which is much faster than copying the file to user-space.

Static files are faster over HTTPS, too. In addition to sendfile (which we can't[4] use over TLS), we now utilize the io.ReaderFrom interface to reuse existing buffers and further reduce copying within user space. Our tests show that this significantly enhances performance even over TLS.

Signed release assets

Thanks to heroic efforts by @mohammed90, our GitHub release assets are now signed and certified. Mohammed wrote an excellent Twitter thread explaining the whole thing better than I can here!

So if you're wondering why the number of assets shot from 28 to 134... that's why.

Other notable enhancements

  • More efficient query matcher. (04a14ee)
  • A new Caddyfile placeholder {cookie.*} grants easy access to cookie values; see the sketch after this list. (#5001)
  • Windows service integration: Caddy can now be controlled with sc.exe. (#4790)
  • Replace net.IP type with leaner netip.Addr type. (#4966)
  • Caddyfile-configurable OCSP check interval with ocsp_interval global option. (#4980)
  • The reverse proxy now supports retry_count as an alternative to try_duration; i.e. try backends up to a fixed number of times, rather than up to a time limit. (#4756)
  • The reverse proxy gracefully closes both ends of "hijacked" connections (such as WebSockets) when shutting down or reloading. (#4895)
  • The reverse proxy emits metrics regarding the health of upstreams. (#4935)
  • The reverse-proxy command can accept repeated --to flags and load balance. (#4693)
  • The reverse proxy's HTTP transport now supports distinct read and write timeouts. (#4905)
  • Simpler and more reliable config reloads on Linux with SO_REUSEPORT. (#4705)
  • Templates can access reverse proxy responses if used within handle_response. (#4871)
  • Builds now include git revision information when using go build. (#4931)
  • The file matcher (and try_files) now supports glob patterns. (#4993)
  • Named matchers in the Caddyfile can use CEL expressions without specifying expression first. (#4976)
  • The FastCGI transport can now capture and print stderr output. (#5004)
  • Listeners can be provided by plugins, enabling new network types. (#5002)
  • Caddy can write TLS secrets to a file for debugging purposes. (#4808)
  • Sites declared as http:// in the Caddyfile will no longer be overridden by auto-HTTPS redirects. (#5051)
  • Config reloads no longer block while the prior servers are shutting down. (#5043)
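
As a quick sketch of the new {cookie.*} placeholder mentioned in the list above (session_id is a hypothetical cookie name):

example.com {
	respond "Your session cookie is {cookie.session_id}"
}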

⚠️ Deprecations/breaks

  • Metrics are now opt-in. Due to multiple confirmed reports of non-trivial performance regressions with metrics, we are making them opt-in. (Technically, this is not a breaking change, as Caddy will still function normally and your old configs won't be rejected -- but your metrics will stop being produced unless you enable them.) If you rely on metrics, you can enable them globally in the Caddyfile with the servers global option:
    {
    	servers {
    		metrics
    	}
    }
    
    As with other server-scoped global options, you can selectively customize which servers have metrics enabled (e.g. servers :8080; see the sketch after this list). Note that this change is experimental and might be temporary: if we can reduce the performance impact or find a better way to enable and configure metrics, this could change.
  • The signature of caddy.Context.Logger() has changed, but in a backwards-compatible way. Modules use this function to obtain a logger they can use within Caddy; until now, modules had to pass themselves in as an argument. Now, the context can figure out which module to associate the logger with, so the sole parameter has been made variadic. The parameter may be removed entirely in the future. Plugins should update their code to not pass in a pointer to themselves.
  • Basic auth deprecates scrypt because it was seldom used and error-prone; use bcrypt instead. (#4720)
  • Several changes to experimental servers global options: removed the protocol sub-option, which has been replaced with the protocols sub-option; strict_sni_host is its own separate sub-option; allow_h2c and experimental_http3 have been removed, as both H2C (h2c) and HTTP/3 (h3) can be toggled in protocols (HTTP/3 is now enabled by default and no longer experimental).
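
For instance, a sketch of the per-server form mentioned above, enabling metrics only for the server listening on :8080:

{
	servers :8080 {
		metrics
	}
}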

As a reminder, features, parameters, and APIs marked as experimental are subject to change or removal. We strive to keep breaking changes of stable features to a minimum and gracefully deprecate whenever possible with emphasis in release notes, warnings in logs, etc. Most breaking changes are motivated or necessitated by bugs/regressions, security, or wrong/unclear documentation.

Thank you

As usual, a huge thank-you to all our sponsors and those who contributed both code and feedback. We also acknowledge the many people who participated in discussions and helped others on the forum. Thank you!

New Contributors

Full Changelog: v2.5.2...v2.6.0


  1. Compilation fails with an import cycle. If Caddy core uses any feature of Caddy, it must also be in the core or another package not imported by any modules!

  2. The "validity" of such a URI based on spec compliance is debatable. RFC 9110 says, "distinct resources SHOULD NOT be identified by HTTP URIs that are equivalent after normalization."

  3. I didn't know how to write a FastCGI client back then (I'm still too scared to do much with it myself); Go's standard library implements only the responder role, not the web server (client). Fortunately there was a random repository on BitBucket that was forked from a random repository on Google Code written in 2012 that modified the Go std lib's fcgi package. It was rough around the edges, but with a little TLC we got it to do what we needed. The copyright had the name Junqing Tan in it, which we still retain in our source code to this day.

  4. This is possible with kTLS, but the Go standard library doesn't support it and it's a bit tedious to make it work, although @FiloSottile was successful with his spike code.