For 4.0: Native AMQP 1.0 #9022

Merged: 16 commits into main on Feb 28, 2024
Conversation

@ansd (Member) commented Aug 7, 2023

What

Similar to Native MQTT in #5895, this commit implements Native AMQP 1.0.
By "native", we mean that AMQP 1.0 is no longer proxied via AMQP 0.9.1.

Why

Native AMQP 1.0 comes with the following major benefits:

  1. Similar to Native MQTT, this commit provides better throughput, latency,
    scalability, and resource usage for AMQP 1.0.
    See https://blog.rabbitmq.com/posts/2023/03/native-mqtt for native MQTT improvements.
    See further below for some benchmarks.
  2. Since AMQP 1.0 is no longer limited by the AMQP 0.9.1 protocol,
    this commit makes it possible to implement more AMQP 1.0 features in the future.
    Some features are already implemented in this commit (see next section).
  3. Simpler, easier to understand, and more maintainable code.

Native AMQP 1.0 as implemented in this commit has the
following major benefits compared to AMQP 0.9.1:

  1. Memory and disk alarms will only stop accepting incoming TRANSFER frames.
    New connections can still be created to consume from RabbitMQ, i.e. to empty queues.
  2. Due to 1. (alarms only stop incoming TRANSFERs), there is no longer a need for separate
    connections for publishers and consumers, as we currently recommend for AMQP 0.9.1,
    which potentially halves the number of physical TCP connections.
  3. When a single connection sends to multiple target queues, a single
    slow target queue won't block the entire connection.
    The publisher can still send data quickly to all other target queues.
  4. A publisher can request publisher confirmation on a per-message basis.
    In AMQP 0.9.1, publisher confirms are configured per channel only.
  5. Consumers can change their "prefetch count" dynamically, which isn't
    possible in our AMQP 0.9.1 implementation. See How global and consumer-specific QoS updates correspond #10174
  6. AMQP 1.0 is an extensible protocol.

This commit also fixes dozens of bugs present in the AMQP 1.0 plugin in
RabbitMQ 3.x - most of which cannot be backported due to the complexity
and limitations of the old 3.x implementation.

This commit contains breaking changes and is therefore targeted for RabbitMQ 4.0.

Implementation details

  1. Breaking change: With Native AMQP, the settings

amqp1_0.convert_amqp091_headers_to_app_props = false | true (default false)
(convert AMQP 0.9.1 message headers to application properties for an AMQP 1.0 consumer)
amqp1_0.convert_app_props_to_amqp091_headers = false | true (default false)
(convert AMQP 1.0 application properties to AMQP 0.9.1 headers)

no longer take effect, because we always convert according to the message container conversions.
For example, AMQP 0.9.1 x-headers will go into message annotations instead of application properties.
Also, false won't be respected, since we always convert the headers with message containers.

  1. Remove rabbit_queue_collector

rabbit_queue_collector is responsible for synchronously deleting
exclusive queues. Since the AMQP 1.0 plugin never creates exclusive
queues, rabbit_queue_collector doesn't need to be started in the first
place. This will save 1 Erlang process per AMQP 1.0 connection.

  1. 7 processes per connection + 1 process per session in this commit, instead of
    7 processes per connection + 15 processes per session in 3.x.
    The supervision hierarchy was redesigned.

  2. Use 1 writer process per AMQP 1.0 connection
    AMQP 0.9.1 uses a separate rabbit_writer Erlang process per AMQP 0.9.1 channel.
    Prior to this commit, AMQP 1.0 used a separate rabbit_amqp1_0_writer process per AMQP 1.0 session.
    Advantage of a single writer process per session (prior to this commit):

  • High parallelism for serialising packets if multiple sessions within
    a connection write heavily at the same time.

This commit uses a single writer process per AMQP 1.0 connection that is
shared across all AMQP 1.0 sessions.
Advantages of single writer proc per connection (this commit):

  • Lower memory usage with hundreds of thousands of AMQP 1.0 sessions
  • Less TCP and IP header overhead, given that the single writer process
    can accumulate bytes across all sessions before flushing the socket.

In other words, this commit decides that a reader / writer process pair
per AMQP 1.0 connection is good enough for bi-directional TRANSFER flows.
Having a writer per session is too heavy.
We still ensure high throughput by having separate reader, writer, and
session processes.

  1. Transform rabbit_amqp1_0_writer into a gen_server
    Why:
    Prior to this commit, clicking on the AMQP 1.0 writer process in
    observer crashed the process.
    Rather than handling all the debug messages of the sys module in a
    special OTP process, it's better to implement a gen_server.
    A special OTP process has no advantage over a gen_server for the
    AMQP 1.0 writer, and gen_server also provides cleaner format_status output.

How:
Message callbacks return a timeout of 0.
After all messages in the inbox are processed, the timeout message is
handled by flushing any pending bytes.
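
A minimal, self-contained sketch of this flush-on-timeout mechanism (module and function names are illustrative, not the actual rabbit_amqp1_0_writer code): returning a timeout of 0 makes the timeout message fire only once the inbox is empty, so bytes from many casts are flushed in a single socket write.

```
-module(batching_writer).
-behaviour(gen_server).
-export([start_link/1, send/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Socket) -> gen_server:start_link(?MODULE, Socket, []).

send(Writer, Bytes) -> gen_server:cast(Writer, {send, Bytes}).

init(Socket) -> {ok, #{socket => Socket, pending => []}}.

handle_call(_Request, _From, State) ->
    {reply, ok, State, 0}.

%% Accumulate bytes; the timeout of 0 fires only once no further
%% message is waiting in the inbox.
handle_cast({send, Bytes}, State = #{pending := Pending}) ->
    {noreply, State#{pending := [Bytes | Pending]}, 0}.

%% Inbox drained: flush everything accumulated so far in one write.
handle_info(timeout, State = #{socket := Socket, pending := Pending}) ->
    ok = gen_tcp:send(Socket, lists:reverse(Pending)),
    {noreply, State#{pending := []}}.
```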

  1. Remove stats timer from writer
    AMQP 1.0 connections haven't emitted any stats previously.

  2. When there are contiguous queue confirmations in the session process
    mailbox, batch them: when the confirmations are sent to the publisher, a
    single DISPOSITION frame is sent for contiguously confirmed delivery
    IDs (a minimal sketch of this batching follows the list below).
    This approach should be good enough. However, it's suboptimal in
    scenarios where contiguous delivery IDs that need confirmations are rare,
    for example:

  • There are multiple links in the session with different sender
    settlement modes and the sender publishes across these links interleaved.
  • The sender settlement mode is mixed and the sender publishes settled
    and unsettled TRANSFERs interleaved.
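
A minimal sketch of the batching idea, assuming confirmed delivery IDs arrive as a non-empty sorted list (the map-based frame representation is illustrative; a DISPOSITION frame carries a contiguous range in its first and last fields):

```
%% Collapse the leading run of contiguous delivery IDs into a single
%% DISPOSITION covering [First, Last]; the remainder needs further
%% frames. Serial number wrap-around is ignored for brevity.
batch_confirms([First | Rest]) ->
    batch_confirms(Rest, First, First).

batch_confirms([Id | Rest], First, Last) when Id =:= Last + 1 ->
    batch_confirms(Rest, First, Id);
batch_confirms(Remaining, First, Last) ->
    Disposition = #{first => First, last => Last,
                    settled => true, state => accepted},
    {Disposition, Remaining}.
```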
  1. Introduce credit API v2
    Why:
    The AMQP 0.9.1 credit extension, which is to be removed in 4.0, was poorly
    designed: basic.credit is a synchronous call into the queue process that
    blocks the entire AMQP 1.0 session process.

How:
Change the interactions between queue clients and queue server
implementations:

  • Clients only request a credit reply if the FLOW's echo field is set.
  • Include all link flow control state held by the queue process in a
    new credit_reply queue event:
    • available after the queue sends any deliveries
    • link-credit after the queue sends any deliveries
    • drain, which allows us to combine the old queue events
      send_credit_reply and send_drained into a single new queue event
      credit_reply.
  • Include the consumer tag in the credit_reply queue event so that
    the AMQP 1.0 session process can process credit replies
    asynchronously.

Link flow control state delivery-count also moves to the queue processes.
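
A hedged sketch of how the session process might consume the new event (the exact event layout and the update_link/6 helper are illustrative):

```
%% All link flow control state arrives in one asynchronous queue event,
%% keyed by consumer tag, so no synchronous basic.credit-style call into
%% the queue process is needed. update_link/6 is a hypothetical helper.
handle_queue_event({credit_reply, CTag, DeliveryCount, LinkCredit,
                    Available, Drain}, State) ->
    update_link(CTag, DeliveryCount, LinkCredit, Available, Drain, State).
```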

The new interactions are hidden behind feature flag credit_api_v2 to
allow for rolling upgrades from 3.13 to 4.0.

  1. Use serial number arithmetic in quorum queues and session process.

  2. Completely bypass the rabbit_limiter module for AMQP 1.0
    flow control. The goal is to eventually remove the rabbit_limiter module
    in 4.0 since AMQP 0.9.1 global QoS will be unsupported in 4.0. This
    commit lifts the AMQP 1.0 link flow control logic out of rabbit_limiter
    into rabbit_queue_consumers.

  3. Fix credit bug for streams:
    AMQP 1.0 settlements shouldn't top up link credit,
    only FLOW frames should top up link credit.

  4. Allow sender settle mode unsettled for streams
    since AMQP 1.0 acknowledgements to streams are no-ops (currently).

  5. Fix AMQP 1.0 client bugs
    Auto-renewing credits should not be tied to settling TRANSFERs.
    Remove the field link_credit_unsettled, as it was wrong and confusing.
    Prior to this commit, auto renewal did not work when the sender used
    sender settlement mode settled.

  6. Fix AMQP 1.0 client bugs
    An outdated Link record was passed to function auto_flow/2.

  7. Use the osiris chunk iterator
    Only hold messages of uncompressed sub-batches in memory if the consumer
    doesn't have sufficient credit.
    Compressed sub-batches are skipped for non-Stream-protocol consumers.

  8. Fix incoming link flow control
    Always use confirms between AMQP 1.0 queue clients and queue servers.
    As already done internally by rabbit_fifo_client and
    rabbit_stream_queue, use confirms for classic queues as well.

  9. Include the link handle in the correlation when publishing messages to target queues
    so that the session process can correlate confirms from target queues with
    incoming links.

  10. Only grant more credits to a publisher if the publisher no longer has sufficient
    credit and there are not too many unconfirmed messages on the link.

  11. Completely ignore block and unblock queue actions and RabbitMQ credit flow
    between classic queue process and session process.

  12. Link flow control is independent between links.
    A client can refer to a queue or to an exchange with multiple
    dynamically added target queues. Multiple incoming links can also fan
    in to the same queue. However the link topology looks, this
    commit ensures that each link is granted more credits only if that link
    isn't overloaded.

  13. A connection or a session can send to many different queues.
    In AMQP 0.9.1, a single slow queue will cause first the entire channel and
    then the entire connection to be blocked.
    This commit makes sure that a single slow queue on one link won't slow
    down sending on other links.
    For example, with link A sending to a local classic queue and
    link B sending to a 5-replica quorum queue, link B will naturally
    be granted credits more slowly than link A. So, despite the quorum queue being
    slower in confirming messages, the same AMQP 1.0 connection and session
    can still pump data very fast into the classic queue.

  14. If a cluster-wide memory or disk alarm occurs,
    each session sends a FLOW with incoming-window set to 0 to the sending client.
    If sending clients don't obey, the client is forcibly disconnected.

If the cluster-wide memory alarm clears:
Each session resumes with a FLOW defaulting to the initial incoming-window.
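
A sketch of the FLOW announced under alarm conditions (the map-based frame representation is illustrative):

```
%% When a cluster-wide memory or disk alarm fires, the session stops
%% accepting TRANSFERs by advertising a closed incoming window.
alarm_flow(NextIncomingId, NextOutgoingId, OutgoingWindow) ->
    #{next_incoming_id => NextIncomingId,
      incoming_window  => 0,   %% accept no further TRANSFER frames
      next_outgoing_id => NextOutgoingId,
      outgoing_window  => OutgoingWindow}.
```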

  1. All operations apart from publishing TRANSFERs to RabbitMQ can continue during cluster-wide alarms,
    specifically attaching consumers and consuming, i.e. emptying queues.
    There is no need for separate AMQP 1.0 connections for publishers and consumers, as recommended for our AMQP 0.9.1 implementation.

  2. Flow control summary:

  • If a queue becomes the bottleneck, that's solved by slowing down individual sending links (AMQP 1.0 link flow control).
  • If a session becomes the bottleneck (more unlikely), that's solved by AMQP 1.0 session flow control.
  • If the connection becomes the bottleneck, it naturally won't read fast enough from the socket, causing TCP backpressure to be applied.
    Nowhere on the incoming AMQP 1.0 message path will RabbitMQ's internal credit-based flow control (i.e. module credit_flow) be used.
  1. Register AMQP sessions
    Prefer local-only pg over our custom pg_local implementation, as
    pg is a better process group implementation than pg_local.
    pg_local was identified as a bottleneck in tests where many MQTT clients were disconnected at once.

  2. Start a local-only pg when Rabbit boots:

A scope can be kept local-only by using a scope name that is unique cluster-wide, e.g. the node name:
pg:start_link(node()).
Register AMQP 1.0 connections and sessions with pg.
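
A minimal sketch of this registration using the standard OTP pg API (the group name amqp_sessions is illustrative):

```
%% The scope name node() is unique cluster-wide, so the scope stays
%% local-only and membership is never replicated to other nodes.
start_pg_scope() ->
    {ok, _Pid} = pg:start_link(node()).

register_session(SessionPid) ->
    ok = pg:join(node(), amqp_sessions, SessionPid).

list_local_sessions() ->
    pg:get_local_members(node(), amqp_sessions).
```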

In the future we should remove pg_local and instead use the new local-only
pg for all registered processes, such as AMQP 0.9.1 connections and channels.

  1. Requeue messages if a link is detached
    Although the spec allows settling delivery IDs on detached links, RabbitMQ does not respect the 'closed'
    field of the DETACH frame and therefore handles every DETACH frame as closed. Since the link is closed,
    we expect every outstanding delivery to be requeued.
    In addition to consumer cancellation, detaching a link therefore causes in-flight deliveries to be requeued.
    Note that this behaviour differs from mere consumer cancellation in AMQP 0.9.1:
    "After a consumer is cancelled there will be no future deliveries dispatched to it. Note that there can
    still be "in flight" deliveries dispatched previously. Cancelling a consumer will neither discard nor requeue them."
    [https://www.rabbitmq.com/consumers.html#unsubscribing]
    An AMQP receiver can first drain and then detach to avoid having "in flight" deliveries requeued.

  2. Init AMQP session with BEGIN frame
    Similar to how there can't be an MQTT processor without a CONNECT
    frame, there can't be an AMQP session without a BEGIN frame.
    This allows having strict dialyzer types for session flow control
    fields (i.e. not allowing 'undefined').

  3. Move serial_number to the AMQP 1.0 common lib
    so that it can be used by both the AMQP 1.0 server and client.

  4. Fix AMQP client to do serial number arithmetic.

  5. AMQP client: Differentiate between delivery-id and transfer-id for better
    understandability.

  6. Fix link flow control in classic queues
    This commit fixes

java -jar target/perf-test.jar -ad false -f persistent -u cq -c 3000 -C 1000000 -y 0

followed by

./omq -x 0 amqp -T /queue/cq -D 1000000 --amqp-consumer-credits 2

Prior to this commit (and on RabbitMQ 3.x), consuming would halt after around
8,000 - 10,000 messages.

The bug was that in-flight messages from the classic queue process to the
session process were not taken into account when topping up credit to
the classic queue process.
Fixes #2597

The solution to this bug (and a much cleaner design anyway, independent of
this bug) is that queues should hold all link flow control state, including
the delivery-count.

Hence, when credit API v2 is used, the delivery-count is held by the
classic queue process, quorum queue process, and stream queue client
instead of being managed in the session.
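
The delivery-count is a 32-bit RFC 1982 serial number (see the serial number arithmetic items above); a minimal sketch of the serial arithmetic involved (function names are illustrative):

```
-define(MOD, 16#100000000).  %% 2^32

%% Addition wraps around at 2^32.
serial_add(S, N) when N >= 0 ->
    (S + N) rem ?MOD.

%% RFC 1982 comparison; the case where the two numbers are exactly
%% 2^31 apart is ambiguous per the RFC and treated as 'greater' here.
serial_compare(A, A) -> equal;
serial_compare(A, B) ->
    case ((B - A) rem ?MOD + ?MOD) rem ?MOD < ?MOD div 2 of
        true  -> less;     %% A precedes B, e.g. 16#FFFFFFFF precedes 0
        false -> greater
    end.
```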

  1. The double-level crediting between (a) the session process and
    rabbit_fifo_client, and (b) rabbit_fifo_client and rabbit_fifo was
    removed. Therefore, instead of managing 3 separate delivery-counts (i. session,
    ii. rabbit_fifo_client, iii. rabbit_fifo), only 1 delivery-count is used,
    in rabbit_fifo. This is a big simplification.

  2. This commit fixes quorum queues without bumping the machine version
    or introducing new rabbit_fifo commands.

Whether credit API v2 is used is determined solely at link attachment time,
depending on whether the feature flag credit_api_v2 is enabled.

Even when that feature flag is enabled later on, the link will
keep using credit API v1 until it is detached (or the node is shut down).

Eventually, after the feature flag credit_api_v2 has been enabled and a
subsequent rolling upgrade, all links will use credit API v2.

This approach is safe and simple.
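
A hedged sketch of pinning the version at attach time (rabbit_feature_flags:is_enabled/1 is RabbitMQ's real feature flag check; the surrounding code is illustrative):

```
%% Decided once per link when the ATTACH frame is handled; the result is
%% stored in the link state and never re-evaluated, so an already
%% attached link keeps using v1 even after the flag is enabled.
credit_api_version_at_attach() ->
    case rabbit_feature_flags:is_enabled(credit_api_v2) of
        true  -> credit_api_v2;
        false -> credit_api_v1
    end.
```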

The 2 alternatives for moving the delivery-count from the session process to the
queue processes would have been:

i. An explicit feature flag credit_api_v2 migration function

  • Could use a gen_server:call and only finish migration once all delivery-counts were migrated.
    Cons:
  • An extra new message format just for migration is required.
  • Risky, as the migration will fail if a target queue doesn't reply.

ii. The session always includes DeliveryCountSnd when crediting to the queue.
Cons:

  • 2 delivery counts would be held simultaneously in the session proc and queue proc;
    could be solved by deleting the session proc's delivery-count upon credit-reply.
  • What happens if the receiver doesn't provide credit for a very long time? Is that a problem?
  1. Support stream filtering in AMQP 1.0 (by @acogoluegnes)
    Use the x-stream-filter-value message annotation
    to carry the filter value in a published message.
    Use the rabbitmq:stream-filter and rabbitmq:stream-match-unfiltered
    filters when creating a receiver that wants to filter
    out messages from a stream.

  2. Remove credit extension from AMQP 0.9.1 client

  3. Support maintenance mode closing AMQP 1.0 connections.

  4. Remove AMQP 0.9.1 client dependency from AMQP 1.0 implementation.

  5. Move the AMQP 1.0 plugin to the core. AMQP 1.0 is enabled by default.
    The old rabbitmq_amqp1_0 plugin is kept as a no-op plugin so that deployment
    tools that execute the following do not fail:

rabbitmq-plugins enable rabbitmq_amqp1_0
rabbitmq-plugins disable rabbitmq_amqp1_0
  1. Breaking change: Remove CLI command rabbitmqctl list_amqp10_connections.
    Instead, list both AMQP 0.9.1 and AMQP 1.0 connections in list_connections:
rabbitmqctl list_connections protocol
Listing connections ...
protocol
{1, 0}
{0,9,1}

Benchmarks

Throughput & Latency

Setup:

  • Single node Ubuntu 22.04
  • Erlang 26.1.1

Start RabbitMQ:

make run-broker PLUGINS="rabbitmq_management rabbitmq_amqp1_0" FULL=1 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 3"

Predeclare durable classic queue cq1, durable quorum queue qq1, durable stream queue sq1.

Start client:
https://github.com/ssorj/quiver
https://hub.docker.com/r/ssorj/quiver/tags (digest 453a2aceda64)

docker run -it --rm --add-host host.docker.internal:host-gateway ssorj/quiver:latest
bash-5.1# quiver --version
quiver 0.4.0-SNAPSHOT
  1. Classic queue
quiver //host.docker.internal//amq/queue/cq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000

This commit:

Count ............................................. 1,000,000 messages
Duration ............................................... 73.8 seconds
Sender rate .......................................... 13,548 messages/s
Receiver rate ........................................ 13,547 messages/s
End-to-end rate ...................................... 13,547 messages/s

Latencies by percentile:

          0% ........ 0 ms       90.00% ........ 9 ms
         25% ........ 2 ms       99.00% ....... 14 ms
         50% ........ 4 ms       99.90% ....... 17 ms
        100% ....... 26 ms       99.99% ....... 24 ms

RabbitMQ 3.x (main branch as of 30 January 2024):

---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1        130,814      65,342        6     73.6       2.1          3,217       1,607        0      8.0       511
     4.1        163,580      16,367        2     74.1       4.1          3,217           0        0      8.0         0
     6.1        229,114      32,767        3     74.1       6.1          3,217           0        0      8.0         0
     8.1        261,880      16,367        2     74.1       8.1         67,874      32,296        8      8.2     7,662
    10.1        294,646      16,367        2     74.1      10.1         67,874           0        0      8.2         0
    12.1        360,180      32,734        3     74.1      12.1         67,874           0        0      8.2         0
    14.1        392,946      16,367        3     74.1      14.1         68,604         365        0      8.2    12,147
    16.1        458,480      32,734        3     74.1      16.1         68,604           0        0      8.2         0
    18.1        491,246      16,367        2     74.1      18.1         68,604           0        0      8.2         0
    20.1        556,780      32,767        4     74.1      20.1         68,604           0        0      8.2         0
    22.1        589,546      16,375        2     74.1      22.1         68,604           0        0      8.2         0
receiver timed out
    24.1        622,312      16,367        2     74.1      24.1         68,604           0        0      8.2         0
quiver:  error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/cq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-otujr23y' returned non-zero exit status 1.
  1. Quorum queue:
quiver //host.docker.internal//amq/queue/qq1 --durable --count 1m --duration 10m --body-size 12 --credit 1000

This commit:

Count ............................................. 1,000,000 messages
Duration .............................................. 101.4 seconds
Sender rate ........................................... 9,867 messages/s
Receiver rate ......................................... 9,868 messages/s
End-to-end rate ....................................... 9,865 messages/s

Latencies by percentile:

          0% ....... 11 ms       90.00% ....... 23 ms
         25% ....... 15 ms       99.00% ....... 28 ms
         50% ....... 18 ms       99.90% ....... 33 ms
        100% ....... 49 ms       99.99% ....... 47 ms

RabbitMQ 3.x:

---------------------- Sender -----------------------  --------------------- Receiver ----------------------  --------
Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Time [s]      Count [m]  Rate [m/s]  CPU [%]  RSS [M]  Lat [ms]
-----------------------------------------------------  -----------------------------------------------------  --------
     2.1        130,814      65,342        9     69.9       2.1         18,430       9,206        5      7.6     1,221
     4.1        163,580      16,375        5     70.2       4.1         18,867         218        0      7.6     2,168
     6.1        229,114      32,767        6     70.2       6.1         18,867           0        0      7.6         0
     8.1        294,648      32,734        7     70.2       8.1         18,867           0        0      7.6         0
    10.1        360,182      32,734        6     70.2      10.1         18,867           0        0      7.6         0
    12.1        425,716      32,767        6     70.2      12.1         18,867           0        0      7.6         0
receiver timed out
    14.1        458,482      16,367        5     70.2      14.1         18,867           0        0      7.6         0
quiver:  error: PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/lib/quiver/python/quiver/pair.py", line 144, in run
    _plano.wait(receiver, check=True)
  File "/usr/local/lib/quiver/python/plano/main.py", line 1243, in wait
    raise PlanoProcessError(proc)
plano.main.PlanoProcessError: Command 'quiver-arrow receive //host.docker.internal//amq/queue/qq1 --impl qpid-proton-c --duration 10m --count 1m --rate 0 --body-size 12 --credit 1000 --transaction-size 0 --timeout 10 --durable --output /tmp/quiver-b1gcup43' returned non-zero exit status 1.
  1. Stream:
quiver-arrow send //host.docker.internal//amq/queue/sq1 --durable --count 1m -d 10m --summary --verbose

This commit:

Count ............................................. 1,000,000 messages
Duration ................................................ 8.7 seconds
Message rate ........................................ 115,154 messages/s

RabbitMQ 3.x:

Count ............................................. 1,000,000 messages
Duration ............................................... 21.2 seconds
Message rate ......................................... 47,232 messages/s

Memory usage

Start RabbitMQ:

ERL_MAX_PORTS=3000000 RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 3000000 +S 6" make run-broker PLUGINS="rabbitmq_amqp1_0" FULL=1 RABBITMQ_CONFIG_FILE="rabbitmq.conf"
/bin/cat rabbitmq.conf

tcp_listen_options.sndbuf  = 2048
tcp_listen_options.recbuf  = 2048
vm_memory_high_watermark.relative = 0.95
vm_memory_high_watermark_paging_ratio = 0.95
loopback_users = none

Create 50k connections with 2 sessions per connection, i.e. 100k sessions in total:

package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/go-amqp"
)

func main() {
	for i := 0; i < 50000; i++ {
		conn, err := amqp.Dial(context.TODO(), "amqp://nuc", &amqp.ConnOptions{SASLType: amqp.SASLTypeAnonymous()})
		if err != nil {
			log.Fatal("dialing AMQP server:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
		_, err = conn.NewSession(context.TODO(), nil)
		if err != nil {
			log.Fatal("creating AMQP session:", err)
		}
	}
	log.Println("opened all connections")
	time.Sleep(5 * time.Hour)
}

This commit:

erlang:memory().
[{total,4586376480},
 {processes,4025898504},
 {processes_used,4025871040},
 {system,560477976},
 {atom,1048841},
 {atom_used,1042841},
 {binary,233228608},
 {code,21449982},
 {ets,108560464}]

erlang:system_info(process_count).
450289

7 procs per connection + 1 proc per session.
(7 + 2*1) * 50,000 = 450,000 procs

RabbitMQ 3.x:

erlang:memory().
[{total,15168232704},
 {processes,14044779256},
 {processes_used,14044755120},
 {system,1123453448},
 {atom,1057033},
 {atom_used,1052587},
 {binary,236381264},
 {code,21790238},
 {ets,391423744}]

erlang:system_info(process_count).
1850309

7 procs per connection + 15 per session
(7 + 2*15) * 50,000 = 1,850,000 procs

50k connections + 100k sessions require
with this commit: 4.5 GB
in RabbitMQ 3.x: 15 GB

Future work

Ideally before 4.0:

  1. More efficient parser and serializer
  2. TODO in mc_amqp: Do not store the parsed message on disk.
  3. Implement both AMQP HTTP extension and AMQP management extension to allow AMQP
    clients to create RabbitMQ objects (queues, exchanges, ...): Enable AMQP 1.0 clients to manage topologies #10559

@mergify mergify bot added the bazel label Aug 7, 2023
@ansd ansd force-pushed the native-amqp branch 9 times, most recently from 48e2119 to adc4f44 on August 8, 2023 12:13
@mergify mergify bot added the make label Aug 24, 2023
@ansd ansd force-pushed the native-amqp branch 2 times, most recently from ba51939 to 84a84c7 on August 25, 2023 14:54
@michaelklishin michaelklishin added this to the 4.0.0 milestone Sep 4, 2023
@ansd ansd changed the title from PoC: Native AMQP 1.0 to Native AMQP 1.0 on Sep 7, 2023
@ansd ansd force-pushed the native-amqp branch 5 times, most recently from 1dc59f6 to 4eb68f4 on September 12, 2023 08:40
@ansd ansd force-pushed the native-amqp branch 6 times, most recently from 7ae5000 to 25e94d3 on September 17, 2023 14:55
ansd added 6 commits February 23, 2024 08:53
```
./omq amqp -t /queue --amqp-subject foo -C 1
```
crashed with
```
reason: {badarg,
            [{erlang,list_to_binary,
                 [undefined],[]},
             {rabbit_amqp_session,ensure_terminus,5,
                 [{file,"rabbit_amqp_session.erl"},{line,2046}]},
             {rabbit_amqp_session,ensure_target,3,
                 [{file,"rabbit_amqp_session.erl"},{line,1705}]},
             {rabbit_amqp_session,handle_control,2,
                 [{file,"rabbit_amqp_session.erl"},{line,715}]},
             {rabbit_amqp_session,handle_cast,2,
                 [{file,"rabbit_amqp_session.erl"},{line,331}]},
             {gen_server,try_handle_cast,3,
                 [{file,"gen_server.erl"},{line,1121}]},
             {gen_server,handle_msg,6,
                 [{file,"gen_server.erl"},{line,1183}]},
             {proc_lib,init_p_do_apply,3,
                 [{file,"proc_lib.erl"},{line,241}]}]}
```
"In the event that the receiving link endpoint has not yet seen the
initial attach frame from the sender this field MUST NOT be set."
[2.7.4]

Since we (the server, i.e. the receiving link endpoint) have already seen
the initial attach frame from the sender, set the delivery-count.
to be set in both target address and subject.
What?
For credit API v1, increase the outgoing delivery-count as soon as the
message is scheduled for delivery, that is before the message is queued
in the session's outgoing_pending queue.

Why?
1. More correct for credit API v1 in case a FLOW is received
   for an outgoing link topping up credit while an outgoing transfer on
   the same link is queued in outgoing_pending. For the server's credit
   calculation to be correct, it doesn't matter whether the outgoing
   in-flight message travels through the network, is queued in TCP
   buffers, processed by the writer, or just queued in the session's
   outgoing_pending queue.
2. Higher performance, as no map update is performed for credit API v2
   in send_pending()
3. Simplifies the code
What?

To not risk any regressions, keep the behaviour of RabbitMQ 3.x
where channel processes and connection helper processes such as
rabbit_queue_collector and rabbit_heartbeat are terminated after the
rabbit_reader process.

For example, when RabbitMQ terminates with SIGTERM, we want
exclusive queues to be deleted synchronously (as in 3.x).

Prior to this commit:
1. java -jar target/perf-test.jar -x 0 -y 1
2. ./sbin/rabbitmqctl stop_app
resulted in the following crash:
```
crasher:
  initial call: rabbit_reader:init/2
  pid: <0.2389.0>
  registered_name: []
  exception exit: {noproc,
                      {gen_server,call,[<0.2391.0>,delete_all,infinity]}}
    in function  gen_server:call/3 (gen_server.erl, line 419)
    in call from rabbit_reader:close_connection/1 (rabbit_reader.erl, line 683)
    in call from rabbit_reader:send_error_on_channel0_and_close/4 (rabbit_reader.erl, line 1668)
    in call from rabbit_reader:handle_dependent_exit/3 (rabbit_reader.erl, line 710)
    in call from rabbit_reader:mainloop/4 (rabbit_reader.erl, line 530)
    in call from rabbit_reader:run/1 (rabbit_reader.erl, line 452)
    in call from rabbit_reader:start_connection/4 (rabbit_reader.erl, line 351)
```
because rabbit_queue_collector was terminated before rabbit_reader.
This commit fixes this crash.

How?

Any Erlang supervisor, including the rabbit_connection_sup supervisor,
terminates its children in the opposite of the start order.
Since we want channel and queue collector processes - children of
rabbit_connection_helper_sup - to be terminated after the
reader process, we must start rabbit_connection_helper_sup before the
reader process.

Since rabbit_connection_sup - the ranch_protocol implementation - does
not yet know whether it will supervise an AMQP 0.9.1 or AMQP 1.0
connection, it creates a rabbit_connection_helper_sup for each AMQP protocol
version, removing the superfluous one as soon as protocol version negotiation is
completed. Spawning and deleting this additional process has a negligible
effect on performance.

The whole problem is that rabbit_connection_helper_sup differs in
its supervisor flags for AMQP 0.9.1 and AMQP 1.0 when it is started,
because for Native AMQP 1.0 in 4.0 we remove the unnecessary
rabbit_amqp1_0_session_sup_sup supervisor level.
Thereby we achieve our goal:
* in Native AMQP 1.0, 1 additional Erlang process is created per session
* in AMQP 1.0 in 3.x, 15 additional Erlang processes are created per session
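
A minimal sketch of why start order matters (the child specs are illustrative, not the actual rabbit_connection_sup code):

```
%% Children start left-to-right and terminate right-to-left: the reader
%% stops first, so channels and the queue collector (children of the
%% helper supervisor) outlive it during shutdown, as in 3.x.
init([]) ->
    SupFlags  = #{strategy => one_for_all, intensity => 0, period => 1},
    HelperSup = #{id    => helper_sup,
                  start => {rabbit_connection_helper_sup, start_link, []},
                  type  => supervisor},
    Reader    = #{id    => reader,
                  start => {rabbit_reader, start_link, []},
                  type  => worker},
    {ok, {SupFlags, [HelperSup, Reader]}}.
```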
What?

Protect the receiving application from being overloaded with new messages
while it is still processing existing messages, if the auto credit renewal
feature of the Erlang AMQP 1.0 client library is used.

This feature can therefore be thought of as the equivalent of a prefetch window
in AMQP 0.9.1 or the MQTT 5.0 property Receive Maximum.

How?

The credit auto renewal feature in RabbitMQ 3.x was wrongly implemented.
This commit takes the same approach as in the server:
the incoming_unsettled map is held in the link instead of in the session
to accurately and quickly determine the number of unsettled messages for
a receiving link.

The amqp10_client lib will grant more credits to the sender when the sum
of remaining link credits and the number of unsettled deliveries falls below
the threshold RenewWhenBelow (a sketch follows below).

This avoids maintaining additional state like the `link_credit_unsettled`
map or an alternative delivery_count_settled sequence number, which is more
complex to implement correctly.
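
A hedged sketch of that renewal condition (the state layout and the grant_credit/2 helper are illustrative):

```
%% Top up credit only when the remaining link credit plus in-flight
%% unsettled deliveries falls below the threshold.
%% grant_credit/2 is a hypothetical helper that sends a FLOW frame.
maybe_renew_credit(#{link_credit := Credit,
                     incoming_unsettled := Unsettled} = Link,
                   RenewWhenBelow, Increment) ->
    case Credit + map_size(Unsettled) < RenewWhenBelow of
        true  -> grant_credit(Link, Increment);
        false -> Link
    end.
```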

This commit breaks the amqp10_client_session:disposition/6 API:
it forces the client application to only range-settle for a
given link, i.e. not across multiple links on a given session at once.
The latter is allowed according to the AMQP spec.
Remove bean `logQuery` as described in
spring-attic/spring-native#1708 (comment)
to avoid ActiveMQ start up failure with reason
```
java.lang.ClassNotFoundException: io.fabric8.insight.log.log4j.Log4jLogQuery
```
ansd and others added 5 commits February 23, 2024 12:47
Fixes test
```
./mvnw test -Dtest=ClientTest#largeMessageWithStreamSender
```
in client rabbitmq-java-model
As this disables the internal credit flow accounting that isn't
used anyway.
This aligns flow control behaviour for AMQP across all queue types.

All flow is controlled by the AMQP credit flow gestures rather than
relying on an additional, parallel mechanism.

This will allow us to adjust the flow control approach for all
queue types and expect consistent results.
As these are the most likely to run up a backlog of
rabbit messages in their mailboxes, and we do not want to include
these in GC runs unnecessarily.
@kjnilsson (Contributor) left a comment:

LGTM!

A couple of suggestions

Review comments on deps/rabbit/src/rabbit_amqp_session.erl (outdated, resolved)
@michaelklishin michaelklishin changed the title from Post 3.13: Native AMQP 1.0 to For 4.0: Native AMQP 1.0 on Feb 27, 2024
@ansd ansd merged commit afd28ba into main Feb 28, 2024
21 checks passed
@ansd ansd deleted the native-amqp branch February 28, 2024 13:15
ansd added a commit that referenced this pull request Mar 25, 2024
 ## What?

* Allow AMQP 1.0 clients to dynamically create and delete RabbitMQ
  topologies (exchanges, queues, bindings).
* Provide an Erlang AMQP 1.0 client that manages topologies.

 ## Why?

Today, RabbitMQ topologies can be created via:
* [Management HTTP API](https://www.rabbitmq.com/docs/management#http-api)
  (including Management UI and
  [messaging-topology-operator](https://github.com/rabbitmq/messaging-topology-operator))
* [Definition Import](https://www.rabbitmq.com/docs/definitions#import)
* AMQP 0.9.1 clients

Up to RabbitMQ 3.13 the RabbitMQ AMQP 1.0 plugin auto creates queues
and bindings depending on the terminus [address
format](https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing).

Such implicit creation of topologies is limiting and obscure.
For some address formats, queues will be created, but not deleted.

Some of RabbitMQ's success is due to its flexible routing topologies
that AMQP 0.9.1 clients can create and delete dynamically.

This commit allows dynamic management of topologies for AMQP 1.0 clients.
This commit builds on top of Native AMQP 1.0 (PR #9022) and will be
available in RabbitMQ 4.0.

 ## How?

This commits adds the following management operations for AMQP 1.0 clients:
* declare queue
* delete queue
* purge queue
* bind queue to exchange
* unbind queue from exchange
* declare exchange
* delete exchange
* bind exchange to exchange
* unbind exchange from exchange

Hence, at least the AMQP 0.9.1 management operations are supported for
AMQP 1.0 clients.

In addition the operation
* get queue

is provided which - similar to `declare queue` - returns queue
information including the current leader and replicas.
This allows clients to publish or consume locally on the node that hosts
the queue.

Compared to AMQP 0.9.1 whose commands and command fields are fixed, the
new AMQP Management API is extensible: New operations and new fields can
easily be added in the future.

There are different design options how management operations could be
supported for AMQP 1.0 clients:
1. Use a special exchange type as done in https://github.com/rabbitmq/rabbitmq-management-exchange
  This has the advantage that any protocol client (e.g. also STOMP clients) could
  dynamically manage topologies. However, a special exchange type is the wrong abstraction.
2. Clients could send "special" messages with special headers that the broker interprets.

This commit opts for a variation of the 2nd option, using a more
standardized way by re-using a subset of the following latest AMQP 1.0 extension
specifications:
* [AMQP Request-Response Messaging with Link Pairing Version 1.0 - Committee Specification 01](https://docs.oasis-open.org/amqp/linkpair/v1.0/cs01/linkpair-v1.0-cs01.html) (February 2021)
* [HTTP Semantics and Content over AMQP Version 1.0 - Working Draft 06](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65571) (July 2019)
* [AMQP Management Version 1.0 - Working Draft 16](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65575) (July 2019)

An important goal is to keep the interaction between AMQP 1.0 client and RabbitMQ
simple to increase usage, development and adoptability of future RabbitMQ AMQP 1.0
client library wrappers.

The AMQP 1.0 client has to create a link pair to the special `/management` node.
This allows the client to send and receive from the management node.
Similar to AMQP 0.9.1, there is no need for a reply queue since the reply
will be sent directly to the client.

Requests and responses are modelled via HTTP, but sent via AMQP using
the `HTTP Semantics and Content over AMQP` extension (henceforth `HTTP
over AMQP` extension).

This commit tries to follow the `HTTP over AMQP` extension as much as
possible but deviates where this draft spec doesn't make sense.

The projected mode §4.1 is used as opposed to tunneled mode §4.2.
A named relay `/management` is used (§6.3) where the message field `to` is the URL.

Deviations are:
* §3.1 mandates that URIs are not encoded in an AMQP message.
  However, we percent encode URIs in the AMQP message. Otherwise there
  is for example no way to distinguish a `/` in a queue name from the
  URI path separator `/`.
* §4.1.4 mandates a data section. This commit uses an amqp-value section
  as it's a better fit given that the content is AMQP encoded data.
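
For example (illustrative), a queue named q/1 would be addressed with the slash percent-encoded so that it cannot be confused with a path separator:

```
/vhosts/:vhost/queues/q%2F1   (queue named "q/1")
/vhosts/:vhost/queues/q/1     (would parse as two path segments)
```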

Using an HTTP API allows for a common well understood interface and future extensibility.
Instead of re-using the current RabbitMQ HTTP API, this commit uses a
new HTTP API (let's call it v2) which could be used as a future API for
plain HTTP clients.

 ### HTTP API v1

The current HTTP API (let's call it v1) is **not** used since v1
comes with a couple of weaknesses:

1. Deep level of nesting becomes confusing and difficult to manage.

Examples of deep nesting in v1:
```
/api/bindings/vhost/e/source/e/destination/props
/api/bindings/vhost/e/exchange/q/queue/props
```

2. Redundant endpoints returning the same resources

v1 has 9 endpoints to list binding(s):
```
/api/exchanges/vhost/name/bindings/source
/api/exchanges/vhost/name/bindings/destination
/api/queues/vhost/name/bindings
/api/bindings
/api/bindings/vhost
/api/bindings/vhost/e/exchange/q/queue
/api/bindings/vhost/e/exchange/q/queue/props
/api/bindings/vhost/e/source/e/destination
/api/bindings/vhost/e/source/e/destination/props
```

3. Verbs in path names
Path names should be nouns instead.
v1 contains verbs:
```
/api/queues/vhost/name/get
/api/exchanges/vhost/name/publish
```

 ### AMQP Management extension

Only a few aspects of the AMQP Management extension are used.

The central idea of the AMQP management spec is **dynamic discovery** such that broker independent AMQP 1.0
clients can discover objects, types, operations, and HTTP endpoints of specific brokers.
In fact, clients are only conformant if:
> All request addresses are dynamically discovered starting from the discovery document.
> A requesting container MUST NOT use fixed assumptions about the addressing structure of the management API.

While this is a nice and powerful idea, no AMQP 1.0 client and no AMQP 1.0 server implement the
latest AMQP 1.0 management spec from 2019, partly presumably due to its complexity.
Therefore, the idea of such dynamic discovery has failed to be implemented in practice.

The AMQP management spec mandates that the management endpoint returns a discovery document containing
broker specific collections, types, configuration, and operations including their endpoints.

The API endpoints of the AMQP management spec are therefore all designed around dynamic discovery.

For example, to create either a queue or an exchange, the client has to
```
POST /$management/entities
```
which shows that the entities collection acts as a generic factory, see section 2.2.
The server will then create the resource and reply with a location header containing a URI pointing to the resource.
For RabbitMQ, we don’t need such a generic factory to create queues or exchanges.

To list bindings for a queue Q1, the spec suggests
```
GET /$management/Queues/Q1/$management/entities
```
which again shows the generic entities endpoint as well as a `$management` endpoint under Q1 to
allow a queue to return a discovery document.
For RabbitMQ, we don’t need such generic endpoints and discovery documents.

Given we aim for our own thin RabbitMQ AMQP 1.0 client wrapper libraries which expose
the RabbitMQ model to the developer, we can directly use fixed HTTP endpoint assumptions
in our RabbitMQ specific libraries.

This is by far simpler than using the dynamic endpoints of the management spec.
Simplicity leads to higher adoption and enables more developers to write RabbitMQ AMQP 1.0 client
library wrappers.

The AMQP Management extension also suffers from a deep level of nesting in paths.
Examples:
```
/$management/Queues/Q1/$management/entities
/$management/Queues/Q1/Bindings/Binding1
```
as well as verbs in path names: Section 7.1.4 suggests using verbs in path names,
for example “purge”, due to the dynamic operations discovery document.

 ### HTTP API v2

This commit introduces a new HTTP API v2 following best practices.
It could serve as a future API for plain HTTP clients.

This commit and RabbitMQ 4.0 will only implement a minimal set of
HTTP API v2 endpoints and only for HTTP over AMQP.
In other words, the existing HTTP API v1 Cowboy handlers will continue to be
used for all plain HTTP requests in RabbitMQ 4.0 and will remain untouched for RabbitMQ 4.0.
Over time, after 4.0 shipped, we could ship a pure HTTP API implementation for HTTP API v2.
Hence, the new HTTP API v2 endpoints for HTTP over AMQP should be designed such that they
can be re-used in the future for a pure HTTP implementation.

The minimal set of endpoints for RabbitMQ 4.0 is:

```
GET / PUT / DELETE
/vhosts/:vhost/queues/:queue
```
read, create, delete a queue

```
DELETE
/vhosts/:vhost/queues/:queue/messages
```
purges a queue

```
GET / DELETE
/vhosts/:vhost/bindings/:binding
```
read, delete bindings
where `:binding` is a binding ID of the following path segment:
```
src=e1;dstq=q2;key=my-key;args=
```
The binding arguments segment `args` has an empty value by default, i.e. there are no binding arguments.
If the binding includes binding arguments, `args` will be an Erlang portable term hash
provided by the server, similar to what's provided in HTTP API v1 today.
Alternatively, we could use an arguments scheme of:
```
args=k1,utf8,v1&k2,uint,3
```
However, such a scheme leads to long URIs when there are many binding arguments.
Note that it’s perfectly fine for URI producing applications to include URI
reserved characters `=` / `;` / `,` / `$` in a path segment.

To create a binding, the client therefore needs to POST to a bindings factory URI:
```
POST
/vhosts/:vhost/bindings
```

To list all bindings between a source exchange e1 and destination exchange e2 with binding key k1:
```
GET
/vhosts/:vhost/bindings?src=e1&dste=e2&key=k1
```

This endpoint will be called by the RabbitMQ AMQP 1.0 client library to unbind a
binding with non-empty binding arguments to get the binding ID before invoking a
```
DELETE
/vhosts/:vhost/bindings/:binding
```

In future, after RabbitMQ 4.0 shipped, new API endpoints could be added.
The following is up for discussion and is only meant to show the clean and simple design of HTTP API v2.

Bindings endpoint can be queried as follows:

to list all bindings for a given source exchange e1:
```
GET
/vhosts/:vhost/bindings?src=e1
```

to list all bindings for a given destination queue q1:
```
GET
/vhosts/:vhost/bindings?dstq=q1
```

to list all bindings between a source exchange e1 and destination queue q1:
```
GET
/vhosts/:vhost/bindings?src=e1&dstq=q1
```

multiple bindings between source exchange e1 and destination queue q1 could be deleted at once as follows:
```
DELETE /vhosts/:vhost/bindings?src=e1&dstq=q1
```

GET could be supported globally across all vhosts:
```
/exchanges
/queues
/bindings
```

Publish a message:
```
POST
/vhosts/:vhost/queues/:queue/messages
```

Consume or peek a message (depending on query parameters):
```
GET
/vhosts/:vhost/queues/:queue/messages
```

Note that the AMQP 1.0 client omits the `/vhost/:vhost` path prefix.
Since an AMQP connection belongs to a single vhost, there is no need to
additionally include the vhost in every HTTP request.

Pros of HTTP API v2:

1. Low level of nesting

Queues, exchanges, bindings are top level entities directly under vhosts.
Although the HTTP API doesn’t have to reflect how resources are stored in the database,
v2 does nicely reflect the Khepri tree structure.

2. Nouns instead of verbs
HTTP API v2 is very simple to read and understand as shown by
```
POST    /vhosts/:vhost/queues/:queue/messages	to post messages, i.e. publish to a queue.
GET     /vhosts/:vhost/queues/:queue/messages	to get messages, i.e. consume or peek from a queue.
DELETE  /vhosts/:vhost/queues/:queue/messages	to delete messages, i.e. purge a queue.
```

A separate new HTTP API v2 allows us to ship only handlers for HTTP over AMQP for RabbitMQ 4.0
and therefore move faster while still keeping the option on the table to re-use the new v2 API
for pure HTTP in the future.
In contrast, re-using the HTTP API v1 for HTTP over AMQP is possible, but dirty because separate handlers
(HTTP over AMQP and pure HTTP) replying differently will be needed for the same v1 endpoints.
ansd added a commit that referenced this pull request Mar 28, 2024
 ## What?

* Allow AMQP 1.0 clients to dynamically create and delete RabbitMQ
  topologies (exchanges, queues, bindings).
* Provide an Erlang AMQP 1.0 client that manages topologies.

 ## Why?

Today, RabbitMQ topologies can be created via:
* [Management HTTP API](https://www.rabbitmq.com/docs/management#http-api)
  (including Management UI and
  [messaging-topology-operator](https://github.com/rabbitmq/messaging-topology-operator))
* [Definition Import](https://www.rabbitmq.com/docs/definitions#import)
* AMQP 0.9.1 clients

Up to RabbitMQ 3.13 the RabbitMQ AMQP 1.0 plugin auto creates queues
and bindings depending on the terminus [address
format](https://github.com/rabbitmq/rabbitmq-server/tree/v3.13.x/deps/rabbitmq_amqp1_0#routing-and-addressing).

Such implicit creation of topologies is limiting and obscure.
For some address formats, queues will be created, but not deleted.

Some of RabbitMQ's success is due to its flexible routing topologies
that AMQP 0.9.1 clients can create and delete dynamically.

This commit allows dynamic management of topologies for AMQP 1.0 clients.
This commit builds on top of Native AMQP 1.0 (PR #9022) and will be
available in RabbitMQ 4.0.

 ## How?

This commits adds the following management operations for AMQP 1.0 clients:
* declare queue
* delete queue
* purge queue
* bind queue to exchange
* unbind queue from exchange
* declare exchange
* delete exchange
* bind exchange to exchange
* unbind exchange from exchange

Hence, at least the AMQP 0.9.1 management operations are supported for
AMQP 1.0 clients.

In addition the operation
* get queue

is provided which - similar to `declare queue` - returns queue
information including the current leader and replicas.
This allows clients to publish or consume locally on the node that hosts
the queue.

Compared to AMQP 0.9.1 whose commands and command fields are fixed, the
new AMQP Management API is extensible: New operations and new fields can
easily be added in the future.

There are different design options how management operations could be
supported for AMQP 1.0 clients:
1. Use a special exchange type as done in https://github.com/rabbitmq/rabbitmq-management-exchange
  This has the advantage that any protocol client (e.g. also STOMP clients) could
  dynamically manage topologies. However, a special exchange type is the wrong abstraction.
2. Clients could send "special" messages with special headers that the broker interprets.

This commit decided for a variation of the 2nd option using a more
standardized way by re-using a subest of the following latest AMQP 1.0 extension
specifications:
* [AMQP Request-Response Messaging with Link Pairing Version 1.0 - Committee Specification 01](https://docs.oasis-open.org/amqp/linkpair/v1.0/cs01/linkpair-v1.0-cs01.html) (February 2021)
* [HTTP Semantics and Content over AMQP Version 1.0 - Working Draft 06](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65571) (July 2019)
* [AMQP Management Version 1.0 - Working Draft 16](https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=65575) (July 2019)

An important goal is to keep the interaction between AMQP 1.0 client and RabbitMQ
simple to increase usage, development and adoptability of future RabbitMQ AMQP 1.0
client library wrappers.

The AMQP 1.0 client has to create a link pair to the special `/management` node.
This allows the client to send and receive from the management node.
Similar to AMQP 0.9.1, there is no need for a reply queue since the reply
will be sent directly to the client.

Requests and responses are modelled via HTTP, but sent via AMQP using
the `HTTP Semantics and Content over AMQP` extension (henceforth `HTTP
over AMQP` extension).

This commit tries to follow the `HTTP over AMQP` extension as much as
possible but deviates where this draft spec doesn't make sense.

The projected mode §4.1 is used as opposed to tunneled mode §4.2.
A named relay `/management` is used (§6.3) where the message field `to` is the URL.

Deviations are
* §3.1 mandates that URIs are not encoded in an AMQP message.
  However, we percent encode URIs in the AMQP message. Otherwise there
  is for example no way to distinguish a `/` in a queue name from the
  URI path separator `/`.
* §4.1.4 mandates a data section. This commit uses an amqp-value section
  as it's a better fit given that the content is AMQP encoded data.

Using an HTTP API allows for a common well understood interface and future extensibility.
Instead of re-using the current RabbitMQ HTTP API, this commit uses a
new HTTP API (let's call it v2) which could be used as a future API for
plain HTTP clients.

 ### HTTP API v1

The current HTTP API (let's call it v1) is **not** used since v1
comes with a couple of weaknesses:

1. Deep level of nesting becomes confusing and difficult to manage.

Examples of deep nesting in v1:
```
/api/bindings/vhost/e/source/e/destination/props
/api/bindings/vhost/e/exchange/q/queue/props
```

2. Redundant endpoints returning the same resources

v1 has 9 endpoints to list binding(s):
```
/api/exchanges/vhost/name/bindings/source
/api/exchanges/vhost/name/bindings/destination
/api/queues/vhost/name/bindings
/api/bindings
/api/bindings/vhost
/api/bindings/vhost/e/exchange/q/queue
/api/bindings/vhost/e/exchange/q/queue/props
/api/bindings/vhost/e/source/e/destination
/api/bindings/vhost/e/source/e/destination/props
```

3. Verbs in path names
Path names should be nouns instead.
v1 contains verbs:
```
/api/queues/vhost/name/get
/api/exchanges/vhost/name/publish
```

 ### AMQP Management extension

Only few aspects of the AMQP Management extension are used.

The central idea of the AMQP management spec is **dynamic discovery** such that broker independent AMQP 1.0
clients can discover objects, types, operations, and HTTP endpoints of specific brokers.
In fact, clients are only conformant if:
> All request addresses are dynamically discovered starting from the discovery document.
> A requesting container MUST NOT use fixed assumptions about the addressing structure of the management API.

While this is a nice and powerful idea, no AMQP 1.0 client and no AMQP 1.0 server implement the
latest AMQP 1.0 management spec from 2019, partly presumably due to its complexity.
Therefore, the idea of such dynamic discovery has failed to be implemented in practice.

The AMQP management spec mandates that the management endpoint returns a discovery document containing
broker specific collections, types, configuration, and operations including their endpoints.

The API endpoints of the AMQP management spec are therefore all designed around dynamic discovery.

For example, to create either a queue or an exchange, the client has to
```
POST /$management/entities
```
which shows that the entities collection acts as a generic factory, see section 2.2.
The server will then create the resource and reply with a location header containing a URI pointing to the resource.
For RabbitMQ, we don’t need such a generic factory to create queues or exchanges.

To list bindings for a queue Q1, the spec suggests
```
GET /$management/Queues/Q1/$management/entities
```
which again shows the generic entities endpoint as well as a `$management` endpoint under Q1 to
allow a queue to return a discovery document.
For RabbitMQ, we don’t need such generic endpoints and discovery documents.

Given we aim for our own thin RabbitMQ AMQP 1.0 client wrapper libraries which expose
the RabbitMQ model to the developer, we can directly use fixed HTTP endpoint assumptions
in our RabbitMQ specific libraries.

This is by far simpler than using the dynamic endpoints of the management spec.
Simplicity leads to higher adoption and enables more developers to write RabbitMQ AMQP 1.0 client
library wrappers.

The AMQP Management extension also suffers from deep level of nesting in paths
Examples:
```
/$management/Queues/Q1/$management/entities
/$management/Queues/Q1/Bindings/Binding1
```
as well as verbs in path names: Section 7.1.4 suggests using verbs in path names,
for example “purge”, due to the dynamic operations discovery document.

 ### HTTP API v2

This commit introduces a new HTTP API v2 following best practices.
It could serve as a future API for plain HTTP clients.

This commit and RabbitMQ 4.0 will only implement a minimal set of
HTTP API v2 endpoints and only for HTTP over AMQP.
In other words, the existing HTTP API v1 Cowboy handlers will continue to be
used for all plain HTTP requests in RabbitMQ 4.0 and will remain untouched for RabbitMQ 4.0.
Over time, after 4.0 shipped, we could ship a pure HTTP API implementation for HTTP API v2.
Hence, the new HTTP API v2 endpoints for HTTP over AMQP should be designed such that they
can be re-used in the future for a pure HTTP implementation.

The minimal set of endpoints for RabbitMQ 4.0 are:

``
GET / PUT / DELETE
/vhosts/:vhost/queues/:queue
```
read, create, delete a queue

```
DELETE
/vhosts/:vhost/queues/:queue/messages
```
purges a queue

```
GET / DELETE
/vhosts/:vhost/bindings/:binding
```
read, delete bindings
where `:binding` is a binding ID of the following path segment:
```
src=e1;dstq=q2;key=my-key;args=
```
Binding arguments `args` has an empty value by default, i.e. there are no binding arguments.
If the binding includes binding arguments, `args` will be an Erlang portable term hash
provided by the server similar to what’s provided in HTTP API v1 today.
Alternatively, we could use an arguments scheme of:
```
args=k1,utf8,v1&k2,uint,3
```
However, such a scheme leads to long URIs when there are many binding arguments.
Note that it’s perfectly fine for URI-producing applications to include the URI-reserved
characters `=` / `;` / `,` / `$` in a path segment.

To create a binding, the client therefore needs to POST to a bindings factory URI:
```
POST
/vhosts/:vhost/bindings
```

To list all bindings between a source exchange e1 and destination exchange e2 with binding key k1:
```
GET
/vhosts/:vhost/bindings?src=e1&dste=e2&key=k1
```

The RabbitMQ AMQP 1.0 client library will call this endpoint when unbinding a binding
with non-empty binding arguments: it first fetches the binding ID and then invokes
```
DELETE
/vhosts/:vhost/bindings/:binding
```
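As an illustration, the two-step unbind then looks like this on the wire (vhost `v1` and the
`args` hash value are made up for this example):
```
GET    /vhosts/v1/bindings?src=e1&dstq=q2&key=my-key
       (returns the matching bindings, each including its binding ID,
        e.g. src=e1;dstq=q2;key=my-key;args=1GF7B)

DELETE /vhosts/v1/bindings/src=e1;dstq=q2;key=my-key;args=1GF7B
```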

In the future, after RabbitMQ 4.0 has shipped, new API endpoints could be added.
The following is up for discussion and is only meant to show the clean and simple design of HTTP API v2.

The bindings endpoint can be queried as follows:

To list all bindings for a given source exchange e1:
```
GET
/vhosts/:vhost/bindings?src=e1
```

To list all bindings for a given destination queue q1:
```
GET
/vhosts/:vhost/bindings?dstq=q1
```

To list all bindings between a source exchange e1 and destination queue q1:
```
GET
/vhosts/:vhost/bindings?src=e1&dstq=q1
```

To delete multiple bindings between source exchange e1 and destination queue q1 at once:
```
DELETE /vhosts/:vhost/bindings?src=e1&dstq=q1
```

GET could be supported globally across all vhosts:
```
/exchanges
/queues
/bindings
```

Publish a message:
```
POST
/vhosts/:vhost/queues/:queue/messages
```

Consume or peek a message (depending on query parameters):
```
GET
/vhosts/:vhost/queues/:queue/messages
```

Note that the AMQP 1.0 client omits the `/vhosts/:vhost` path prefix.
Since an AMQP connection belongs to a single vhost, there is no need to
additionally include the vhost in every HTTP request.
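For illustration, such a request can be sent over AMQP with the `amqp10_client` Erlang
application roughly as follows. This is a sketch that assumes a sender link towards the
management node is already attached and that response correlation is handled elsewhere;
the module name, delivery tag, and message ID are arbitrary:
```erlang
-module(http_over_amqp_sketch).
-export([purge_q1/1]).

%% Purge queue q1 via HTTP API v2 over AMQP: the HTTP method travels in
%% the message's subject field, and the v2 path (without the
%% /vhosts/:vhost prefix) in its to field. Sender is an already
%% attached amqp10_client sender link.
purge_q1(Sender) ->
    Req0 = amqp10_msg:new(<<"dtag-1">>, <<>>, true),
    Req = amqp10_msg:set_properties(
            #{subject => <<"DELETE">>,
              to => <<"/queues/q1/messages">>,
              message_id => <<"request-1">>},
            Req0),
    ok = amqp10_client:send_msg(Sender, Req).
```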

Pros of HTTP API v2:

1. Low level of nesting

Queues, exchanges, and bindings are top-level entities directly under vhosts.
Although the HTTP API doesn’t have to reflect how resources are stored in the database,
v2 does nicely reflect the Khepri tree structure.

2. Nouns instead of verbs

HTTP API v2 is very simple to read and understand, as shown by
```
POST    /vhosts/:vhost/queues/:queue/messages	to post messages, i.e. publish to a queue.
GET     /vhosts/:vhost/queues/:queue/messages	to get messages, i.e. consume or peek from a queue.
DELETE  /vhosts/:vhost/queues/:queue/messages	to delete messages, i.e. purge a queue.
```

A separate new HTTP API v2 allows us to ship only the handlers for HTTP over AMQP in RabbitMQ 4.0
and therefore move faster, while keeping the option on the table to re-use the new v2 API
for pure HTTP in the future.
In contrast, re-using HTTP API v1 for HTTP over AMQP is possible, but messy, because the same
v1 endpoints would need separate handlers (HTTP over AMQP and pure HTTP) that reply differently.
ansd added a commit that referenced this pull request Jun 25, 2024
## What?

This commit fixes issues that were present only on `main`
branch and were introduced by #9022.

1. Classic queues (specifically `rabbit_queue_consumers:subtract_acks/3`)
   expect message IDs to be (n)acked in the order in which they were delivered
   to the channel / session proc.
   Hence, the `lists:usort(MsgIds0)` in `rabbit_classic_queue:settle/5`
   was wrong, causing not all messages to be acked and introducing a
   regression that also affected AMQP 0.9.1.
2. The order in which the session proc requeues or rejects multiple
   message IDs at once is important. For example, if the client sends a
   DISPOSITION with first=3 and last=5, the message IDs corresponding to
   delivery IDs 3,4,5 must be requeued or rejected in exactly that
   order.
   Quorum queues, for example, use this order of message IDs in
   https://github.com/rabbitmq/rabbitmq-server/blob/34d3f943742bdcf7d34859edff8d45f35e4007d4/deps/rabbit/src/rabbit_fifo.erl#L226-L234
   to dead-letter in that order.

## How?

The session proc will settle (internal) message IDs to queues in ascending
(AMQP) delivery ID order, i.e. in the order in which messages were sent to the
client and in the order in which they were settled by the client.

This commit chooses to keep the session's `outgoing_unsettled_map` map
data structure.

An alternative would have been to use a queue or lqueue for the
outgoing_unsettled_map as done in
* https://github.com/rabbitmq/rabbitmq-server/blob/34d3f943742bdcf7d34859edff8d45f35e4007d4/deps/rabbit/src/rabbit_channel.erl#L135
* https://github.com/rabbitmq/rabbitmq-server/blob/34d3f943742bdcf7d34859edff8d45f35e4007d4/deps/rabbit/src/rabbit_queue_consumers.erl#L43

Whether a queue (as done by `rabbit_channel`) or a map (as done by
`rabbit_amqp_session`) performs better depends on the pattern in which
clients ack messages.

A queue will likely perform well enough because usually the oldest
delivered messages are acked first.
However, given that there can be many different consumers on an AMQP
0.9.1 channel or AMQP 1.0 session, this commit favours a map because
it will likely generate less garbage and is very efficient when, for
example, a single new message (or a few new messages) gets acked while
many (older) messages are still checked out by the session (by
possibly different AMQP 1.0 receivers).
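A minimal sketch of the resulting settle logic, assuming the map is keyed by AMQP delivery ID
and holds the queue-internal message IDs (illustrative names, not the actual
`rabbit_amqp_session` internals): given a DISPOSITION range first..last, collect the message
IDs in ascending delivery ID order:
```erlang
-module(settle_sketch).
-export([settle_range/3]).

%% Returns the message IDs for delivery IDs First..Last in ascending
%% delivery ID order, plus the map with those entries removed.
%% Example: settle_range(3, 5, #{3 => m1, 4 => m2, 5 => m3}) returns
%% {[m1, m2, m3], #{}}.
settle_range(First, Last, Unsettled0) ->
    lists:foldl(
      fun(DeliveryId, {MsgIds, Unsettled} = Acc) ->
              case maps:take(DeliveryId, Unsettled) of
                  {MsgId, Unsettled1} ->
                      %% Iterating Last..First and prepending yields an
                      %% ascending result list.
                      {[MsgId | MsgIds], Unsettled1};
                  error ->
                      Acc
              end
      end,
      {[], Unsettled0},
      lists:seq(Last, First, -1)).
```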