RabbitMQ Stream tutorial - "Hello World!"
Introduction
Prerequisites
This tutorial assumes RabbitMQ is installed and running on localhost with the stream plugin enabled. The standard stream port is 5552. If you use a different host, port, or credentials, the connection settings will require adjusting.
Using Docker
If you don't have RabbitMQ installed, you can run it in a Docker container:
docker run -it --rm --name rabbitmq -p 5552:5552 -p 15672:15672 -p 5672:5672 \
-e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS='-rabbitmq_stream advertised_host localhost' \
rabbitmq:3.13
Wait for the server to start, then enable the stream and stream management plugins:
docker exec rabbitmq rabbitmq-plugins enable rabbitmq_stream rabbitmq_stream_management
Where to get help
If you're having trouble going through this tutorial, you can contact us through the mailing list or the Discord community server.
RabbitMQ Streams was introduced in RabbitMQ 3.9. More information is available here.
"Hello World"
(using the Go Stream Client)
In this part of the tutorial we'll write two programs in Go; a producer that sends a single message, and a consumer that receives messages and prints them out. We'll gloss over some of the detail in the Go client API, concentrating on this very simple thing just to get started. It's the "Hello World" of RabbitMQ Streams.
The Go stream client library
RabbitMQ speaks multiple protocols. This tutorial uses the RabbitMQ Stream protocol, a dedicated protocol for RabbitMQ streams. There are a number of clients for RabbitMQ in many different languages; see the stream client libraries for each language. We'll use the Go stream client provided by RabbitMQ.
RabbitMQ Go client 1.4 and later versions are distributed via go get.
This tutorial assumes you are using PowerShell on Windows. On macOS and Linux nearly any shell will work.
Setup
First, let's verify that you have the Go toolchain in PATH:
go help
Running that command should produce a help message.
An executable version of this tutorial can be found in the RabbitMQ tutorials repository.
Now let's create the project:
mkdir go-stream
cd go-stream
go mod init github.com/rabbitmq/rabbitmq-tutorials
go get -u github.com/rabbitmq/rabbitmq-stream-go-client
Now that we have the Go project set up, we can write some code.
Sending
We'll call our message producer (sender) send.go and our message consumer (receiver) receive.go. The producer will connect to RabbitMQ, send a single message, then exit.

In send.go, we need to import some packages:
import (
"bufio"
"fmt"
"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
"log"
"os"
)
then we can create a connection to the server:
env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
if err != nil {
    log.Fatalf("Failed to create environment: %v", err)
}
The entry point of the stream Go client is the Environment. It is used for configuration of RabbitMQ stream publishers, stream consumers, and streams themselves. It abstracts the socket connection, and takes care of protocol version negotiation, authentication, and so on for us.

This tutorial assumes that the stream publisher and consumer connect to a RabbitMQ node running locally, that is, on localhost. To connect to a node on a different machine, simply specify the target hostname or IP address in the EnvironmentOptions.
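For example, here is a minimal sketch of pointing the Environment at a remote node; the host name and credentials below are placeholders, not values used in this tutorial:

env, err := stream.NewEnvironment(
    stream.NewEnvironmentOptions().
        SetHost("rabbitmq.example.com"). // placeholder host
        SetPort(5552).                   // default stream port
        SetUser("guest").
        SetPassword("guest"))
if err != nil {
    log.Fatalf("Failed to create environment: %v", err)
}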
Next let's create a producer.
The producer will also declare a stream it will publish messages to and then publish a message:
streamName := "hello-go-stream"
env.DeclareStream(streamName,
&stream.StreamOptions{
MaxLengthBytes: stream.ByteCapacity{}.GB(2),
},
)
producer, err := env.NewProducer(streamName, stream.NewProducerOptions())
if err != nil {
log.Fatalf("Failed to create producer: %v", err)
}
err = producer.Send(amqp.NewMessage([]byte("Hello world")))
if err != nil {
log.Fatalf("Failed to send message: %v", err)
}
The stream declaration operation is idempotent: the stream will only be created if it doesn't exist already.

A stream is an append-only log abstraction that allows for repeated consumption of messages until they expire. It is a good practice to always define the retention policy. In the example above, the stream is limited to 2 GiB in size.
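Size is not the only retention criterion: the StreamOptions struct also has a maximum-age setting. Here is a sketch combining both limits; treat the 24-hour value as illustrative, and note it requires importing the time package:

// Illustrative only: keep messages for at most 24 hours
// and cap the stream at 2 GiB, whichever limit applies first.
err = env.DeclareStream(streamName,
    &stream.StreamOptions{
        MaxAge:         24 * time.Hour,
        MaxLengthBytes: stream.ByteCapacity{}.GB(2),
    },
)
if err != nil {
    log.Fatalf("Failed to declare stream: %v", err)
}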
The message content is a byte array. Applications can encode the data they need to transfer using any appropriate format such as JSON, MessagePack, and so on.
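For instance, a minimal sketch of publishing a JSON-encoded payload could look like this; the Event type is purely illustrative and requires importing encoding/json:

// Illustrative payload type, not part of the tutorial code.
type Event struct {
    Name string `json:"name"`
}

body, err := json.Marshal(Event{Name: "hello"})
if err != nil {
    log.Fatalf("Failed to encode payload: %v", err)
}
err = producer.Send(amqp.NewMessage(body))
if err != nil {
    log.Fatalf("Failed to send message: %v", err)
}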
When the code above finishes running, the producer connection and stream-system connection will be closed. That's it for our producer.
Each time the producer is run, it will send a single message to the server and the message will be appended to the stream.
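If you prefer to keep the program running until a key is pressed and then close the resources explicitly, a sketch could look like this; the prompt text is illustrative, while Close exists on both the producer and the environment:

// Wait for the user to press Enter, then shut down cleanly.
fmt.Println("Press enter to close the producer")
bufio.NewReader(os.Stdin).ReadString('\n') // uses the bufio and os imports above
if err := producer.Close(); err != nil {
    log.Fatalf("Failed to close producer: %v", err)
}
if err := env.Close(); err != nil {
    log.Fatalf("Failed to close environment: %v", err)
}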
The complete send.go file can be found on GitHub.
Sending doesn't work!
If this is your first time using RabbitMQ and you don't see the "Sent" message then you may be left scratching your head wondering what could be wrong. Maybe the broker was started without enough free disk space (by default it needs at least 50 MB free) and is therefore refusing to accept messages. Check the broker log file to see if there is a resource alarm logged and reduce the free disk space threshold if necessary. The Configuration guide will show you how to set disk_free_limit.

Another reason may be that the program exits before the message makes it to the broker. Sending is asynchronous in some client libraries: the function returns immediately but the message is enqueued in the IO layer before going over the wire. The sending program asks the user to press a key to finish the process: the message has plenty of time to reach the broker. The stream protocol provides a confirm mechanism to make sure the broker receives outbound messages, but this tutorial does not use this mechanism for simplicity's sake.
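As a rough sketch of how that confirm mechanism could be used with the Go client, the producer exposes a NotifyPublishConfirmation channel; treat the details below as illustrative rather than part of this tutorial:

// Register for publish confirmations before sending.
confirmations := producer.NotifyPublishConfirmation()

err = producer.Send(amqp.NewMessage([]byte("Hello world")))
if err != nil {
    log.Fatalf("Failed to send message: %v", err)
}

// Block until the broker reports the status of the outstanding messages.
for _, status := range <-confirmations {
    if status.IsConfirmed() {
        fmt.Println("Message confirmed by the broker")
    } else {
        fmt.Println("Message was not confirmed")
    }
}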
Receiving
The other part of this tutorial, the consumer, will connect to a RabbitMQ node and wait for messages to be pushed to it. Unlike the producer, which in this tutorial publishes a single message and stops, the consumer runs continuously, consumes the messages RabbitMQ pushes to it, and prints the received payloads out.
Similarly to send.go, receive.go will need to import some packages:
import (
"bufio"
"fmt"
"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
"log"
"os"
)
When it comes to the initial setup, the consumer part is very similar to the producer one; we use the default connection settings and declare the stream from which the consumer will consume.
env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
if err != nil {
log.Fatalf("Failed to create environment: %v", err)
}
streamName := "hello-go-stream"
env.DeclareStream(streamName,
&stream.StreamOptions{
MaxLengthBytes: stream.ByteCapacity{}.GB(2),
},
)
Note that the consumer part also declares the stream. This is to allow either part to be started first, be it the producer or the consumer.
We use the Consumer struct to instantiate the consumer and the ConsumerOptions struct to configure it. We provide a messagesHandler callback to process delivered messages. SetOffset defines the starting point of the consumer; in this case, the consumer starts from the very first message available in the stream.
messagesHandler := func(consumerContext stream.ConsumerContext, message *amqp.Message) {
    fmt.Printf("Stream: %s - Received message: %s\n",
        consumerContext.Consumer.GetStreamName(), message.Data)
}
consumer, err := env.NewConsumer(streamName, messagesHandler,
stream.NewConsumerOptions().SetOffset(stream.OffsetSpecification{}.First()))
if err != nil {
log.Fatalf("Failed to create consumer: %v", err)
}
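Other starting points are available on OffsetSpecification. For example, here is a sketch of a consumer that only receives messages published after it is created; the surrounding code stays the same, only the offset changes:

// Alternative: start from new messages instead of replaying the stream.
consumer, err := env.NewConsumer(streamName, messagesHandler,
    stream.NewConsumerOptions().SetOffset(stream.OffsetSpecification{}.Next()))
if err != nil {
    log.Fatalf("Failed to create consumer: %v", err)
}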
The complete receive.go file can be found on GitHub.
Putting It All Together
In order to run both examples, open two terminal (shell) tabs.
We need to pull dependencies first:
go get -u
Both parts of this tutorial can be run in any order, as they both declare the stream. Let's run the consumer first so that when the producer is started, the consumer will print its message:
go run receive.go
Then run the producer:
go run send.go
The consumer will print the message it gets from the publisher via RabbitMQ. The consumer will keep running, waiting for new deliveries. Try re-running the publisher several times to observe that.
Streams are different from queues in that they are append-only logs of messages that can be consumed repeatedly. When multiple consumers consume from a stream, they will start from the first available message.