In this quickstart, you'll create a publisher microservice and three subscriber microservices to demonstrate how Dapr enables a publish-subscribe pattern. The publisher generates messages on a specific topic, while subscribers listen for messages on specific topics. See Why Pub-Sub to understand when this pattern might be a good choice for your software architecture.
Visit this link for more information about Dapr and Pub-Sub.
This quickstart includes one publisher:
- React front-end message generator
And three subscribers:
- Node.js subscriber
- Python subscriber
- C# subscriber
Dapr uses pluggable message buses to enable pub-sub, and delivers messages to subscribers in a CloudEvents-compliant message envelope. In this case you'll use Redis Streams (available in Redis version 5 and above). The following architecture diagram illustrates how the components interconnect locally:
Dapr allows you to deploy the same microservices from your local machines to the cloud. Correspondingly, this quickstart has instructions for deploying this project locally or in Kubernetes.
- Dapr CLI with Dapr initialized
- Node.js version 14 or greater, Python 3.4 or greater, and/or ASP.NET Core 6: you can run this quickstart with one, several, or all of the microservices
In order to run the pub/sub quickstart locally, each microservice needs to run with Dapr. Start by running the message subscribers.
Note: These instructions run a Node subscriber, a Python subscriber, and a C# subscriber, but if you don't have all of these runtimes installed, feel free to run only the ones you do.
Clone this quickstarts repository to your local machine:
git clone [-b <dapr_version_tag>] https://github.com/dapr/quickstarts.git
Note: See https://github.com/dapr/quickstarts#supported-dapr-runtime-version for supported tags. Use
git clone https://github.com/dapr/quickstarts.git
when using the edge version of the Dapr runtime.
- Navigate to the Node subscriber directory in your CLI:
cd node-subscriber
- Install dependencies:
npm install
- Run the Node subscriber app with Dapr:
dapr run --app-id node-subscriber --app-port 3000 node app.js
Here, `app-id` can be any unique identifier for the microservice, and `app-port` is the port that the Node application is running on. Finally, the command to run the app, `node app.js`, is passed last.
- Open a new CLI window and navigate to the Python subscriber directory:
cd python-subscriber
- Install dependencies:
pip3 install -r requirements.txt
or
python -m pip install -r requirements.txt
- Run the Python subscriber app with Dapr:
dapr run --app-id python-subscriber --app-port 5001 python3 app.py
- Open a new CLI window and navigate to the C# subscriber directory:
cd csharp-subscriber
- Build the ASP.NET Core app:
dotnet build
- Run the C# subscriber app with Dapr:
dapr run --app-id csharp-subscriber --app-port 5009 dotnet run csharp-subscriber.csproj
Now, run the React front end with Dapr. The front end will publish different kinds of messages that subscribers will pick up.
- Open a new CLI window and navigate to the react-form directory:
cd react-form
- Run the React front end app with Dapr:
npm run buildclient
npm install
dapr run --app-id react-form --app-port 8080 npm run start
This may take a minute, as it downloads dependencies and creates an optimized production build. You'll know that it's done when you see == APP == Listening on port 8080!
and several Dapr logs.
- Open the browser and navigate to "http://localhost:8080/". You should see a form with a dropdown for message type and message text:
- Pick a topic, enter some text, and fire off a message! Observe the logs coming through your respective Dapr instances. Note that the Node.js subscriber receives messages of types "A" and "B", the Python subscriber receives messages of types "A" and "C", and the C# subscriber receives messages of all three types. The logs show up in the console window where you ran each one:
== APP == Listening on port 8080!
The Dapr CLI provides a mechanism to publish messages for testing purposes.
- Use Dapr CLI to publish a message:
dapr publish --publish-app-id react-form --pubsub pubsub --topic A --data-file message_a.json
- Optional: Try publishing a message of topic B. You'll notice that the Node and C# apps receive this message, but not the Python app. Similarly, a message of topic 'C' is received by the Python and C# apps, but not the Node app.
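The topic fan-out described above can be summarized in a small lookup. Here is a sketch in Python, using the app IDs from this quickstart (the `receivers` helper is illustrative, not part of the quickstart code):

```python
# Which subscribers receive each topic, per the subscriptions
# each app declares in this quickstart.
SUBSCRIPTIONS = {
    "node-subscriber": {"A", "B"},
    "python-subscriber": {"A", "C"},
    "csharp-subscriber": {"A", "B", "C"},
}

def receivers(topic):
    """Return the set of app IDs that receive a message on `topic`."""
    return {app for app, topics in SUBSCRIPTIONS.items() if topic in topics}
```

For example, `receivers("B")` yields the Node and C# subscribers only, matching the behavior you observe in the logs.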
Note: If you are running in an environment without easy access to a web browser, the following curl commands will simulate a browser request to the node server.
curl -s http://localhost:8080/publish -H Content-Type:application/json --data @message_b.json
curl -s http://localhost:8080/publish -H Content-Type:application/json --data @message_c.json
- Cleanup
dapr stop --app-id node-subscriber
dapr stop --app-id python-subscriber
dapr stop --app-id csharp-subscriber
dapr stop --app-id react-form
- If you want to deploy this same application to Kubernetes, move on to the next step. Otherwise, skip ahead to the How it Works section to understand the code!
To run the same code in Kubernetes, first set up a Redis store and then deploy the microservices. You'll be using the same microservices, but ultimately the architecture is a bit different:
Dapr uses pluggable message buses to enable pub-sub; in this case you'll use Redis Streams (available in Redis version 5 and above). You'll install Redis into the cluster using Helm, but keep in mind that you could use whichever Redis host you like, as long as the version is 5 or greater.
- Follow these steps to create a Redis store using Helm.
Note: At the time of writing, the version of Redis supported by Azure Cache for Redis is less than 5, so using Azure Cache for Redis will not work here.
- Once your store is created, add the keys to the `redis.yaml` file in the `deploy` directory. Don't worry about applying the `redis.yaml` yet, as it will be covered in the next step.
Note: the `redis.yaml` file provided in this quickstart takes plain text secrets. In a production-grade application, follow secret management instructions to securely manage your secrets.
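For reference, a Dapr pub/sub component for Redis generally looks like the following sketch. This is not the exact file from this quickstart (the host and password placeholders are assumptions); treat the `redis.yaml` in the `deploy` directory as authoritative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: <YOUR_REDIS_HOST>:6379
  - name: redisPassword
    value: <YOUR_REDIS_PASSWORD>   # plain text here; use a secret store in production
```

The `metadata.name` (`pubsub`) is what the apps reference as the pubsub component name.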
Now that the Redis store is set up, you can deploy the assets.
- In your CLI window, navigate to the deploy directory
- To deploy the publisher and three subscriber microservices, as well as the redis configuration you set up in the last step, run:
kubectl apply -f .
Kubernetes deployments are asynchronous. This means you'll need to wait for the deployment to complete before moving on to the next steps. You can do so with the following commands:
kubectl rollout status deploy/node-subscriber
kubectl rollout status deploy/python-subscriber
kubectl rollout status deploy/csharp-subscriber
kubectl rollout status deploy/react-form
- To see each pod being provisioned, run:
kubectl get pods
- To get the external IP exposed by the `react-form` microservice, run:
kubectl get svc -w
This may take a few minutes.
Note: Minikube users cannot see the external IP. Instead, you can use `minikube service [service_name]` to access the load balancer without an external IP.
- Access the web form.
There are several different ways to access a Kubernetes service depending on which platform you are using. Port forwarding is one consistent way to access a service, whether it is hosted locally or on a cloud Kubernetes provider like AKS.
kubectl port-forward service/react-form 8000:80
This will make your service available on http://localhost:8000
Optional: If you are using a public cloud provider, you can use your EXTERNAL-IP address instead of port forwarding. You can find it with:
kubectl get svc react-form
- Create and submit messages of different types.
Open a web browser and navigate to http://localhost:8000; you'll see the same form as in the locally hosted example above.
Note: If you are running in an environment without easy access to a web browser, the following curl commands will simulate a browser request to the node server.
curl -s http://localhost:8000/publish -H Content-Type:application/json --data @message_a.json
curl -s http://localhost:8000/publish -H Content-Type:application/json --data @message_b.json
curl -s http://localhost:8000/publish -H Content-Type:application/json --data @message_c.json
- To see the logs generated from your subscribers:
kubectl logs --selector app=node-subscriber -c node-subscriber
kubectl logs --selector app=python-subscriber -c python-subscriber
kubectl logs --selector app=csharp-subscriber -c csharp-subscriber
- Note that the Node.js subscriber receives messages of types "A" and "B", the Python subscriber receives messages of types "A" and "C", and the C# subscriber receives messages of types "A", "B", and "C".
Once you're done, you can spin down your Kubernetes resources by navigating to the `./deploy` directory and running:
kubectl delete -f .
This will spin down each resource defined by the .yaml files in the `deploy` directory, including the pubsub component.
Now that you've run the quickstart locally and/or in Kubernetes, let's unpack how this all works. The app is broken up into three subscribers and one publisher:
Navigate to the `node-subscriber` directory and open `app.js`, the code for the Node.js subscriber. Here three API endpoints are exposed using `express`. The first is a GET endpoint:
app.get('/dapr/subscribe', (_req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "A",
route: "A"
},
{
pubsubname: "pubsub",
topic: "B",
route: "B"
}
]);
});
This tells Dapr which topics on which pubsub component to subscribe to. When deployed (locally or in Kubernetes), Dapr calls out to the service to determine whether it's subscribing to anything. The other two endpoints are POST endpoints:
app.post('/A', (req, res) => {
console.log("A: ", req.body.data.message);
res.sendStatus(200);
});
app.post('/B', (req, res) => {
console.log("B: ", req.body.data.message);
res.sendStatus(200);
});
These handle messages of each topic type coming through. Note that this simply logs the message. In a more complex application this is where you would include topic-specific handlers.
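To make the "topic-specific handlers" idea concrete, here is a minimal dispatch sketch in Python. The handler names and registry are hypothetical, illustrating one common way to organize per-topic logic:

```python
def handle_a(message):
    # Topic-specific work for "A" would go here; this sketch just echoes it.
    return f"A: {message}"

def handle_b(message):
    # Topic-specific work for "B" would go here.
    return f"B: {message}"

# Map each subscribed topic to its handler, mirroring the
# /A and /B POST endpoints in app.js.
HANDLERS = {"A": handle_a, "B": handle_b}

def dispatch(topic, envelope):
    """Route an incoming envelope's payload to the handler for its topic."""
    return HANDLERS[topic](envelope["data"]["message"])
```

A real application would replace the echo bodies with business logic (persisting the message, calling another service, and so on).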
Navigate to the `python-subscriber` directory and open `app.py`, the code for the Python subscriber. As with the Node.js subscriber, three API endpoints are exposed, this time using `flask`. The first is a GET endpoint:
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
subscriptions = [{'pubsubname': 'pubsub', 'topic': 'A', 'route': 'A'}, {'pubsubname': 'pubsub', 'topic': 'C', 'route': 'C'}]
return jsonify(subscriptions)
Again, this is how you tell Dapr which topics on which pubsub component to subscribe to. In this case, the app subscribes to topics "A" and "C" of the pubsub component named 'pubsub'. Messages on those topics are handled by the other two routes:
@app.route('/A', methods=['POST'])
def a_subscriber():
print(f'A: {request.json}', flush=True)
print('Received message "{}" on topic "{}"'.format(request.json['data']['message'], request.json['topic']), flush=True)
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
@app.route('/C', methods=['POST'])
def c_subscriber():
print(f'C: {request.json}', flush=True)
print('Received message "{}" on topic "{}"'.format(request.json['data']['message'], request.json['topic']), flush=True)
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
Note: if `flush=True` is not set, logs will not appear when running `kubectl logs ...`. This is a product of Python's output buffering.
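The `request.json` that these handlers read is the CloudEvents envelope Dapr builds around the published payload. A trimmed sketch of its relevant fields (the real envelope carries additional CloudEvents metadata such as `id` and `source`, and the payload values here are illustrative):

```python
# A trimmed example of the envelope a subscriber sees. Dapr wraps the
# published JSON in a CloudEvents envelope and adds pub/sub fields
# such as "topic" and "pubsubname" alongside the "data" payload.
envelope = {
    "specversion": "1.0",
    "type": "com.dapr.event.sent",
    "pubsubname": "pubsub",
    "topic": "A",
    "data": {"messageType": "A", "message": "hello"},
}

# These are the two fields the Python subscriber reads in its print statements:
message = envelope["data"]["message"]
topic = envelope["topic"]
```

This is why the handlers index into `request.json['data']['message']` rather than reading the payload at the top level.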
Navigate to the `csharp-subscriber` directory and open `Program.cs`, the code for the C# subscriber. Three API endpoints are exposed, this time using ASP.NET Core 6 minimal APIs.
Again, this is how you tell Dapr which topics on which pubsub component to subscribe to. In this case, the app subscribes to topics "A", "B", and "C" of the pubsub component named 'pubsub'. Messages on those topics are handled by these three routes:
using Dapr;
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
// Dapr configurations
app.UseCloudEvents();
app.MapSubscribeHandler();
app.MapPost("/A", [Topic("pubsub", "A")] (ILogger<Program> logger, MessageEvent item) => {
logger.LogInformation($"{item.MessageType}: {item.Message}");
return Results.Ok();
});
app.MapPost("/B", [Topic("pubsub", "B")] (ILogger<Program> logger, MessageEvent item) => {
logger.LogInformation($"{item.MessageType}: {item.Message}");
return Results.Ok();
});
app.MapPost("/C", [Topic("pubsub", "C")] (ILogger<Program> logger, Dictionary<string, string> item) => {
logger.LogInformation($"{item["messageType"]}: {item["message"]}");
return Results.Ok();
});
app.Run();
internal record MessageEvent(string MessageType, string Message);
Our publisher is broken up into a client and a server:
The client is a simple single-page React application that was bootstrapped with Create React App. The relevant client code sits in `react-form/client/src/MessageForm.js`, where a form is presented to the user. As the user updates the form, React state is updated with the latest aggregated JSON data. By default the data is set to:
{
messageType: "A",
message: ""
};
Upon submission of the form, the aggregated JSON data is sent to the server:
fetch('/publish', {
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
method:"POST",
body: JSON.stringify(this.state),
});
The server is a basic express application that exposes a POST endpoint, `/publish`. This takes requests from the client and publishes them through Dapr. Express's built-in JSON middleware function is used to parse the JSON out of the incoming requests:
app.use(express.json());
This allows us to determine which topic to publish the message on. To publish messages through Dapr, the URL needs to look like `http://localhost:<DAPR_PORT>/v1.0/publish/<PUBSUB_NAME>/<TOPIC>`, so the `publish` endpoint builds a URL and posts the JSON to it. The POST request also returns a success code in the response upon successful completion.
const publishUrl = `${daprUrl}/publish/${pubsubName}/${req.body?.messageType}`;
await axios.post(publishUrl, req.body);
return res.sendStatus(200);
Note how the `daprUrl` determines which port Dapr lives on:
const daprUrl = `http://localhost:${process.env.DAPR_HTTP_PORT || 3500}/v1.0`;
By default, Dapr lives on port 3500, but if you run Dapr locally on a different port (using the `--dapr-http-port` flag in the CLI `run` command), that port is injected into the application as the `DAPR_HTTP_PORT` environment variable.
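The same URL construction can be sketched in Python. This mirrors the express server's logic under the same assumptions (the `dapr_publish_url` helper name is illustrative):

```python
import os

def dapr_publish_url(pubsub_name, topic):
    """Build the Dapr HTTP publish URL, falling back to the default
    port 3500 when DAPR_HTTP_PORT is not injected into the environment."""
    port = os.environ.get("DAPR_HTTP_PORT", "3500")
    return f"http://localhost:{port}/v1.0/publish/{pubsub_name}/{topic}"
```

With no `DAPR_HTTP_PORT` set, `dapr_publish_url("pubsub", "A")` resolves to `http://localhost:3500/v1.0/publish/pubsub/A`, the same shape of URL the server posts the form data to.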
The server also hosts the React application itself by forwarding default home page (`/`) route requests to the built client code:
app.get('/', function (_req, res) {
res.sendFile(path.join(__dirname, 'client/build', 'index.html'));
});
Developers use a pub-sub messaging pattern to achieve high scalability and loose coupling.
Pub-sub is generally used for large applications that need to be highly scalable. Pub-sub applications often scale better than traditional client-server applications.
Pub-sub allows us to completely decouple the components. Publishers need not be aware of any of their subscribers, nor must subscribers be aware of publishers. This allows developers to write leaner microservices that don't take an immediate dependency on each other.
- Explore additional quickstarts.