Samples and resources of how to design WebApi with .NET
- WebApi with .NET Core
Feel free to create an issue if you have any questions or requests for more explanation or samples. I also take Pull Requests!
đź’– If this repository helped you - I'd be more than happy if you join the group of my official supporters at:
👉 Github Sponsors
- Install .NET Core SDK 3.1 from link.
- Install one of IDEs:
- Visual Studio - link - for Windows only. Community edition is available for free,
- Visual Studio for Mac - link - for MacOS only. Available for free,
- Visual Studio Code - link - with C# plugin. Cross-platform support. Available for free,
- Rider - link - cross-platform support. Paid, but there are available free options (for OpenSource, students, user groups etc.)
From the documentation: "Routing is responsible for matching incoming HTTP requests and dispatching those requests to the app's executable endpoints."
In other words, routing is responsible for finding the exact endpoint based on the request parameters - usually based on URL pattern matching.
An endpoint executes the logic that creates an HTTP response based on the request.
To use routing and endpoints, you need to call the `UseRouting` and `UseEndpoints` extension methods on the app builder in the `Startup.Configure` method. That will register routing in the middleware pipeline.
Note that those methods should be called in the order presented above. If the order is changed then routing won't be registered properly.
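Put together, a minimal `Startup.Configure` could look like the sketch below (the endpoint delegate is just an illustration):

```csharp
public class Startup
{
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // 1. registers routing (route matching) in the middleware pipeline
        app.UseRouting();

        // 2. registers endpoint execution - must come after UseRouting
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
                await context.Response.WriteAsync("Hello routing!"));
        });
    }
}
```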
Templates add flexibility to the supported URL definitions.
The simplest option is a static URL, where you have just the URL, eg.:
/Reservations/List
/GetUsers
/Orders/ByStatuses/Closed
Static URLs are fine for list endpoints, but not if we'd like to get a single record by its identifier.
To allow dynamic matching (eg. reservation by id) we need to use parameters. They can be added using the `{parameterName}` syntax, eg.:
/Reservations/{id}
/users/{id}/orders/{orderId}
They don't need to be used only instead of a concrete URL part. You can also do eg.:
- `/Reservations?status={reservationStatus}&user={userId}` - this will get parameters from the query string and match eg. `/Reservations?status=Open&userId=123`, and will have the `status` parameter equal to `Open` and `userId` equal to `123`,
- `/Download/{fileName}.{extension}` - this will match eg. `/Download/testFile.txt` and end up with two route data parameters - `fileName` with `testFile` value and `extension` with `txt` value accordingly,
- `/Configuration/{entityType}Dictionary` - this will match `/Configuration/OrderStatusDictionary` and will have the `entityType` parameter with `OrderStatus` value.
You can also add catch-all parameters - `{**parameterName}` - that can be used as a fallback when no route was found:
- `/Reservations/{id}/{**reservationPath}` - this will match eg. `/Reservations/123/changeStatus/confirmed` and will have the `reservationPath` parameter with `changeStatus/confirmed` value.
It's also possible to make a parameter optional by adding `?` after its name:
- `/Reservations/{id?}` - this will match both the `/Reservations` and `/Reservations/123` routes.
Route template parameters can contain constraints to narrow down the matched results. To use them you need to add the constraint name after the parameter name: `{parameter:constraintName}`.
There is a number of predefined route constraints, eg.:
- `/Reservations/{id:guid}` - will match eg. `/Reservations/632863d2-5cbf-4c9f-92e1-749d264d965e` but won't match eg. `/Reservations/123`,
- `/Reservations/top/{limit:int:min(1):max(10)}` - this will only allow passing integers between `1` and `10` for the `limit` parameter. So it will allow getting at most the top 10 reservations,
- `/Inbox?from={fromEmailAddress:regex(\\[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4})}` - regex can also be used to eg. check an email address or provide a more advanced format check. This will match `/Inbox?from=john.doe@company.com` and will have the `fromEmailAddress` parameter with `john.doe@company.com` value,
- see more constraint examples in the route constraint documentation.
Note - a failing constraint will result in a `400 BadRequest` status code; however, the messages are generic and not user friendly. So if you'd like to make them more related to your business case, it's suggested to move that check into validation inside the code.
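Such in-code validation could look like the sketch below (the controller, route and error message are illustrative, not taken from the original samples):

```csharp
[Route("api/reservations")]
[ApiController]
public class ReservationsController : ControllerBase
{
    [HttpGet("top/{limit}")]
    public IActionResult GetTop(int limit)
    {
        // validate in code instead of a route constraint,
        // so we can return a business-friendly error message
        if (limit < 1 || limit > 10)
            return BadRequest("Limit has to be between 1 and 10.");

        // (...) query at most `limit` reservations
        return Ok();
    }
}
```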
You can also define your own custom constraint. A sample use case would be when you want to provide validation for your business id format.
See the sample below that validates whether a reservation id is built from 3 non-empty parts split by `|`:
public class ReservationIdConstraint : IRouteConstraint
{
public bool Match(
HttpContext httpContext,
IRouter route,
string routeKey,
RouteValueDictionary values,
RouteDirection routeDirection)
{
if (routeKey == null)
{
throw new ArgumentNullException(nameof(routeKey));
}
if (values == null)
{
throw new ArgumentNullException(nameof(values));
}
if (!values.TryGetValue(routeKey, out var value) || value == null)
{
return false;
}
var reservationId = Convert.ToString(value, CultureInfo.InvariantCulture);
return reservationId.Split("|").Where(part => !string.IsNullOrWhiteSpace(part)).Count() == 3;
}
}
You need to register it in `Startup.ConfigureServices`, in the `AddRouting` method:
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
// registers controllers in dependency injection container
services.AddControllers();
services.AddRouting(options =>
{
options.ConstraintMap.Add("reservationId", typeof(ReservationIdConstraint));
});
}
// (...)
}
Then you can use it in a route:
- `/Reservations/{id:reservationId}` - this will match `/Reservations/RES|123|01` (and get the `id` parameter with value `RES|123|01`) but won't match `/Reservations/123`.
Routing is split into the following steps:
- request URL parsing,
- matching against registered routes (it's done in parallel, so the order of registration doesn't matter),
- removing, from the matching routes, all that do not satisfy route constraints (eg. a route parameter defined as int was not numeric),
- selecting, from the remaining routes, the single best match (the most concrete one) if possible. If there is still more than one match, an exception is thrown. If there was only a single match but a value did not satisfy its constraint, an exception will also be thrown.
Having eg. the following routes:
/Clients/List
/Clients/{id}
/Reservations/{id:alpha}
/Reservations/{id:int}
/Reservations/List
and trying to match `/Reservations/List`, the routing process will find these matching templates:
/Reservations/{id:alpha}
/Reservations/{id:int}
/Reservations/List
It matched the `Reservations` part and then both `{id}` routes (as `List` could be just a string id) and the concrete part `List`.
Then constraints will be verified and we'll end up with two routes (as `{id:int}` does not match, because `List` is not an integer):
/Reservations/{id:alpha}
/Reservations/List
From this set both are matching, but `List` is more concrete.
Accordingly:
- trying to match `/Reservations/abcde`, routing will match the `/Reservations/{id:alpha}` route,
- trying to match `/Reservations/123`, routing will match the `/Reservations/{id:int}` route.
ASP.NET Core allows defining raw endpoints without the need to use controllers. They can be defined inside the `UseEndpoints` method, by calling the `MapGet`, `MapPost` etc. methods:
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
// registers routing in middleware pipeline
app.UseRouting();
// defines endpoints to be routed
app.UseEndpoints(endpoints =>
{
endpoints.MapGet("/Reservations/{id}", async context =>
{
var id = context.Request.RouteValues["id"];
await context.Response.WriteAsync($"Reservation with {id}!");
});
});
}
}
Using endpoints currently requires a lot of bare-bones code. This will change with .NET 5, which will get a set of useful methods that will make endpoints a first-class citizen. See more in the accepted API review: link.
HTTP requests can be mapped to controllers in two ways: conventionally and through attributes.
Conventional mapping is done by calling the `MapControllerRoute` method inside `UseEndpoints`. It allows providing a route template (`pattern`), name and controller action mapping.
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
// registers routing in middleware pipeline
app.UseRouting();
// defines endpoints to be routed
app.UseEndpoints(endpoints =>
{
// defines concrete routing to single controller action
endpoints.MapControllerRoute(name: "blog",
pattern: "Reservations/{id}",
defaults: new { controller = "Reservations", action = "Get" });
// defines "catch-all" routing that will route all requests
// matching `/Controller/Action` or `/Controller/Action/id`
endpoints.MapControllerRoute(name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
});
}
}
An important thing to note is that controllers should have the `Controller` suffix in the name (eg. `ReservationsController`), but routes should be defined without it (so `Reservations`).
Controllers are derived from the MVC pattern concept. They are responsible for orchestration between requests (inputs) and models. Routing can be defined by putting attributes on top of method and controller definition.
If you want to use controllers then you should also call `AddControllers` in `ConfigureServices` (to register them in the dependency injection container) and `MapControllers` inside `UseEndpoints` to map controller route configuration.
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
// registers controllers in dependency injection container
services.AddControllers();
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
// registers routing in middleware pipeline
app.UseRouting();
// defines endpoints to be routed
app.UseEndpoints(endpoints =>
{
// maps controllers routes to endpoints
endpoints.MapControllers();
});
}
}
Route attribute
The most generic attribute is `[Route]`. It defines the route that leads to the method it's marking.
public class ReservationsController : Controller
{
[Route("")]
[Route("Reservations")]
[Route("Reservations/List")]
[Route("Reservations/List/{status?}")]
public IActionResult List(string status)
{
//(...)
}
[Route("Reservations/Summary")]
[Route("Reservations/Summary/{userId?}")]
public IActionResult Summary(int? userId)
{
// (...)
}
}
In this example, the routes:
- `/`, `/Reservations`, `/Reservations/List`, `/Reservations/List/Open` will be routed to the `List` method,
- `/Reservations/Summary`, `/Reservations/Summary/123` will be routed to the `Summary` method.
An important note is that you should not use `action`, `area`, `controller`, `handler` or `page` as route template variables (eg. `/Reservations/{page}`). Those names are reserved for the internals of the routing logic. Using them will make routing fail.
HTTP methods attributes
ASP.NET Core also provides more specific attributes - `[HttpGet]`, `[HttpPost]`, `[HttpPut]`, `[HttpDelete]`, `[HttpHead]`, `[HttpPatch]` - representing HTTP methods. Besides the URL routing, they also perform matching based on the HTTP method.
Normally, when using them, you should add a `[Route]` attribute on the controller that will add a prefix for all the routes defined by the HTTP verb attributes.
Sample of the most common CRUD controller definition:
[Route("api/[controller]")]
[ApiController]
public class ReservationsController : ControllerBase
{
[HttpGet]
public IActionResult List([FromQuery] string filter)
{
//(...)
}
[HttpGet("{id}")]
public IActionResult Get(int id)
{
// (...)
}
[HttpPost]
public IActionResult Create([FromBody] CreateReservation request)
{
// (...)
}
[HttpPut("{id}")]
public IActionResult Put(int id, [FromBody] UpdateReservation request)
{
// (...)
}
[HttpDelete("{id}")]
public IActionResult Delete(int id)
{
// (...)
}
}
Using `[Route("api/[controller]")]` will define the route based on the controller name - in this case it will be `/api/Reservations`. By convention, WebApi routes usually start with an `/api` prefix. The prefix is optional and can have a different value. If you'd like, you could also add a suffix, eg. `[Route("api/[controller]/open")]`, if you'd like to have a dedicated controller for open reservations.
The benefit of using `[controller]` is that when you rename the controller, the route will also be updated. If you want to avoid accidental route name changes then you should use a concrete route, eg. `[Route("api/reservations")]`.
Having that:
- `GET /api/Reservations` will be routed to the `List` method. The value for the `filter` parameter, because of the `[FromQuery]` attribute, will be mapped from the request query string. For `GET /api/Reservations?filter=open` it will have the `open` value; for the default route `GET /api/Reservations` it will be `null`,
- `GET /api/Reservations/123` will be routed to the `Get` method. The value of the `id` parameter will be taken, by convention, from the route parameter,
- `POST /api/Reservations` will be routed to the `Create` method. The value for the `request` parameter, because of the `[FromBody]` attribute, will be mapped from the request body (so eg. JSON sent from the client),
- `PUT /api/Reservations/123` will be routed to the `Put` method,
- `DELETE /api/Reservations/123` will be routed to the `Delete` method.
It's not mandatory to use a route prefix. Most of the time it's useful, but when you have nesting inside the API then it's worth setting it up manually, eg.:
[ApiController]
public class UserReservationsController : ControllerBase
{
[HttpGet("api/users/{userId}/reservations")]
public IActionResult List(int userId, [FromQuery] string filter)
{
//(...)
}
[HttpGet("api/users/{userId}/reservations/{id}")]
public IActionResult Get(int userId, int id)
{
// (...)
}
[HttpPost("api/users/{userId}/reservations")]
public IActionResult Create(int userId, [FromBody] CreateReservation request)
{
// (...)
}
[HttpPut("api/users/{userId}/reservations/{id}/status")]
public IActionResult Put(int userId, int id, [FromBody] UpdateReservationStatus request)
{
// (...)
}
}
- Microsoft Documentation - Routing in ASP.NET Core
- Microsoft Documentation - Routing to controller actions in ASP.NET Core
- DotNetMentors - http://dotnetmentors.com/mvc/explain-asp-net-mvc-routing-with-example.aspx
- StrathWeb - Dynamic controller routing in ASP.NET Core 3.0
- Andrew Lock - Accessing route values in endpoint middleware in ASP.NET Core 3.0
Let's go back in time. In 2000 Roy Fielding wrote a doctoral dissertation titled "Architectural Styles and the Design of Network-based Software Architectures". This dissertation gave rise to "REpresentational State Transfer" - REST. Roy created REST as an architectural style based on the principles that make the Internet so successful. The World Wide Web runs on HTTP, which has a number of conventions that provide the basis for scalability, fault tolerance, and loose coupling. REST and HTTP are not the same thing, but REST fully embraces HTTP. It means that it uses verbs, status codes, headers, and resources identified by URIs in order to fulfill the constraints that together compose the so-called RESTful style. What are those constraints?
REST, like any other architectural style, describes constraints that, composed together, define the basis of the RESTful style.
This constraint specifies that there's a distinction between a client and a server. This separation allows the components to evolve independently, thus improving portability and scalability.
Each request must have all the information necessary for its correct completion. It means that all the state that's contained for a given web request is contained within the request itself as a part of the URI, query string parameters, body, or headers. Since there is no session related dependency, each server can handle any request thus API can be easily scaled. Removing all server-side state synchronization logic also makes REST APIs less complex.
The server should label what data within a response to a request can be cached and what cannot. If a response can be cached, then a client cache is given the right to reuse that response data for later, equivalent requests. Following this constraint gives the potential to partially or completely eliminate some interactions, thus improving performance and scalability, and also decreasing latency.
The client can make a request and the response could come from a web server, a load balancer, a cache, etc. For the client it doesn't really matter where the data is coming from, as long as it gets the requested information. In other words, before the server completes the response, it can perform additional operations that the client does not need to know about.
This is the only optional constraint. Most of the time, the server will be sending static representations of resources in the form of XML or JSON, but on demand it can send additional code (eg. JavaScript) that can be executed on the client side. This simplifies clients by reducing the number of features required to be pre-implemented.
The server should provide an API that will be well understood by all applications communicating with it. By designing one interface, we should respond to the needs of all applications that use it. In order to obtain such a uniform interface, four additional constraints must be met.
On the basis of a single request, the server can identify the resource it concerns. For that purpose, most often the Uniform Resource Identifier - URI - is used. It distinguishes the resource from any other, and through it interaction with that resource takes place. In the example below we have an address pointing at the specific employee with id 123. This address is the URI, which is the identifier, and the returned employee is the resource.
GET http://example.org/employees/123
200 OK
{
"employeeId": 123,
"firstName": "John",
"lastName": "Doe"
}
The server can return a response in various formats (media types) like HTML, XML, JSON etc. That format is the representation of the identified resource that the client can understand and manipulate. It is possible for the client to request a specific representation that fits its needs. This is accomplished via the `Accept` header.
GET http://example.org/employees/123
Accept: application/xml
200 OK
<?xml version="1.0" encoding="UTF-8"?>
<employee>
<employeeId type="integer">123</employeeId>
<firstName>John</firstName>
<lastName>Doe</lastName>
</employee>
Clients are also allowed to indicate the format of the representation when sending data to the server. This is accomplished via the `Content-Type` header. The server response should not be affected by the chosen format.
POST http://example.org/employees
Content-type: application/json
{
"firstName": "John",
"lastName": "Doe"
}
201 Created
Location: http://example.org/employees/123
A message, whether a request or a response, is considered self-descriptive when it contains all the information necessary to complete the task. In other words, it should contain all the information that the recipient needs to understand it. Below is an example of a self-descriptive message. It contains information about the protocol, the host, which type of action needs to be performed (the HTTP method), and the desired resource representation to be returned (the `Accept` header). Such a message will be well understood by the server.
GET /employees/123 HTTP/1.1
Host: example.org
Accept: application/json
The server can respond accordingly. That message is also self-descriptive. It tells the client that the operation was successful by returning an appropriate status code. It also tells how to interpret the message body by specifying the `Content-Type` header.
HTTP/1.1 200 OK
Content-Type: application/json
{
"employeeId": 123,
"firstName": "John",
"lastName": "Doe"
}
Together, the first three uniform interface constraints imply the fourth. It can be summarised as: sending self-descriptive messages to uniquely identified resources, using representations, changes the state of the application. This constraint allows comparing a RESTful API to a website. As a website is a collection of links leading to subsequent subpages, HATEOAS means the same can be done with an API. Think of it as a situation at the office when you want to start a new business. You can't just go there and "POST" a new company. You must submit an application for creating a new company, and then you will receive an answer like: "Thank you for submitting an application. Here are the next possible steps that you can perform: cancellation of the application, address change, financing".
POST http://example.org/companies
{
"name": "NewOne",
"address": "Example 5",
"owner": {
"firstName": "John",
"lastName": "Doe"
}
}
HTTP/1.1 201 Created
{
"companyId": 1234,
"name": "NewOne",
"address": "Example 5",
"owner": {
"firstName": "John",
"lastName": "Doe"
},
"_links":{
"self":{
"href": "http://example.org/companies/1234",
"method": "GET"
},
"cancellation":{
"href": "http://example.org/companies/1234",
"method": "DELETE"
}
}
}
By default in .NET Core there are six levels of logging (available through the `LogLevel` enum):
- `Trace` (value `0`) - the most detailed and verbose information about the application flow,
- `Debug` (`1`) - useful information during the development process (eg. local environment bug investigation),
- `Information` (`2`) - usually important information about the application flow that can be useful for diagnostics,
- `Warning` (`3`) - a potential unexpected application event or an error that's not blocking the flow (eg. an operation was successfully saved to the database but a notification failed, or a transient error occurred but succeeded after retry),
- `Error` (`4`) - an unexpected application error - eg. no record found to update, database timeout, argument exception etc.,
- `Critical` (`5`) - critical events that require immediate action, like an application or system crash, end of disk space or a database in an irrecoverable state,
- `None` (`6`) - no logs at all; usually used in configuration to disable logging for a selected category.
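As a rough guide, these levels map to the corresponding `ILogger` log methods. A sketch (the class, messages and `reservationId` below are illustrative):

```csharp
using System;
using Microsoft.Extensions.Logging;

public class ReservationService
{
    private readonly ILogger<ReservationService> logger;

    public ReservationService(ILogger<ReservationService> logger) =>
        this.logger = logger;

    public void Handle(Guid reservationId)
    {
        logger.LogTrace("Raw input: {ReservationId}", reservationId);                 // dev-only details
        logger.LogDebug("Loaded reservation {ReservationId} from cache", reservationId);
        logger.LogInformation("Reservation {ReservationId} created", reservationId);
        logger.LogWarning("Notification for {ReservationId} failed, retrying", reservationId);
        logger.LogError("Updating reservation {ReservationId} failed", reservationId);
        logger.LogCritical("Database is unreachable");
    }
}
```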
It's important to keep in mind that `Trace` and `Debug` should not be used on production; they should be used only for development/debugging purposes (`Trace` is disabled by default).
Because of their characteristics, they may contain sensitive application information (eg. system secrets, PII/GDPR data). Because of that, we need to be sure that they are disabled on the production environment, as otherwise they may end up causing a security leak.
As they're also verbose, keeping them enabled on a production system may significantly increase the cost of log storage. Plus, too many logs make the important ones hard to find.
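For example, a production configuration (eg. in a hypothetical `appsettings.Production.json`) could keep the minimum level at `Information`, so `Trace` and `Debug` entries are never emitted:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```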
Each logger instance needs to have an assigned category. Categories allow grouping log messages (as the category will be added to each log entry). By convention, the category should be passed as the type parameter of `ILogger`. Usually it's the class into which we're injecting the logger, eg.:
[Route("api/Reservations")]
public class ReservationsController: Controller
{
private readonly ILogger<ReservationsController> logger;
public ReservationsController(ILogger<ReservationsController> logger)
{
this.logger = logger;
}
[HttpPost]
public async Task<IActionResult> Create([FromBody] CreateReservationRequest request)
{
var reservationId = Guid.NewGuid();
// (...)
logger.LogInformation("Created reservation with {ReservationId}", reservationId);
return Created("api/Reservations", reservationId);
}
}
A log category created with the type parameter will contain the full type name (so eg. `LoggingSamples.Controllers.ReservationController`).
It's also possible (however not recommended) to define it through the `ILoggerFactory.CreateLogger(string categoryName)` method:
[Route("api/Reservations")]
public class ReservationsController: Controller
{
private readonly ILogger logger;
public ReservationsController(ILoggerFactory loggerFactory)
{
this.logger = loggerFactory.CreateLogger("LoggingSamples.Controllers.ReservationController");
}
}
Categories are useful for searching through logs and diagnosing issues. As mentioned in the previous section, it's also possible to define different log levels per category in the configuration.
Eg. if you have the default log level `Information` and you need to investigate issues occurring in a specific controller (eg. `ReservationsController`), then you can change the log level to `Debug` for a dedicated category.
{
"Logging": {
"LogLevel": {
"Default": "Information",
"LoggingSamples.Controllers.ReservationController": "Debug"
}
}
}
Then for all categories but `LoggingSamples.Controllers.ReservationController` you'll have logs logged for `Information` and above (`Information`, `Warning`, `Error`, `Critical`), and for `LoggingSamples.Controllers.ReservationController` also `Debug`.
The other example is disabling logs from a selected category, eg. because:
- you noticed that it is logging some sensitive information and you need to change that quickly,
- you want to mute some unimportant system logs,
- you want to make sure that logs from a specific category (eg. `LoggingSamples.Controllers.AuthenticationController`) won't ever be logged on production.
{
"Logging": {
"LogLevel": {
"Default": "Information",
"LoggingSamples.Controllers.AuthenticationController": "None"
}
}
}
Besides categories, it's possible to define logging scopes. They allow adding a set of custom information to each log entry.
Scopes are disabled by default - if you'd like to use them then you need to toggle them on in the configuration:
{
"Logging": {
"IncludeScopes": true,
"LogLevel": {
"Default": "Information"
}
}
}
Having that, you can use the `ILogger.BeginScope` method to define one or more logging scopes.
The first potential use case is to always add the entity type and identifier to all logs in business logic, so you don't need to add them to each entry - eg. the reservation id during its update. You can also create nested scopes.
[HttpPut("{id}")]
public async Task<IActionResult> Update(Guid id, [FromBody] UpdateReservationRequest request)
{
    using (logger.BeginScope("For {EntityType}", "Reservation"))
    {
        using (logger.BeginScope("With {EntityId}", id))
        {
            logger.LogInformation("Starting reservation update process for {request}", request);
            // (...)
        }
    }
    return Ok();
}
You can also create scopes in an aspect-oriented way - eg. in middleware, to inject scopes globally.
An example would be injecting information from the request (eg. client IP, user id) as a logging scope.
The sample below shows how to inject a correlation ID into the logger scope.
public class CorrelationIdMiddleware
{
private readonly RequestDelegate next;
private readonly ILogger logger;
public CorrelationIdMiddleware(RequestDelegate next, ILoggerFactory loggerFactory)
{
this.next = next;
logger = loggerFactory.CreateLogger<CorrelationIdMiddleware>();
}
public async Task Invoke(HttpContext context /* other scoped dependencies */)
{
var correlationID = Guid.NewGuid();
using (logger.BeginScope("CorrelationID: {CorrelationID}", correlationID))
{
await next(context);
}
}
}
The other option for grouping logs is log events. They are normally used to group logs by purpose - eg. updating an entity, starting a controller action, not finding an entity etc. To define them you need to provide a standardized list of int event ids, eg.:
public class LogEvents
{
public const int InvalidRequest = 911;
public const int ConflictState = 112;
public const int EntityNotFound = 1000;
}
Sample usage:
[HttpPut]
public IActionResult Update([FromBody] UpdateReservation request)
{
logger.LogInformation("Initiating reservation creation for {seatId}", request?.SeatId);
if (request?.SeatId == null || request?.SeatId == Guid.Empty)
{
logger.LogWarning(LogEvents.InvalidRequest, "Invalid {SeatId}", request?.SeatId);
return BadRequest("Invalid SeatId");
}
if (request?.ReservationId == null || request?.ReservationId == Guid.Empty)
{
logger.LogWarning(LogEvents.InvalidRequest, "Invalid {ReservationId}", request?.ReservationId);
return BadRequest("Invalid ReservationId");
}
// (...)
return Created("api/Reservations", reservation.Id);
}
- Microsoft Docs - Logging in ASP.NET Core
- Microsoft Docs - High-performance logging with LoggerMessage in ASP.NET Core
- Steve Gordon - High-Performance Logging in .NET Core
- Software Engineering StackExchange - Benefits of Structured Logging vs basic logging
- Message Templates
- Andre Newman - Tools and Techniques for Logging Microservices
- Siva Prasad Rao Janapati - Distributed Logging Architecture for Microservices
- Szymon Warda - Stop trying to mock the ILogger methods
- Andrew Lock - How to include scopes when logging exceptions in ASP.NET Core
- Rico Suter - Logging with ILogger in .NET: Recommendations and best practices
- Stephen Cleary - Microsoft.Extensions.Logging
- Stephen Cleary - A New Pattern for Exception Logging
- Serilog Documentation
- Nicholas Blumhardt - Setting up Serilog in ASP.NET Core 3
- Ben Foster - Serilog Best Practices
- Alfus Jaganathan - Scoped logging using Microsoft Logger with Serilog in .Net Core Application
- HumanKode - Logging with ElasticSearch, Kibana, ASP.NET Core and Docker
- Than Le - Building logging system in Microservice Architecture with ELK stack and Serilog .NET Core
- Marco de Sanctis - Monitor ASP.NET Core in ELK through Docker and Azure Event Hubs
- Microsoft Docs - Logging with Elastic Stack
- Ali Mselmi - Structured logging with Serilog and Seq and ElasticSearch under Docker
- Logz.io - Complete Guide to ELK Stack
- Logit.io - How to install ELK
- Logz.io - Best practices for managing ElasticSearch indices
- Andrew Lock - Writing logs to Elasticsearch with Fluentd using Serilog in ASP.NET Core
- Elastic Documentation - Install ElasticSearch with Docker
- AWS User Group Bengaluru - Log analytics with ELK stack
- Steve Gordon - ASP.NET Core Correlation IDs
- Steve Gordon - CorrelationId NuGet Package
- Vicenç GarcĂa - Capturing and forwarding correlation IDs in ASP.NET Core
- Vicenç GarcĂa - Capturing and forwarding correlation IDs in ASP.NET Core, the easy way
To set up the Docker configuration you need to create a Dockerfile (usually located in the root project folder).
Docker allows defining a complete build and runtime setup. It also allows a multistage build: in the first stage you can use different tools for building the binaries, then in the next stage you just copy the prepared binaries and host them in the final image. Thanks to that, the final Docker image is smaller and more secure, as it doesn't contain eg. source code and build tools.
Microsoft provides Docker images that can be used as a base for your Docker configuration. You can choose from various images, but usually you're using either:
- `mcr.microsoft.com/dotnet/core/sdk:3.1` - Debian based,
- `mcr.microsoft.com/dotnet/core/sdk:3.1-alpine` - Alpine based, trimmed to have only basic tools preinstalled.
It's recommended to start with `alpine`, as it's much smaller, and use the regular image if you need more advanced configuration that's lacking in Alpine. There are also Windows containers, but they're rarely used. For most cases, Linux-based images will be the first option to choose.
See example of DOCKERFILE
:
########################################
# First stage of multistage build
########################################
# Use Build image with label `builder`
########################################
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine AS builder
# Setup working directory for project
WORKDIR /app
# Copy project files
COPY *.csproj ./
# Restore nuget packages
RUN dotnet restore
# Copy project files
COPY . ./
# Build project with Release configuration
# and no restore, as we did it already
RUN dotnet build -c Release --no-restore
## Test project with Release configuration
## and no build, as we did it already
#RUN dotnet test -c Release --no-build
# Publish project to output folder
# and no build, as we did it already
RUN dotnet publish -c Release --no-build -o out
########################################
# Second stage of multistage build
########################################
# Use other build image as the final one
# that won't have source codes
########################################
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
# Setup working directory for project
WORKDIR /app
# Copy published in previous stage binaries
# from the `builder` image
COPY --from=builder /app/out .
# Set URL that App will be exposed
ENV ASPNETCORE_URLS="http://*:5000"
# Set the entry point command to automatically
# run the application on `docker run`
ENTRYPOINT ["dotnet", "DockerContainerRegistry.dll"]
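With that `DOCKERFILE` in place, the image can be built and run locally as follows. The image tag `dockercontainerregistry` is just an assumed name, and the port matches the `ASPNETCORE_URLS` setting above:

```shell
# Build the image using the multistage DOCKERFILE in the current folder
# (-f is needed because the file isn't named "Dockerfile")
docker build -t dockercontainerregistry -f DOCKERFILE .

# Run the container, mapping host port 5000
# to the port set via ASPNETCORE_URLS
docker run -it -p 5000:5000 dockercontainerregistry
```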
All modern IDEs allow debugging ASP.NET Core applications running inside a local Docker container. See links:
- Rider - Debugging ASP.NET Core apps in a local Docker container
- Visual Studio Code - ASP.NET Core in a container
- Niranjan Singh - How to enable docker support ASP.NET applications in Visual Studio
- Download Docker
- Docker Hub
- Microsoft Docker images
- Vladislav Supalov - Docker ARG, ENV and .env - a Complete Guide
- Microsoft Documentation - ARM Templates
- Microsoft Github - Learning ARM
- Microsoft Documentation - Azure CLI - ARM Deployments
- Microsoft Documentation - Tutorial: Build a custom image and run in App Service from a private registry
- Microsoft Documentation - What if deployment
- Microsoft Documentation - ARM Templates Reference
- Microsoft Documentation - Quickstart: Set and retrieve a secret from Azure Key Vault using Azure CLI
- Microsoft Documentation - Use Azure Key Vault to pass secure parameter value during deployment
Azure DevOps has a built-in `AzureCLI@1` task that can run Azure CLI commands. To use it, you need to configure an Azure Resource Manager connection. It's possible to do that either with the default service principal or by setting up a custom one with a specific set of permissions.
To allow new resource group creation, you need to add at least the `Microsoft.Resources/subscriptions/resourcegroups/write` permission on the subscription level. You can do that through the `Access Control (IAM)` section (Home => Subscriptions => Select subscription => IAM). Then you need to assign a role that has that permission (e.g. `Contributor`, but beware - using it might be dangerous, as it has high-level access permissions; someone with access to Azure DevOps could get access to subscription management). You can also define your own custom role with a minimum set of permissions.
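As a sketch of such a minimal custom role (the role name is hypothetical and `{subscriptionId}` is a placeholder - adjust both to your setup), it could be created with Azure CLI like this:

```shell
# Create a custom role that can only create resource groups,
# assignable at the subscription scope.
# Replace {subscriptionId} with your own subscription id.
az role definition create --role-definition '{
  "Name": "Resource Group Creator",
  "Description": "Can create resource groups in the subscription",
  "Actions": [
    "Microsoft.Resources/subscriptions/resourcegroups/write"
  ],
  "AssignableScopes": [ "/subscriptions/{subscriptionId}" ]
}'
```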
A sample usage would be creating a new resource group and an Azure Container Registry:
parameters:
vmImageName: 'ubuntu-16.04'
resourceGroupName: ''
imageRepository: ''
subscription: ''
stages:
- stage: create_azure_group_and_azure_docker_registry
displayName: Create Azure Group And Azure Docker Registry
jobs:
- job: create_azure_group_and_azure_docker_registry
pool:
vmImage: ${{ parameters.vmImageName }}
steps:
- task: AzureCLI@1
displayName: Create Resource Group
inputs:
azureSubscription: ${{ parameters.subscription }}
scriptLocation: 'inlineScript'
inlineScript: az group create --name ${{ parameters.resourceGroupName }} --location northeurope
- task: AzureCLI@1
displayName: Create Azure Container Registry
inputs:
azureSubscription: ${{ parameters.subscription }}
scriptLocation: 'inlineScript'
inlineScript: az acr create --resource-group ${{ parameters.resourceGroupName }} --name ${{ parameters.imageRepository }} --sku Basic
Sample usage of this template would look like:
variables:
vmImageName: 'ubuntu-16.04'
imageRepository: dockercontainerregistrysample
dockerRegistryServiceConnection: AzureDockerRegistry
resourceGroupName: WebApiWithNetCore
subscription: AzureWebApiWithNetCore
stages:
- template: AzureDevOps/Stages/CreateAzureGroupAndAzureDockerRegistry.yml
parameters:
imageRepository: $(imageRepository)
resourceGroupName: $(resourceGroupName)
subscription: $(subscription)
vmImageName: $(vmImageName)
Links:
- Microsoft Documentation - How to: Use the portal to create an Azure AD application and service principal that can access resources
- Alessandro Moura - Creating a service connection in Azure DevOps
- Barbara 4Bes - Step by step: Manually Create an Azure DevOps Service Connection to Azure
- Microsoft Documentation - Azure CLI Task
- Microsoft Documentation - Service connections
Set up the universal template as follows (with e.g. filename `BuildAndPublishDocker.yml`):
parameters:
- name: imageRepository
- name: dockerRegistryServiceConnection
- name: tag
type: string
- name: vmImageName
default: 'ubuntu-16.04'
- name: dockerfilePath
default: DOCKERFILE
######################################################
# Stage definition
######################################################
stages:
- stage: build_and_push_docker_image
displayName: Build and push Docker image
jobs:
- job: Build
displayName: Build job
pool:
vmImage: ${{ parameters.vmImageName }}
steps:
- checkout: self
- task: Docker@2
displayName: Build a Docker image
inputs:
command: build
repository: ${{ parameters.imageRepository }}
dockerfile: ${{ parameters.dockerfilePath }}
containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
tags: |
${{ parameters.tag }}
- task: Docker@2
displayName: Push a Docker image to container registry
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
inputs:
command: push
repository: ${{ parameters.imageRepository }}
dockerfile: ${{ parameters.dockerfilePath }}
containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
tags: |
${{ parameters.tag }}
Before running the pipeline, you need to perform the following steps manually using `Azure Cloud Shell`:
- Create an Azure Resource Group, e.g. `az group create --name WebApiWithNETCore --location westus`
- Create an Azure Container Registry, e.g. `az acr create --resource-group WebApiWithNETCore --name dockercontainerregistrysample --sku Basic`
- Setup a service connection in Azure Devops. See more in documentation
Use defined stage template and define needed variables, eg.:
variables:
# image version (tag) variables
major: 1
minor: 0
patch: 0
build: $[counter(variables['minor'], 0)] # this will reset when we bump minor
tag: $(major).$(minor).$(patch).$(build)
vmImageName: 'ubuntu-16.04'
dockerfilePath: CD/DockerContainerRegistry/DOCKERFILE
imageRepository: dockercontainerregistrysample
dockerRegistryServiceConnection: AzureDockerRegistry
stages:
- template: AzureDevOps/Stages/BuildAndPublishDocker.yml
parameters:
imageRepository: $(imageRepository)
dockerRegistryServiceConnection: $(dockerRegistryServiceConnection)
tag: $(tag)
vmImageName: $(vmImageName)
dockerfilePath: $(dockerfilePath)
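The `counter` expression used for the `build` variable above can be illustrated with a small sketch. This is not Azure DevOps internals, just a model of the documented behaviour: the counter increments once per run for a given prefix and restarts from the seed when the prefix (here, the `minor` value) changes:

```python
from collections import defaultdict

class PipelineCounter:
    """Models the Azure DevOps $[counter(prefix, seed)] expression."""

    def __init__(self, seed: int = 0):
        # each unseen prefix starts counting from the seed
        self._next = defaultdict(lambda: seed)

    def next(self, prefix) -> int:
        # return the current value for the prefix and increment it
        value = self._next[prefix]
        self._next[prefix] = value + 1
        return value

def image_tag(major, minor, patch, build) -> str:
    # mirrors: tag: $(major).$(minor).$(patch).$(build)
    return f"{major}.{minor}.{patch}.{build}"

counter = PipelineCounter(seed=0)
print(image_tag(1, 0, 0, counter.next(0)))  # 1.0.0.0
print(image_tag(1, 0, 0, counter.next(0)))  # 1.0.0.1 - next run, same minor
print(image_tag(1, 1, 0, counter.next(1)))  # 1.1.0.0 - minor bumped, counter restarts
```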
See more in the pipeline definition: link.
Links:
Before running the pipeline, you need to:
- Create an account and sign in to Docker Hub.
- Create repository (this will be your image name) selecting your Git repository.
- Setup service connection in Azure Devops. See more in documentation
Use defined stage template and define needed variables, eg.:
variables:
# image version (tag) variables
major: 1
minor: 0
patch: 0
build: $[counter(variables['minor'], 0)] # this will reset when we bump minor
tag: $(major).$(minor).$(patch).$(build)
vmImageName: 'ubuntu-16.04'
dockerfilePath: CD/DockerContainerRegistry/DOCKERFILE
imageRepository: oskardudycz/dockercontainerregistrysample
dockerRegistryServiceConnection: DockerHubDockerRegistry
stages:
- template: AzureDevOps/Stages/BuildAndPublishDocker.yml
parameters:
imageRepository: $(imageRepository)
dockerRegistryServiceConnection: $(dockerRegistryServiceConnection)
tag: $(tag)
vmImageName: $(vmImageName)
dockerfilePath: $(dockerfilePath)
- Microsoft Documentation - Quickstart: Use an Azure Resource Manager template to deploy a Linux web app to Azure
- Microsoft Documentation - Azure Resource Group Deployment Task
- AzureDevOps documentation - Service connections
- StackOverflow - Entity Framework Migrations in Azure Pipelines
- Azure DevOps Labs - Deploying a Docker based web application to Azure App Service
- Chris Sainty - Deploying Containerised Apps to Azure Web App for Containers
- Microsoft Documentation - Run a custom Windows container in Azure
- Barbara 4bes - Step by step: Setup a CICD pipeline in Azure DevOps for ARM templates
Before running the pipeline:
- Create an account and sign in to Docker Hub.
- Go to Account Settings => Security: link and click New Access Token.
- Provide the name of your access token, save it and copy the value (you won't be able to see it again, you'll need to regenerate it).
- Go to your GitHub secrets settings (Settings => Secrets, url `https://github.com/{your_username}/{your_repository_name}/settings/secrets/actions`).
- Create two secrets (they won't be visible to other users and will be used in the workflow definition):
  - `DOCKERHUB_USERNAME` - the name of your Docker Hub account (do not mistake it with your GitHub account),
  - `DOCKERHUB_TOKEN` - the pasted value of the token generated in point 3.
Then add a new file in the `.github/workflows` folder of your repository, e.g. `build_and_publish_docker_to_docker_hub.yml`:
name: Build And Publish Docker To DockerHub
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check Out Repo
uses: actions/checkout@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
# Use secrets defined in the GitHub repository,
# based on the token generated in Docker Hub
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
# build image in pull requests
# publish only if branch is `main`
push: ${{ github.ref == 'refs/heads/main'}}
# define the tag with which the Docker image should be published
tags: oskardudycz/webapi_net_core_github_actions:latest
# path to your project subfolder
context: ./CD/DockerContainerRegistry
# path to Dockerfile
file: ./CD/DockerContainerRegistry/DOCKERFILE
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}
- Docker - Configure GitHub Actions
- Docker Blog - Ben De St Paer-Gotch - Docker Github Actions
- GitHub - Publishing Docker images
- GitHub Actions MarketPlace - Build and push Docker images