This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Azure DevOps build/release pipeline best practices and artifacts to reproduce it #949

Closed
SychevIgor opened this issue Feb 26, 2019 · 24 comments

Comments

@SychevIgor
Contributor

SychevIgor commented Feb 26, 2019

First of all, thank you for this project and the books. It's great to have it all together: instead of sending a team a million links every time, you can send just one.

But what's probably missing from the microservices book and the sample is the DevOps process for microservices. Not the trivial case of deploying one app (out of 10+).

As "Evangelists" from Microsoft DevOps constantly talking - "Microservices can't be used without devops effectively". I'm agree with it, but this (devops) topic not covered in a book yet. as well as in a project.

Why is this an issue/gap? For example, in all the googlable samples (both in the Azure DevOps documentation and the DevOps demo generator v2) we can find only trivial examples of how to deploy one app (even if the app is deployed to AKS via Helm). But in eShop you've got 10+ different apps/microservices, and it's unclear how to trigger a deployment of only the modified microservices, as well as some other DevOps questions that can affect the overall adoption of microservices.

If you add details to the book about best practices, how those best practices can be implemented using Azure DevOps pipelines, and maybe even artifacts in the repo to reproduce the mentioned pipelines in our own projects, it will significantly simplify everything for us (customers/.NET devs), because we will have some basic references without weeks of googling/research and so on.

Maybe you could even add this project to the Azure DevOps demo generator: https://docs.microsoft.com/en-us/azure/devops/demo-gen/?view=azure-devops
What's your opinion on it?
@CESARDELATORRE @unaizorrilla @mvelosop

@SychevIgor
Contributor Author

P.S. My question is different from #804 ("plan to host these solutions in Azure DevOps?"); my question is about best practices/artifacts, not about where you will host this app.

@SychevIgor
Contributor Author

Also, the book includes this phrase:
"What this guide does not cover
This guide does not focus on the application lifecycle, DevOps, CI/CD pipelines, or team work. The
complementary guide Containerized Docker Application Lifecycle with Microsoft Platform and Tools
focuses on that subject. The current guide also does not provide implementation details on Azure
infrastructure, such as information on specific orchestrators."

But even the complementary guide doesn't mention Helm or the deployment of a microservices application (multiple different apps under one umbrella).
It's clear how to deploy one application, but not how to do it for a full microservices application like eShop.

@mvelosop
Collaborator

mvelosop commented Mar 4, 2019

Hi @SychevIgor, yes, we are aware of this situation, and there's someone already working on an Azure DevOps pipeline showing best practices. We'll show it in upcoming versions of eShopOnContainers and the guidance eBooks.

Hope this helps.

cc/ @nishanil

@nishanil
Contributor

nishanil commented Mar 5, 2019

Thanks @SychevIgor for the detailed feedback.

Yes, @eiximenis is working on the individual DevOps pipeline for microservices. We will fix the book and the documentation once we have them ready.

@maurei

maurei commented Mar 14, 2019

Yes, @eiximenis is working on the individual DevOps pipeline for microservices. We will fix the book and the documentation once we have them ready.

Thanks, @nishanil, for confirming this is being worked on. I am currently struggling with this myself.
Is there any ETA on this?

I would like to put forward the main issues I'm facing, which are currently not solved in the DevOps setup (or maybe I'm just not seeing it - really looking forward to this part of the ebook!), and see if you guys (@eiximenis) are planning on covering this in the next edition of the book.

CI: upon commit to master, a method to test and build only the affected projects:

  1. If changes are made in a (set of) microservice(s), process only these specific microservices.
    • I've been playing around with git diff to identify the microservices that need to be rebuilt and redeployed, as inspired by this example (see the sketch at the end of this comment).
  2. If changes are made in common code (BuildingBlocks in the showcase project), build the shared code AND build+redeploy only the (deployable) projects that depend on this shared code.
    • I've been thinking about parsing .csproj files to map project dependencies dynamically and figure out which to rebuild. Seems doable, but might not be the most efficient way.
    • Other resources suggest that this can be accomplished "implicitly" by efficient usage of Docker layer caching (even when using serverless build hosts), see this article. My Docker knowledge is currently insufficient to judge whether this will actually solve the problem. We'll probably need a combination of caching and git diffing to target the appropriate projects efficiently.
  3. How to manage read/write permissions in a monorepo (use VFS?)

Furthermore,
4. In all of this, I'm assuming that the documentation will consider the case of a monorepo, as used in the showcase project. But this is not a priori evident: people debate a lot about whether or not to use a monorepo. There's even a Microsoft article that states: Especially for teams embracing microservices, multi-repo can be the right approach (yet this showcase uses a monorepo 🙊). Will this be discussed at all?
5. I cannot find the source of the documentation. I would love to see the draft/progress of the next edition, as it would probably be helpful already. Is this possible?
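
To make point 1 concrete, here's a rough sketch of the git-diff idea as an Azure Pipelines script step (the src/Services layout is borrowed from eShopOnContainers; treat this as an illustration, not a tested pipeline):

steps:
  - script: |
      # Compare this commit against the previous one and collect the
      # top-level service folders that contain changes.
      changed=$(git diff --name-only HEAD~1 HEAD \
        | grep '^src/Services/' \
        | cut -d'/' -f1-3 \
        | sort -u)
      echo "Changed microservices:"
      echo "$changed"
    displayName: 'Detect changed microservices'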

@mvelosop
Collaborator

Hi @maurei,

Regarding the monorepo, it's definitely not the recommended approach for a microservices architecture; however, for eShopOnContainers, as a showcase of architectural patterns, it does make sense, because it's easier to explore and work with. You can read more in issue #921.

As for the first part of your question, is it really possible to detect the dependent microservices?

I might be missing something but, how would you identify dependencies from integration events?

@maurei

maurei commented Mar 18, 2019

We have a project in the common layer that contains all the integration events that are shared, and microservices that use these events have a project reference to it, so this should be identifiable by parsing the .csproj files, if I'm not mistaken?
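
A minimal sketch of that idea as a pipeline step (the shared project name and folder layout are hypothetical):

steps:
  - script: |
      # Find every service project whose .csproj has a ProjectReference
      # to the shared integration-events project (name is hypothetical).
      grep -rl 'IntegrationEvents.csproj' --include='*.csproj' src/Services \
        | while read proj; do
            echo "needs rebuild: $proj"
          done
    displayName: 'Find services referencing the shared events project'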

For integration events related to achieving data consistency across different microservices, we're using a generic integration event, something like DataConsistencyEvent<TResource>, so it makes sense to have it in a common layer: all updates to resources in our web of microservices propagate in the same way (and we'd like to keep it that way: it keeps things simple), relying on the same method of (de)serialization. In this sense it feels like an actual "building block" as opposed to a "shared library" that contributes to coupling between services. (Microservice-specific "calculated" fields can still be added locally and all that.)

For the more specific (domain-type is the correct term here?) integration events: these are still typically shared by at least two microservices, so we still feel it makes sense to share them in a common project. About that: I've been reading through the issues and your discussions with @CESARDELATORRE on the topic, thanks for that, pretty helpful. In our case we're a small team (2-4 developers) and we feel that the extra overhead of maintaining such shared integration events through NuGet packages, and allowing for different versions of these integration events across the microservices, adds unwanted complexity. Forcing ourselves to use the "actual same" integration events everywhere might sometimes be a bit tedious when you need to change affected code in multiple places,

  • but so is having to maintain a nuget package
    • and if in the future you need to use that extra field after all, the one that's in a higher version of the NuGet package, you'll still end up updating the affected code anyway. So (in some cases) maintaining a NuGet package is just a matter of delaying the update of the affected code (unsure if this is relevant?)
  • it is not too tedious as long as our team is small. There's no inter-dependency between teams, and it feels a bit like premature optimisation to worry about that already
  • the resulting code (and the (DevOps) environment to manage this code) will be less complex.

I'm looking forward to hearing your thoughts!

@mvelosop
Collaborator

OK, @maurei, it looks like you have all that's needed to identify dependencies between microservices, although it's quite clear it's a highly opinionated solution: it's nice and works for your team, but it's obviously tied to that way of structuring your app.

For that very same reason you'll probably have to find a solution that works in your specific setup, as I'm guessing @eiximenis is working on general cases.

But as usual, there's no one-size-fits-all solution, so if it works for your team, great.

Hope this helps.

@eiximenis
Contributor

Hi everyone!

The first version of the separated builds has been released into the dev branch. We switched to YAML builds, so now we have the builds as code in the repo. You can find the build definitions in the /build/azure-devops folder. It's a good starting point for people to see and play with the builds. Unfortunately, Azure DevOps still doesn't support YAML-based releases :(

Anyway, currently all builds are triggered by every single push in our CI pipeline, which is far from the best option.

About what @maurei said (using git diff): as we use Azure DevOps for the builds, maybe the use of path filters could be a better (meaning easier) option. If path filters are not enough, we could start thinking about other options, including what @maurei said. In eShopOnContainers we have the (self-imposed) requirement to be in a single repo, and this comes with a price when creating CI/CD pipelines.

Also, one thing to note (this is for @CESARDELATORRE and @nishanil): switching from one monolithic build to N builds has great benefits (and it's more microservice-oriented), but as we are using the hosted agent we are paying a high price in the overall build time. Now each build is independent and runs on a fresh machine, so no Docker cache is reused between the builds, and every build has to download all the images, including the .NET Core SDK and runtime ones. I think that switching to a private build agent would allow us to reuse Docker images and improve the overall build time.
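
For reference, that switch is a one-line change in the YAML; a self-hosted pool keeps its Docker layer cache between runs, unlike the hosted agents, which start from a fresh VM (the pool name below is hypothetical):

# Hosted agent: clean VM on every run, no Docker cache reuse
# pool:
#   vmImage: 'ubuntu-16.04'

# Self-hosted pool: Docker layer cache survives between builds
pool:
  name: 'eshop-private-agents'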

Thoughts?

@nishanil
Contributor

nishanil commented Mar 20, 2019

Anyway, currently all builds are triggered by every single push in our CI pipeline, which is far from the best option.

@eiximenis Try triggers with path filters in YAML? https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#paths
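
For example, something along these lines per microservice build (branch and paths are illustrative, patterned on the repo layout):

trigger:
  branches:
    include:
      - dev
  paths:
    include:
      - src/Services/Ordering
      - src/BuildingBlocks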

@eiximenis
Contributor

Hi!
I've updated the YAML builds with triggers and path filters, so now only the impacted microservices are built on each push.
It's not perfect, but I think it's enough for our case.

@eiximenis
Contributor

Things that could be improved:

  • Currently a change to the Helm charts triggers the build, which builds the Docker image (not needed) and publishes the Helm chart as a build result (needed) to allow the release to deploy. A custom build just for the Helm charts could be added; this build should do nothing but publish the new chart for the release (a sketch follows this list).
  • Some changes to the infrastructure Helm charts do not trigger any build. Same as before: a new "infrastructure" build could be created to allow re-deployment of these new Helm charts.
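
A minimal sketch of such a chart-only build, assuming the charts live under k8s/helm as in the repo (trigger paths and task inputs are illustrative):

trigger:
  paths:
    include:
      - k8s/helm/ordering

steps:
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Helm chart for the release to pick up'
    inputs:
      PathtoPublish: 'k8s/helm/ordering'
      ArtifactName: 'helm'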

@maurei

maurei commented Mar 26, 2019

@eiximenis regarding your note about the absence of Docker layer caching when using fresh machines every time, I think this article provides a feasible approach to solving that issue.

@mvelosop
Collaborator

mvelosop commented Mar 27, 2019

@maurei, that article looks quite interesting, I'll explore it in detail.

I've been doing some experiments to speed up building (at ~22 min/build × build frequency = a lot of time).

I have only tested this with docker-compose, and have taken the build time down to ~14 min (~36% less).

The general approach is to pre-restore commonly used packages into the packages folder in the solution before building, so the packages are included in the build context and most of the package-restore time is saved.

That's kind of "priming the packages cache".

But then copying the context at the beginning of the build takes longer.

To achieve this, the Dockerfile has to be something like this:

# Runtime base image for the final stage
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

# Build stage: copy the whole context, including the pre-restored packages folder
FROM microsoft/dotnet:2.2-sdk AS publish
WORKDIR /src
COPY . .
WORKDIR /src/src/Services/Ordering/Ordering.API
# Restore against the primed local packages folder, then publish without re-restoring
RUN dotnet restore --packages /src/packages
RUN dotnet publish --no-restore -c Release -o /app

# Final image: only the published output on top of the runtime base
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Ordering.API.dll"]

BTW, I dropped the dotnet build... line since, AFAIK, these two lines are redundant:

RUN dotnet build --no-restore -c Release -o /app
RUN dotnet publish --no-restore -c Release -o /app

Could this help somehow?

@eiximenis
Contributor

@mvelosop

The general approach is to pre-restore commonly used packages in the packages folder in the solution before building,

I don't like this approach because the Docker build process should be independent of any state of the Docker host. In fact (IMHO) the packages folder is, like node_modules, a candidate for the .dockerignore file.

If the build process happens inside a Docker container, the entire build process should happen in it: restoring dependencies is part of that build process, and therefore should happen inside the build container. Unless you consider that package restore is not part of the build process (and in that case you should have packages (and node_modules) pushed to the repo).

@mvelosop
Collaborator

Yeah @eiximenis, I agree on the host-state point.

What I like, or rather don't dislike too much, is that it doesn't really matter if the packages folder is empty or outdated; in the worst case all packages will have to be restored. I think it's kind of innocuous: it might help, and it "shouldn't" cause any harm.

There's another approach I was exploring with @WolfspiritM some time ago in #650: restoring packages for all projects before building, along with some tweaks in .dockerignore:

# Build stage: copy everything, restore once for all projects, then build a single one
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
WORKDIR /src/src/Services/Some.Project
RUN dotnet build --no-restore -c Release -o /app

This is pretty fast for building the whole solution, but takes twice as long to build a single container.

That's why I arrived at the "primed packages cache" solution, which is something in between.

@mvelosop
Collaborator

mvelosop commented May 8, 2019

Hi @SychevIgor, have you taken a look at https://github.com/dotnet-architecture/eShopOnContainers/tree/dev/build/azure-devops regarding this issue?

Do you think it's good enough, or are you missing something?

@SychevIgor
Contributor Author

@mvelosop it would be nice to add a test run, test code coverage results, and publishing of the results. Here is our example:

- task: DockerCompose@0
  displayName: 'Run Tests'
  inputs:
    dockerComposeFile: '$(DockerComposeTestPath)'
    additionalDockerComposeFiles: '$(DockerComposeTestOverridePath)'
    action: 'Run services'
    detached: false
    abortOnContainerExit: false
    requireAdditionalDockerComposeFiles: true
  condition: or(and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/develop')), eq(variables['Build.Reason'], 'PullRequest'))

- task: PublishTestResults@2
  displayName: 'Publish Test Results Back'
  inputs:
    testResultsFormat: VSTest
    testResultsFiles: '*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)'
  condition: or(and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/develop')), eq(variables['Build.Reason'], 'PullRequest'))

- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(Build.ArtifactStagingDirectory)/**/coverage.xml'
  condition: or(and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/develop')), eq(variables['Build.Reason'], 'PullRequest'))

@SychevIgor
Contributor Author

SychevIgor commented May 8, 2019

@mvelosop also, we separated the Docker image build and the Helm chart publish into two different build pipelines, because we don't want to increment image versions if the code didn't change, or chart versions if the charts didn't change. It also speeds up the build process (a sketch of the split triggers is at the end of this comment).

In the release pipeline, we use both the new image and the Helm chart as triggers.

We actively use variables (for example, for docker-compose file paths), because it's easier to change a variable than the build definition.

In our team, we don't run tests unless it's a pull request or the develop branch, to speed up the process; test runs on the build agent are slow.
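
A rough sketch of that split with path filters (two separate pipeline YAML files; all paths are illustrative of our layout, not eShopOnContainers'):

# image-build pipeline: runs only when service code changes
trigger:
  paths:
    include:
      - src/Services/Ordering
    exclude:
      - src/Services/Ordering/helm

# chart-publish pipeline (a separate YAML file): runs only when the chart changes
trigger:
  paths:
    include:
      - src/Services/Ordering/helm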

@SychevIgor
Contributor Author

@mvelosop we also removed ".api" from the image names, because later, in the release, we can't use the same naming convention (Helm can't create a deployment when the name contains "." or ":").

@SychevIgor
Contributor Author

@mvelosop I forgot about Helm charts... Because we are writing code not only for ourselves (outsourcing), it's a bad idea to store Helm charts only as build output in Azure DevOps.
As of today, we push the Helm charts to ACR. Now, if we stop developing the app and fully transfer the code to the customer (and, for example, drop our Azure DevOps account), the operations people will still have a chance to deploy the app using the Helm charts from the ACR repo.
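
For reference, the push can be a simple script step; the registry and chart names are hypothetical, and az acr helm was the ACR chart-repository support available at the time:

steps:
  - script: |
      # Package the chart and push it to ACR so it outlives
      # the Azure DevOps account (names are hypothetical).
      helm package k8s/helm/ordering -d $(Build.ArtifactStagingDirectory)
      az acr helm push --name myregistry $(Build.ArtifactStagingDirectory)/ordering-*.tgz
    displayName: 'Push Helm chart to ACR'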

@mvelosop
Collaborator

Hi @SychevIgor, thanks for such detailed real-world tips! Highly valuable 😊

Pinging @nishanil and @eiximenis on this to check on the best way to incorporate this!

Thanks!

@SychevIgor
Contributor Author

@mvelosop at Build 2019 MS announced unified pipelines, pipelines based on YAML: https://devblogs.microsoft.com/devops/whats-new-with-azure-pipelines/ You will probably be able to introduce combined CI+CD later.

@maurei

maurei commented Jan 13, 2020

I have only tested this with docker-compose, and have taken build time down to ~14 min (~36% less).

@mvelosop I was wondering how long it currently takes to run the entire CI pipeline?
