Targeting multi-processor architectures with Azure DevOps

Summary

This post covers building container images with Azure DevOps that target the x64 and ARM processor architectures and pushing them to the Docker Hub registry. It is a continuation of the post, Docker Blazor App on Linux ARM, supporting development on x64 and deployment to both x64 and ARM architectures. As described in the previous post, building Docker images that target different processor architectures and platforms such as x64, ARM, and macOS provides runtime flexibility. However, without automation the build process can be time-consuming, which is what this post addresses. Let’s start by reviewing the process for manually targeting x64 and ARM with the following design:

Integration between Windows & Linux Containers on x64 & ARM architectures

Although the source code remains the same, the setup required us to manually build the application for each architecture and push the resulting images to Docker Hub: two builds and two pushes.

This post will evolve the above manual process to an automated process using the following design.

Azure DevOps build automation for multi-processor architectures

Prerequisites

Description

To distribute an application, we take a version of the source code and build it on the system processor architecture it is targeted to run on.

Source Repo=>x64 Build Machine (to run on x64 machine)

If we want our application to run on other architectures, we need to repeat the build on a build machine that matches each target runtime architecture.

Source Repo=>x64 Build Machine (to run on x64 machine)
=>macOS Build Machine (to run on macOS machine)
=>ARM Build Machine (to run on ARM machine)

As there are no application code changes between the builds, the activities to build across the multiple architectures are similar and can become tedious to repeat manually.

Time has shown that we sustain application stability better when we build and test code changes incrementally as they occur, rather than waiting for multiple changes to accumulate. This means we need to build on each source repo update, a practice referred to as Continuous Integration (CI). This best practice increases build volume so significantly that automation becomes a necessity.

To regain the time spent orchestrating each architecture build, we can automate our manual activities by introducing a system orchestrator to act on our behalf.

Source Repo<=Orchestrator=>x64 Build Machine
<=Orchestrator=>macOS Build Machine
<=Orchestrator=>ARM Build Machine

We have now automated our manual process, but what happens if a 2nd release has to be made before the first one has finished? We cannot build two (2) releases, r1 & r2, on the same build machine at the same time. This means our orchestrator must hold r2 in a queue until r1 has finished and the build machine becomes available to take the 2nd release.

Source Repo<=OrchestratorQueue (r2)=>x64 Build Machine (r1)
<=OrchestratorQueue (r2)=>macOS Build Machine (r1)
<=OrchestratorQueue (r2)=>ARM Build Machine (r1)

Queuing is a great option if we have time, but if we want the builds to happen faster we can build r1 and r2 at the same time by introducing a second build machine for each architecture. This in effect creates a pool of two (2) build machines per architecture.

Source Repo<=OrchestratorQueue (0)=>x64 Build Machine (r1)
=>x64 Build Machine (r2)
<=OrchestratorQueue (0)=>macOS Build Machine (r1)
=>macOS Build Machine (r2)
<=OrchestratorQueue (0)=>ARM Build Machine (r1)
=>ARM Build Machine (r2)

If we get three (3) releases, r1, r2, r3, to build at the same time we can now choose to stay with the current configuration and let r3 wait in the queue while our pools each build r1 or r2. Or, we can add a 3rd build machine to each pool so we can get 3 builds to run concurrently.

This is conceptually the setup we will use to build for multiple architectures automatically. We will cover how these concepts are implemented with Azure DevOps, but they can also be implemented in other toolsets like Atlassian Bamboo or Jenkins.

Let’s map the above concepts into Azure DevOps terms.

Concept Term=>Azure DevOps Term
Source Repo=>Repos
Orchestrator=>Pipeline (design & runtime)
Queue=>Jobs
Build Machine=>Build Agent
Pool=>Agent Pool
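These terms map directly onto the pipeline YAML used later in Step 24. As a minimal skeleton (job and pool names taken from the full pipeline in this post):

```yaml
stages:
- stage: Build                     # the orchestrated work
  jobs:                            # queued units of work handed to agents
  - job: BuildLinux64
    pool:
      vmImage: 'ubuntu-latest'     # Microsoft-hosted agent pool
  - job: BuildArm32
    pool:
      name: 'Arm32'                # self-hosted agent pool from Step 3
```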

Azure DevOps includes two (2) additional terms that were not defined in our conceptual model but that will be used in the coming steps.

Self Hosted Agent=>Refers to a build machine that we are hosting ourselves
Microsoft Hosted Agent=>Refers to a build machine that Microsoft is hosting

There is no difference in the build machine concept, but the self-hosted version can be thought of as Bring-Your-Own-Build-Machine (BYOBM), whereas the Microsoft-hosted version is provided by Microsoft. Each comes with flexibility and convenience trade-offs that will be highlighted.

The final orchestrator responsibility is to tell the build agents to push the Docker images to the Docker Hub registry. This is performed by the Azure Pipeline when it sends the job activities to the build agents.

The following steps will go through this process targeting x64 and ARM architectures.

Steps

1. Navigate to the root organization of the Azure DevOps instance. In this case https://torben.visualstudio.com and https://dev.azure.com/torben are synonymous.

Azure DevOps organization settings

2. Select ‘Agent Pools’ in the left navigation pane and the default agent pools should be displayed.

Azure DevOps organization agent pools

3. Click ‘Add Pool’ and select ‘Self-hosted’ as the pool type. Enter ‘Arm32’ to indicate the architecture the build agent(s) will support and remember the name for later use when configuring the build pipeline. The pool description is optional. Click ‘Create’ to create the pool that will hold the build agent instances.

Azure DevOps add agent pool for Arm32

4. Click the ‘Arm32’ agent pool just created to open it.

Azure DevOps Arm32 agent pool created

5. Select the ‘Agents’ tab and click ‘New Agent’ to configure the first build agent.

Azure DevOps Arm32 agents

6. In creating the agent we download the agent application to install on the build machine, in our case a Raspberry Pi with ARM 32-bit. Select the ‘Linux’ tab and click ‘ARM’ in the left navigation. Instead of downloading the file here, we will get the URL to pull it later from the device. Click the ‘Copy’ button to the right of the ‘Download’ button to copy the URL to the clipboard and save it for later reference. Take a screenshot or make note of the installation instructions for creating and configuring the agent on the build machine.

Azure DevOps download Linux ARM32 agent

7. Before installing the agent, we need to also get credentials, an access token, that will allow the agent to contact Azure DevOps. This is required because instead of Azure DevOps connecting to each agent, each agent will instead connect to Azure DevOps and register itself to an Agent Pool.
Click the ‘User Settings’ next to the user profile and select ‘Personal access tokens’

Azure DevOps User Settings

8. Click ‘New Token’ on the Personal Access Tokens screen.

Azure DevOps personal access tokens

9. Enter a name that describes what this access token will be used for. In this example we use ‘<raspberry pi name> <agent pool name> build agent’ for a naming convention that will, hopefully, help us remember that this token is used on a build agent and we can identify the machine and pool it was assigned.
Select the organization that this token will have access to, in this case ‘torben’.
Security best practices call for rotating the access token periodically, and Azure DevOps provides ‘30/60/90/custom’ day interval options. In this example, a custom date of ‘7/15/2021’ is selected for when the token will expire.

Azure DevOps create a new personal access token name configuration

10. Click ‘Show all scopes’ and check ‘Read & Manage’ under the Agent Pools scope. Click ‘Create’ and copy the access token for use in configuring the build agent.
Note: keep the access token in a safe/secret place.

Azure DevOps create a new personal access token.

11. We should now see our newly created Personal Access Token (PAT) and are ready to put it to use with our build agent.

Azure DevOps personal access token created
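As an optional sanity check before configuring the agent (my own addition, not part of the original flow), the PAT can be exercised against the Agent Pools REST API. The helper below only composes the request URL; the distributedtask/pools route and api-version are assumptions based on the Azure DevOps REST API.

```shell
# Compose the Agent Pools REST endpoint for a given organization.
pools_url() {
  printf 'https://dev.azure.com/%s/_apis/distributedtask/pools?api-version=6.0\n' "$1"
}

# With the PAT exported as $PAT, the actual check would be (needs network access):
#   curl -s -u ":$PAT" "$(pools_url torben)"
pools_url torben
```

A successful response listing the pools confirms the token carries the Agent Pools scope granted in Step 10.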

12. Log into Raspberry Pi and execute the following commands.

$ mkdir ado
$ cd ado
$ wget https://vstsagentpackage.azureedge.net/agent/2.171.1/vsts-agent-linux-arm-2.171.1.tar.gz
$ tar -zxvf vsts-agent-linux-arm-2.171.1.tar.gz
$ ls -l
Azure DevOps build agent uncompressed
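If the download is scripted across several machines, the matching agent package can be picked from `uname -m`. A small sketch; the package-name suffixes are assumptions inferred from the linux-arm file downloaded above:

```shell
# Map the architecture reported by `uname -m` to an agent package suffix.
agent_arch() {
  case "$1" in
    x86_64)        echo "linux-x64" ;;
    armv6l|armv7l) echo "linux-arm" ;;   # Raspberry Pi 32-bit
    aarch64)       echo "linux-arm64" ;;
    *)             echo "unsupported" ;;
  esac
}

agent_arch "$(uname -m)"
```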

13. Execute the config command to configure the agent.

$ ./config.sh

14. At the Enter server URL prompt, type ‘https://dev.azure.com/<organization name>’ and hit enter.
At the Enter authentication type prompt, hit enter to default to Personal Access Token (PAT).
At the Enter personal access token prompt, paste the token copied in Step 10 and hit enter.

Azure DevOps agent installation

15. At the Enter agent pool prompt, enter ‘Arm32’ (our Step 3 agent pool name) and hit enter.
At the Enter agent name prompt, enter the machine name (k8spigreen) and hit enter.

Azure DevOps agent configuration for agent pool

16. At the Enter work folder prompt, hit enter to take the default _work, or enter a folder name.

Azure DevOps build agent installed

17. Execute the run command to start the agent now that configuration is finished.

$ ./run.sh
Azure DevOps build agent running on Arm32

18. Go back to the Agent Pools under the Organization Settings in Azure DevOps and we should now see our new build agent under the ‘Arm32’ pool, indicating that it is ‘Online’ and ready to take build jobs.

Azure DevOps organization Arm32 pool agents available

So far so good; we have completed the cloud-to-on-premises configuration for Azure DevOps to talk to our Raspberry Pi ARM 32-bit build machine. One area to highlight is that we did not have to open a firewall port for Azure DevOps to reach the build agent on our local network. Instead, the build agent connects securely outbound to Azure DevOps and holds that connection, waiting for build jobs queued remotely that it can pull down and build. This design lowers the hosting complexity of the build agent while keeping the connection secure.

For the Linux x64 architecture build we could follow the same process and create a self-hosted build agent for that architecture. However, the alternative we will use in this example is the Microsoft-hosted build agent provided by Azure DevOps. Easy, the second processor architecture build machine is done 🙂

Now that we have the two (2) build machines ready, we have one more prerequisite step before we can configure and run our continuous integration (CI) pipeline. After the Docker images are built, we want them published to a Docker registry. In this post we are using Docker Hub as the registry, so we need to configure a connection for Azure DevOps that tells our build agents how to connect to the Docker Hub repository and push the built images.

The Azure DevOps to Docker Hub connection is created via a Service Connection in Azure DevOps.

19. In the Azure DevOps project, go to Project Settings and select Service Connections. Click ‘New Service Connection’ and select Docker Registry.

Azure DevOps new service connection to Docker Registry

20. Set the registry type to Docker Hub and enter the Docker Hub ID. Password & Email are optional.

Azure DevOps configure service connection to Docker Hub

21. Enter a Service connection name that will be used to reference this connection in the pipeline. In this example we name it ‘DockerHub’. Click “Verify and save”.

Azure DevOps configure service connection name to Docker Hub

22. The DockerHub service connection should now be defined and available for pipelines within the project.

Azure DevOps service connection to DockerHub created

Done! We are now ready to put all the steps together in the final build pipeline.

For brevity, the steps for creating the pipeline have been skipped; we jump straight to the configuration of the multi-processor build.

23. In Azure DevOps, open the Project, select Pipelines and open the pipeline YAML file that should be configured for multi-processor builds.

Azure DevOps Docker pipeline task

24. Configure the Build stage to contain two (2) parallel jobs, building both a Linux64 (Microsoft-hosted) and an ARM32 (self-hosted) Docker image and publishing them to the Docker Hub registry.

# Docker
# Build a Docker image 
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master

resources:
- repo: self

variables:
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build image for Linux64 and ARM32
  jobs:  
  - job: BuildLinux64
    displayName: Build Linux64 on Microsoft Hosted
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build image for Linux64 on Microsoft Hosted
      inputs:
        containerRegistry: 'DockerHub'
        repository: 'torbenp/helloblazor'
        command: 'buildAndPush'
        dockerfile: '$(Build.SourcesDirectory)/Hello-Blazor/Hello-Blazor/Dockerfile'
        buildContext: '$(Build.SourcesDirectory)/Hello-Blazor/'
        tags: |
          linux64.$(tag)
          linux64

  - job: BuildArm32
    displayName: Build ARM32 on Self-Host
    pool:
      name: Arm32
    steps:
    - task: Docker@2
      displayName: Build image for ARM32 on Self-Host
      inputs:
        containerRegistry: 'DockerHub'
        repository: 'torbenp/helloblazor'
        command: 'buildAndPush'
        dockerfile: '$(Build.SourcesDirectory)/Hello-Blazor/Hello-Blazor/DockerfileArm32'
        buildContext: '$(Build.SourcesDirectory)/Hello-Blazor/'
        tags: |
          arm32.$(tag)
          arm32

There are a few configuration items to call out based on our build agent and service connection configurations:

The job names must be unique to differentiate them but can otherwise be named anything; in this case we name them by the build processor type:

  - job: BuildLinux64
  - job: BuildArm32

The pools refer to the “Agent Pools” used to build the job.
‘vmImage’ refers to a Microsoft-hosted pool, where we will build on the latest Ubuntu image provided by Azure DevOps.
‘name’ refers to a self-hosted pool, and we provide the pool name created in Step 3.

pool:
  vmImage: 'ubuntu-latest'
pool:
  name: Arm32
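A macOS job (suggested in Next Steps) would declare its pool the same way; the ‘macOS-latest’ image name is an assumption about the Microsoft-hosted pool naming:

```yaml
pool:
  vmImage: 'macOS-latest'
```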

The containerRegistry defines the name of the service connection to the Docker Hub registry that we want the Docker image pushed to after the build succeeds. The target registry repository will be the same for both architectures.

containerRegistry: 'DockerHub'

To differentiate the architecture images from each other, we use two (2) tags per processor architecture. One tag defines the architecture and build number (linux64.buildNumber and arm32.buildNumber). The second tag defines only the architecture and constitutes the latest build for that processor architecture (linux64 and arm32).
Although there are two (2) tags per image, they both refer to the same image when pushed to the Docker repo. The build-numbered version, processor.buildNumber, is considered immutable and will not change once published; as the build number changes between builds, new image versions are pushed to the repo. The processor-only tag is mutable and is overwritten for each build so it always refers to the latest image that was built.

        tags: |
          linux64.$(tag)
          linux64
        tags: |
          arm32.$(tag)
          arm32
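The tag composition can be sketched as a tiny shell helper (‘image_tags’ is hypothetical, purely to illustrate the scheme, not part of the pipeline):

```shell
# Compose the immutable '<arch>.<buildId>' tag and the mutable '<arch>' tag.
image_tags() {
  printf '%s.%s %s\n' "$1" "$2" "$1"
}

image_tags linux64 233   # -> linux64.233 linux64
image_tags arm32 233     # -> arm32.233 arm32
```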

25. Click ‘Run’ to start the pipeline build manually.

Azure DevOps run pipeline

26. View the build summary for job status; each job starts in the ‘Queued’ state waiting for a build agent to accept it.

Azure DevOps builds queued for agents

27. Once a build agent becomes available it will dequeue the job request and start building, which is indicated by the ‘running’ state.

Azure DevOps builds running on agents

28. Assuming no configuration or compilation errors we should end up in a succeeded state for each job.

Azure DevOps pipeline build completed

29. If we look at the Raspberry Pi console we can see it dequeued the job from Azure DevOps and ran it successfully on the self-hosted build agent.

Azure DevOps build agent on Arm32 device succeeded

30. The final verification is going to the Docker Hub repo to see the two (2) tags that have been created per processor architecture. Notice the build number, 233, for the immutable tags is identical for each processor architecture, so we can identify the pairs as additional builds occur.
The push times are 1 minute apart per architecture, which is caused by the builds happening on different build agents. The Linux64 image being posted 1 minute behind the Arm32 is likely caused by the Microsoft-hosted infrastructure being shared across several projects and waiting for agent availability, whereas the self-hosted infrastructure is dedicated and idle.

Docker Hub verification that builds were pushed

Next Steps

Several steps can be taken from this point including:

  1. Enable Continuous Integration on the Pipeline – will ensure the build agents are triggered automatically upon a code commit to the source repo.
  2. Add Agents to the Arm32 pool – if more Raspberry Pi’s are required to keep build agents available based on increased build demand.
  3. Add macOS pool – to target a 3rd processor architecture. A third job can be added to the build stage pipeline similar to the Arm32 setup outlined in this post.
  4. Run Agent as a Service – instead of performing Step 17 with an interactive ‘run’ command, the agent comes with a script for installing it as a service so we can have it running continuously. To install it as a service, run the following commands instead of the Step 17 run command. (replace ‘vsts.agent.torben.Arm32.k8spigreen.service’ with your service name)
$ sudo ./svc.sh install
$ sudo systemctl start vsts.agent.torben.Arm32.k8spigreen.service

# Verify service is running (optional)
$ sudo systemctl status vsts.agent.torben.Arm32.k8spigreen.service
Terminal window of install build Agent as a service and running it
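The unit name follows the pattern vsts.agent.&lt;organization&gt;.&lt;pool&gt;.&lt;agent&gt;.service seen above, so it can be derived from the values entered during ./config.sh (‘service_name’ is a hypothetical helper for illustration):

```shell
# Derive the systemd unit name from organization, pool, and agent name.
service_name() {
  printf 'vsts.agent.%s.%s.%s.service\n' "$1" "$2" "$3"
}

service_name torben Arm32 k8spigreen   # -> vsts.agent.torben.Arm32.k8spigreen.service
```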

As seen in Step 3, build agents are available for many operating systems and processor architectures. The decision between self-hosted and Microsoft-hosted is primarily driven by build requirements and availability within the Microsoft-hosted landscape. The reduced setup and maintenance requirements make the Microsoft-hosted instance the ideal starting point, with the self-hosted option offering the greatest flexibility.

Azure DevOps Pipeline: COPY failed: stat /var/lib/docker/…: no such file or directory

Summary

Running a docker build task in an Azure DevOps pipeline causes a ‘no such file or directory’ error.

...
Step 7/26 : COPY ["Hello-Blazor/Hello-Blazor.csproj", "Hello-Blazor/"]
COPY failed: stat /var/lib/docker/tmp/docker-builder890404231/Hello-Blazor/Hello-Blazor.csproj: no such file or directory
##[error]COPY failed: stat /var/lib/docker/tmp/docker-builder890404231/Hello-Blazor/Hello-Blazor.csproj: no such file or directory
##[error]The process '/usr/bin/docker' failed with exit code 1
Finishing: Build an image

The error occurs when the pipeline task’s build context does not match the build context expected by the Dockerfile. By default the task assumes the Dockerfile directory, which is not always correct for Visual Studio generated solutions/projects. A buildContext attribute can be added to the pipeline task to explicitly set the build context directory required by the Dockerfile.

... 
buildContext: '$(Build.SourcesDirectory)/Hello-Blazor/'
...

Description

For the detailed cause of this error refer to http://torbenp.com/2020/07/copy-failed-no-such-file-or-directory/
The Azure Pipeline task needs to be explicitly configured to execute the docker build from the parent folder expected by the Dockerfile, as it will otherwise assume the Dockerfile folder as the build context. The following shows the default pipeline task without an explicit build context.

...
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        dockerfile: '$(Build.SourcesDirectory)/Hello-Blazor/Hello-Blazor/Dockerfile'
        tags: |
          $(tag)
...

This can create a COPY build failure with the following details:

COPY error from Azure DevOps Pipeline

To make the parent folder of the Dockerfile the build context, add the ‘buildContext’ attribute to the pipeline task configuration.

...
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        dockerfile: '$(Build.SourcesDirectory)/Hello-Blazor/Hello-Blazor/Dockerfile'
        buildContext: '$(Build.SourcesDirectory)/Hello-Blazor/'
        tags: |
          $(tag)
...

COPY failed: stat /var/lib/docker/tmp/docker-builder392878620/Hello-Blazor/Hello-Blazor.csproj: no such file or directory

Summary

Running docker build from within the Dockerfile folder causes a ‘no such file or directory’ error.

C:\Hello-Blazor> docker build --force-rm -t helloblazor:latest .
Sending build context to Docker daemon  2.653MB
...
COPY failed: stat /var/lib/docker/tmp/docker-builder392878620/Hello-Blazor/Hello-Blazor.csproj: no such file or directory

Move up one folder (the parent folder of the Dockerfile) to change the build context, and provide the path to the Dockerfile.

C:\> docker build -f .\Hello-Blazor\Dockerfile --force-rm -t helloblazor:latest .

Description

When building a Docker image from a Visual Studio generated Dockerfile, such as for a .NET Core project, the ‘no such file or directory’ error can occur if the correct build context is not used. In Visual Studio generated solutions, the solution file (.sln) is often in a parent folder of the project files and their individual Dockerfiles. This makes the Visual Studio solution folder the root build context for building all projects, and this build context is assumed in the generated Dockerfile for each project.

C:.
+---Hello-Blazor.sln    //Build Context
+---Hello-Blazor
|   +---Hello-Blazor.csproj
|   +---Dockerfile
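The resolution behavior can be demonstrated without Docker at all: a COPY source is resolved relative to the build context, not the Dockerfile location. A minimal sketch using a throwaway copy of the layout above:

```shell
# Recreate the solution layout in a temp folder.
tmp=$(mktemp -d)
mkdir -p "$tmp/Hello-Blazor/Hello-Blazor"
touch "$tmp/Hello-Blazor/Hello-Blazor/Hello-Blazor.csproj"

copy_src="Hello-Blazor/Hello-Blazor.csproj"   # COPY source from the Dockerfile

# Context = solution folder: the COPY source resolves.
[ -f "$tmp/Hello-Blazor/$copy_src" ] && solution_ctx=ok

# Context = Dockerfile folder: it does not, producing the stat error.
[ -f "$tmp/Hello-Blazor/Hello-Blazor/$copy_src" ] || dockerfile_ctx=missing

echo "$solution_ctx $dockerfile_ctx"   # -> ok missing
rm -rf "$tmp"
```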

The autogenerated Dockerfile for multi-stage builds shows the solution-folder build context in the COPY and RUN operations.

...
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Hello-Blazor/Hello-Blazor.csproj", "Hello-Blazor/"]
RUN dotnet restore "Hello-Blazor/Hello-Blazor.csproj"
...

To build the Docker image, use the solution folder as the build context (the current folder) and pass the Dockerfile path with the -f parameter, instead of building directly from the Dockerfile folder.

C:\> docker build -f .\Hello-Blazor\Dockerfile --force-rm -t helloblazor:latest .