Docker Blazor App on Linux ARM

Summary

Blazor provides the ability to create client web apps in C#. It has both server and client runtime options, which allow the developer to choose the runtime location that provides the best end-user experience. The server-side model keeps component state and rendering on the server and uses a persistent WebSocket connection for event processing. This model lends itself well to creating Single-Page Applications (SPA) that are heavy in backend compute/storage processing but exchange little data per user view/action. For example, running analytics on the backend and visualizing results on the client. Alternatively, the client runtime model utilizes WebAssembly to offload the compute to the client as a Progressive Web Application (PWA) with offline support. This model is great for high-performance visuals where the client-server latency could adversely impact the user experience, such as in gaming.

This post will create a Hello-World example of a Blazor Server application. Specifically, it covers how to package the application in a Docker image and deploy it to a Raspberry Pi running Linux on ARM. The development environment will be Visual Studio running on Windows x64, which introduces the following three (3) areas to explore:

Area                Differences          Description
Operating System    Windows vs. Linux    Design-time on Windows, targeted for runtime on Linux
Memory Address      32 vs. 64-bit        Design-time on a 64-bit operating system, targeted for runtime on a 32-bit operating system
Architecture        x64 vs. ARM          Design-time on x64 architecture, targeted for runtime on ARM architecture

A few years ago this would likely have been deemed too complex to sustain and would have forced a move to an alternative implementation. However, utilizing two (2) on-premises systems (x64 & ARM) along with two (2) cloud services, this scenario can be accomplished fairly easily with the following end-state architecture.

Integration between Windows & Linux Containers on x64 & ARM architectures
Docker Blazor App designed on Windows x64 and target deployment on Raspberry Pi ARM 32-bit.

Prerequisites

Description

The first question to answer is: why? Building an application to run across multiple operating systems, 32-and-64-bit memory addresses, and different processor architectures sounds more like a research project than any real-life use-case. One answer is the developer's ability to deliver solid code (yes, there are more use-cases, but this post will focus on the developer's ability to run and test server-based applications).

When building applications on a laptop targeted for a multi-server runtime environment, it can become challenging to verify that the application behaves as expected when considering variables such as network latency, process parallelization, etc. Having a dev-cluster helps mitigate this, and having one per developer provides the greatest change isolation. The public cloud has made it convenient and flexible to provision these types of environments quickly. Alternatively, with the introduction of the Raspberry Pi and Single Board Computer (SBC) devices, it is now possible to create a private cloud with a fairly low capital investment. Thus, the developer loop we will create is as follows:

Developer Laptop (Client)         ==>    Raspberry Pi (Server)
– Local design-and-runtime               – Remote runtime
– Windows 10 (64-bit)                    – Linux Raspbian (32-bit)
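On the Linux side of this loop, the same properties the Blazor page will later report (architecture and word size) can be confirmed directly from the shell:

```shell
# Processor architecture: armv7l on a 32-bit Raspberry Pi, x86_64 on an x64 machine
uname -m
# Word size of the userland: prints 32 or 64
getconf LONG_BIT
```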

Application portability becomes a key requirement for this setup, providing a like-for-like application deployment between the client and server environments. Containerization technologies, such as Docker, are the enabling solution for this requirement. Building an immutable image that can be instantiated easily on different machines, while ensuring a consistent dependency configuration, is a deployment productivity boost. Although Docker helps bridge the deployment packaging for portability, this scenario has two (2) elements that make it a little more challenging.

Challenge #1 – Kernel dependency
Containerization is an application virtualization technology, but it still depends on an operating system kernel in order to operate. This kernel is shared across all containers to minimize container size and memory requirements, which means the kernel of the operating system hosting the container is the kernel the application must be built against. An application based on the Windows kernel will require a Windows host to operate; vice versa, an application based on the Linux kernel will require a Linux host to operate.

Thankfully, the Moby Project has helped solve the challenge of running Linux containers on a Windows operating system. This means we can switch the Windows 10 Docker Desktop engine to run Linux containers and verify the container runs on Linux locally before deployment to the server. In the past, this would have created additional dependency problems, as the .NET Framework does not run on Linux. But .NET Core includes support for Linux, so changing the operating system will not impact the application code.

Challenge #2 – Processor architecture dependency
Now that the kernel dependency has been solved by utilizing a Linux kernel Docker image with .NET Core, we run into a mismatch challenge in processor architectures. If we build the Docker image on the Windows laptop based on a Linux amd64 image, it will run fine on the client laptop. Unfortunately, running this image on the ARM architecture will cause an image format exception, as the processor architectures are not compatible.

standard_init_linux.go:211: exec user process caused "exec format error"

To resolve this compatibility issue, we need to base the image on the ARM architecture. However, without a hardware emulation package like QEMU, there is no way to build an ARM-architecture image of the application on the x64 architecture.
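As a quick check, the OS and architecture an image was built for are recorded in its metadata and can be read with docker image inspect (the image name below is taken from the steps later in this post):

```shell
# Prints the OS/architecture the image targets, e.g. linux/amd64
docker image inspect helloblazor:linux64 --format '{{.Os}}/{{.Architecture}}'
```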

As we happen to have an ARM based device available, we can use it to build the application natively on the Raspberry Pi. This creates the following flows for building and publishing the application across the two environments.

Windows 10 Laptop     builds    Linux x64 image
Windows 10 Laptop     pushes    Linux x64 image to Trusted Registry (Docker Hub)
Windows 10 Laptop     pushes    Source code to Git repo (Azure DevOps)
Raspberry Pi Server   pulls     Source code from Git repo (Azure DevOps)
Raspberry Pi Server   builds    Linux ARM32 image
Raspberry Pi Server   pushes    Linux ARM32 image to Trusted Registry (Docker Hub)

The result will be a single code base with two (2) deployment packages targeting Linux x64 and ARM 32-bit.

Trusted Registry Linux versions for x64 and ARM architecture

The following steps guide through this process.

Steps

1. Launch Visual Studio (Community or higher), create a new project, and select ‘Blazor App’ from the template list.

Blazor App template in Visual Studio

2. Select ‘Enable Docker Support’ in the options and ‘Linux’ as the target environment.

Blazor Server App with Docker Support on Linux

3. Once the project is open, navigate to the Pages folder and open ‘Index.razor’. Replace the default code with the following, which displays the runtime operating system, memory addressing, and architecture of the device.

@page "/"
@using System.Runtime.InteropServices;

<h1>Hello, world!</h1>

Welcome to your new app.
<p />
Running on: <b>@Environment.OSVersion (@GetOSBit()-bit on @GetArchitecture() architecture)</b>

<SurveyPrompt Title="How is Blazor working for you?" />

@code
{
    private int GetOSBit()
    {
        return Environment.Is64BitOperatingSystem ? 64 : 32;
    }

    private string GetArchitecture()
    {
        switch (RuntimeInformation.OSArchitecture)
        {
            case Architecture.Arm:
            case Architecture.Arm64:
                return "ARM";
            case Architecture.X64:
                return "x64";
            case Architecture.X86:
                return "x86";
            default:
                return "unknown";
        }
    }
}

4. Open the ‘Dockerfile’ and verify that the multi-stage build is based on the Linux amd64 images.
Note: version numbers may differ but can be verified on Docker Hub for the ASP.NET Core 2.1/3.1 Runtime and .NET Core SDK.

...
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
...
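For reference, the remainder of the generated multi-stage Dockerfile typically looks like the sketch below. The Hello-Blazor project and assembly names are assumptions based on the build command used later; verify against the file Visual Studio actually generated:

```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Hello-Blazor/Hello-Blazor.csproj", "Hello-Blazor/"]
RUN dotnet restore "Hello-Blazor/Hello-Blazor.csproj"
COPY . .
WORKDIR "/src/Hello-Blazor"
RUN dotnet build "Hello-Blazor.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Hello-Blazor.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Hello-Blazor.dll"]
```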

5. By default, Docker Desktop will be configured for Windows containers. As we targeted Linux in the setup, click the Docker icon in the system tray and select ‘Switch to Linux containers…’.

Docker Desktop switch to Linux Containers

6. Build and run the application with the ‘Docker’ launch profile and we should see a screen similar to the one below, showing that we are running on a Unix-compatible operating system (Linux) with 64-bit memory addressing on an x64 architecture.

Blazor App running on Linux x64

7. Optional – to see the multi-platform capabilities of ASP.NET Core, change the launch profile from Docker to ‘IIS Express’ in Visual Studio and run the application again. Notice that the operating system now reflects running on Windows 64-bit without any code changes to the application.

Blazor App running on Windows x64 in IIS Express

8. We should now have a dev-tagged image in our local Docker image repository that can be re-tagged for the Docker Hub repo and pushed to Docker Hub.

Docker Tag and Push image to Docker Hub repo
Docker Hub Repo with Linux64 tag
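The re-tag and push shown in the screenshots boil down to commands along these lines. The helloblazor:dev tag is assumed from Visual Studio's default dev tagging, and torbenp/helloblazor matches the repo used later in this post; substitute your own Docker Hub account:

```shell
# Re-tag the local dev image for the Docker Hub repo (dev tag assumed)
docker tag helloblazor:dev torbenp/helloblazor:linux64
# Authenticate and upload
docker login
docker push torbenp/helloblazor:linux64
```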

9. At this point, we can commit and push the source code to Azure DevOps as the source code management (SCM) system. For brevity, this step has been omitted and for the scope of this post could be substituted for any other SCM system, such as GitHub.

10. Let’s turn to our server, the Raspberry Pi, and run the Docker image just built, helloblazor:linux64, to see what happens.
Note: the image is pulled successfully, but fails to run with the error “exec format error” due to the architecture mismatch.

Docker run with x64 image causes exec format error on ARM

11. To build the ARM architecture image, we will need to install Git, clone the source, and update the Docker multi-stage images to be compatible with ARM 32-bit. The following commands can be executed on the Raspberry Pi.

$ sudo apt install git
$ git clone https://<scm repo path>
$ vim <path>/Dockerfile

12. Edit the ASP.NET Core and .NET Core SDK images to use arm32v7. In this example, we comment out (‘#’) the previous Linux amd64 images and substitute the arm32v7 images.

...
#FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1.5-buster-slim-arm32v7 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

#FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
FROM mcr.microsoft.com/dotnet/core/sdk:3.1.301-buster-arm32v7 AS build
WORKDIR /src
...

13. Build the docker image and run it.

$ docker build -f ./Hello-Blazor/Dockerfile --force-rm -t helloblazor:arm32 .
$ docker run -d -p 80:80 helloblazor:arm32
Blazor App running on 32-bit Raspberry Pi ARM architecture

14. We have now completed and verified the recompilation for switching from an x64 64-bit architecture to an ARM 32-bit architecture. The final step remaining is to re-tag the new architecture image and push it to the Docker Hub repo.

$ docker tag helloblazor:arm32 torbenp/helloblazor:arm32
$ docker push torbenp/helloblazor:arm32
Docker Hub repo with images for x64 Linux and 32-bit ARM

Next Steps

As evidenced by the length of this post, this is a very long (and manual) process to repeat each time we need to make a build. This is where Azure DevOps comes in to help automate a Continuous Integration (CI) pipeline that builds both 32-and-64-bit versions for x64 and ARM architectures. This will be covered in an upcoming post on targeting multi-processor architectures with Azure DevOps.

For brevity, the post only covered deployment to a single Raspberry Pi instead of a full cluster. The actual runtime deployment of the ARM container image can be scaled into a full mini-cloud deployment by combining multiple Raspberry Pis into a container cluster using Kubernetes (K8s) or Swarm as the orchestration engine.

It is also worth mentioning that having a separate device for testing is not always required (although it will likely improve the quality of the software delivered). The alternative to having a physical Raspberry Pi device is to utilize hardware emulation with QEMU for multi-processor support.
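A sketch of that emulation route on the x64 laptop, assuming a Docker version with buildx support (the tonistiigi/binfmt helper image is one common way to register the QEMU handlers):

```shell
# Register QEMU binfmt handlers so ARM binaries can execute during the build
docker run --privileged --rm tonistiigi/binfmt --install arm
# Cross-build the ARM 32-bit image directly on the x64 machine
docker buildx build --platform linux/arm/v7 -f ./Hello-Blazor/Dockerfile -t helloblazor:arm32 .
```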

Manage Kubernetes with Visual Studio Code

Summary

This is a walkthrough of setting up Visual Studio Code (VSCode) to manage an external Kubernetes cluster. In this example, VSCode will be connected to a Kubernetes cluster running on Raspberry Pis. The VSCode extension provides visual object navigation of the cluster and terminal execution of kubectl to modify cluster objects.

Prerequisites

Description

The following steps will reference a Linux Kubernetes cluster as the source system and a Windows environment as the target system. These steps assume that the Kubernetes configuration file (kubeconfig) is present in the user profile on the master node of the Linux cluster.

pirate – is the Linux user profile (based on the Hypriot distribution)
192.168.0.20 / k8spiblue – is the Linux master node IP / hostname
192.168.0.21 / k8spiblack – is a Linux worker node IP / hostname
192.168.0.22 / k8spiyellow – is a Linux worker node IP / hostname
torben – is the Windows user profile

Steps

From the Windows target system, copy the Kubernetes configuration file (kubeconfig) from the Linux user profile to the Windows user profile.

C:\Users\Torben>scp pirate@192.168.0.20:/home/pirate/.kube/config ./.kube/config

Enter the password for the user (pirate) when prompted, and perform a directory listing for verification.

Copy kubeconfig
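The verification listing can be done from the same command prompt (path assumed from the default kubeconfig location used above):

```shell
C:\Users\Torben>dir %USERPROFILE%\.kube
```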

Launch VSCode, click Extensions on the Activity bar and search the Marketplace for ‘Kubernetes’.
Click ‘Install’ and install any dependencies if required, such as Kubectl and Helm.

VSCode Kubernetes

When installation is complete, select ‘Kubernetes’ from the Activity bar to open.

K8s Extension Installed

In the Kubernetes side bar, hover over Clusters and click the ellipsis ‘…’ to select ‘Set kubeconfig’.
Select ‘+ Add new kubeconfig’ and navigate to the config file copied previously (%USERPROFILE%\.kube\config).
The Clusters node should now be connected to the remote cluster and the nodes can be expanded to show the cluster hosts.

K8s Cluster Configured
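The connection can also be verified from the integrated terminal with standard kubectl commands:

```shell
# List contexts from the copied kubeconfig; the Pi cluster should be current
kubectl config get-contexts
# Confirm the API server responds
kubectl cluster-info
```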

Expand a node to see the pods.

K8s Cluster Nodes

In the Terminal window, execute kubectl get nodes to get a listing of the cluster nodes.

K8s kubectl nodes

Select the Explorer view on the Activity bar, and we can now edit object files and apply them with kubectl directly from Windows.

K8s File Explorer

Blinkt! Rainbow on Pi Kubernetes

Summary

Blinkt! is an LED light strip attachable to the GPIO header on Raspberry Pis. Sealsystems provides a rainbow container that creates a rainbow on the LEDs and can be orchestrated on a Docker cluster using the Swarm orchestrator. This is a description of how to run the rainbow on a Kubernetes cluster.

Rainbow Pi Cluster

Prerequisites

Description

The rainbow container uses the node package (npm) node-blinkt to interact with the LEDs through the host GPIO. To access the GPIO, the container needs access to the host's /sys directory, which acts as a virtual filesystem for the system devices (/sys/devices/gpiochip2/gpio/). Mount the host's /sys directory into the container, and the Blinkt! package will be able to access the GPIO.

containers:
  - name: rainbow
    image: sealsystems/rainbow:latest
    volumeMounts:
      - name: sys
        mountPath: /sys
volumes:
  - name: sys
    hostPath:
      path: /sys

There is only one GPIO and Blinkt! light strip per host, so running the rainbow container as a DaemonSet ensures that only one (1) container runs on each host in the cluster.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rainbow

By default, the master node will be excluded from the DaemonSet's nodes, so in a 5-node cluster only the four (4) worker nodes will light up.

Rainbow Pi Cluster
Default DaemonSet on 5-node cluster

Use tolerations to enable the DaemonSet to run on the master node.

spec:
  # this toleration allows the daemonset to run on the master node
  # key - the taint key identifying the master node
  # effect - the taint effect being tolerated; tolerating NoSchedule
  #          allows the pod to be scheduled on the master
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
Rainbow Pi Cluster
DaemonSet on 5-node cluster with NoSchedule tolerations

Steps

Create a YAML file, rainbow.yml:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rainbow
spec:
  selector:
    matchLabels:
      name: rainbow
  template:
    metadata:
      labels:
        name: rainbow
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: rainbow
          image: sealsystems/rainbow:latest
          volumeMounts:
            - name: sys
              mountPath: /sys
      volumes:
        - name: sys
          hostPath:
            path: /sys

Apply to cluster:
kubectl apply -f rainbow.yml
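After applying, the DaemonSet can be verified with standard kubectl commands; with the toleration in place there should be one rainbow pod per node, including the master:

```shell
# One desired/ready pod per node
kubectl get daemonset rainbow
# Show which node each pod landed on
kubectl get pods -l name=rainbow -o wide
```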