About torbenp

Torben is a Dane lost in Texas with a passion for technology.

COPY failed: stat /var/lib/docker/tmp/docker-builder392878620/Hello-Blazor/Hello-Blazor.csproj: no such file or directory

Summary

Running docker build from within the Dockerfile folder causes a 'no such file or directory' error.

C:\Hello-Blazor>docker build --force-rm -t helloblazor:latest .
Sending build context to Docker daemon  2.653MB
...
COPY failed: stat /var/lib/docker/tmp/docker-builder392878620/Hello-Blazor/Hello-Blazor.csproj: no such file or directory

Move up one folder (the parent folder of the Dockerfile) to change the build context and provide the path to the Dockerfile.

C:\>docker build -f .\Hello-Blazor\Dockerfile --force-rm -t helloblazor:latest .

Description

When building a Docker image from a Visual Studio generated Dockerfile, such as for a .NET Core project, the error 'no such file or directory' can occur if the wrong build context is used. In Visual Studio generated solutions, the solution file (.sln) typically sits in a parent folder of the project files and their individual Dockerfiles. This makes the Visual Studio solution folder the root build context for building all projects, and that build context is assumed in the auto-generated Dockerfile for each project.

C:.
+---Hello-Blazor.sln    //Build Context
+---Hello-Blazor
|   +---Hello-Blazor.csproj
|   +---Dockerfile

The auto-generated multi-stage Dockerfile reflects the solution folder build context in its COPY and RUN instructions.

...
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Hello-Blazor/Hello-Blazor.csproj", "Hello-Blazor/"]
RUN dotnet restore "Hello-Blazor/Hello-Blazor.csproj"
...

Because the COPY paths are relative to the build context, building from the project folder makes Docker look for Hello-Blazor/Hello-Blazor.csproj inside the project folder, where it does not exist. To build the Docker image, use the solution folder as the build context (the current folder) and pass the Dockerfile path with the -f parameter instead of building directly from the Dockerfile folder.

C:\>docker build -f ./Hello-Blazor/Dockerfile --force-rm -t helloblazor:latest .
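
Once the build completes, the image can be checked and run locally. A minimal sketch, assuming the runtime stage of the generated Dockerfile exposes port 80 (the Visual Studio default for ASP.NET Core 3.1); the host port 8080 is arbitrary:

C:\>docker images helloblazor
C:\>docker run --rm -p 8080:80 helloblazor:latest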

Manage Kubernetes with Visual Studio Code

Summary

This is a walkthrough to set up Visual Studio Code (VSCode) to manage an external Kubernetes cluster. In this example, VSCode is connected to a Kubernetes cluster running on Raspberry Pis. The VSCode extension provides visual navigation of the cluster objects and terminal execution of kubectl to modify them.

Prerequisites

Description

The following steps reference a Linux Kubernetes cluster as the source system and a Windows environment as the target system. The steps assume that the Kubernetes configuration file (kubeconfig) has already been added to the user profile on the master node of the Linux cluster.

pirate – is the Linux user profile (based on the Hypriot distribution)
192.168.0.20 / k8spiblue – is the Linux master node IP / hostname
192.168.0.21 / k8spiblack – is a Linux worker node IP / hostname
192.168.0.22 / k8spiyellow – is a Linux worker node IP / hostname
torben – is the Windows user profile

Steps

From the Windows target system, copy the Kubernetes configuration file (kubeconfig) from the Linux user profile to the Windows user profile.

C:\Users\Torben>scp pirate@192.168.0.20:/home/pirate/.kube/config ./.kube/config

Enter the password for the user (pirate) when prompted and perform a directory listing for verification.
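
Note that scp does not create the destination folder; if .kube does not already exist in the Windows profile, create it first with mkdir .kube. A directory listing then confirms the copied file:

C:\Users\Torben>dir .kube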

Copy kubeconfig

Launch VSCode, click Extensions on the Activity bar and search the Marketplace for ‘Kubernetes’.
Click ‘Install’ and install any dependencies if required, such as Kubectl and Helm.
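
The extension can also be installed from the command line. A sketch, assuming the Marketplace identifier of the Kubernetes extension (ms-kubernetes-tools.vscode-kubernetes-tools):

code --install-extension ms-kubernetes-tools.vscode-kubernetes-tools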

VSCode Kubernetes

When installation is complete, select ‘Kubernetes’ from the Activity bar to open.

K8s Extension Installed

In the Kubernetes side bar, hover over Clusters and click the ellipsis ‘…’ to select ‘Set kubeconfig’.
Select ‘+ Add new kubeconfig’ and navigate to the config file copied previously (%USERPROFILE%\.kube\config).
The Clusters node should now be connected to the remote cluster and the nodes can be expanded to show the cluster hosts.
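
Because kubectl on Windows also reads %USERPROFILE%\.kube\config by default, the connection can be verified from any terminal as well:

kubectl config get-contexts
kubectl cluster-info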

K8s Cluster Configured

Expand a node to see the pods.

K8s Cluster Nodes

In the Terminal window, execute kubectl get nodes to get a listing of the cluster nodes.

K8s kubectl nodes

Select the Explorer view on the Activity bar, and object files can now be edited and applied with kubectl directly from Windows.
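
For example, a manifest edited in the Explorer can be applied and checked from the integrated terminal; the file name below is hypothetical:

kubectl apply -f .\my-deployment.yml
kubectl get pods -o wide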

K8s File Explorer

Blinkt! Rainbow on Pi Kubernetes

Summary

Blinkt! is an LED light strip that attaches to the GPIO header on Raspberry Pis. Sealsystems provides a rainbow container that displays a rainbow on the LEDs and can be orchestrated on a Docker cluster using the Swarm orchestrator. This is a description of how to run the rainbow on a Kubernetes cluster.

Rainbow Pi Cluster

Prerequisites

Description

The Rainbow container uses the node package (npm), node-blinkt, to interact with the LEDs through the host GPIO. To access the GPIO, the container needs access to the host's /sys directory, which acts as a virtual filesystem for the system devices (/sys/devices/gpiochip2/gpio/). Mount the host's /sys directory into the container and the Blinkt! package will be able to access the GPIO.

      containers:
        - name: rainbow
          image: sealsystems/rainbow:latest
          volumeMounts:
            - name: sys
              mountPath: /sys
      volumes:
        - name: sys
          hostPath:
            path: /sys

There is only one GPIO header and Blinkt! light per host, so running the rainbow container as a DaemonSet ensures that only one (1) container runs on each host in the cluster.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rainbow

By default, the master node is excluded from the nodes a DaemonSet schedules onto, so in a 5-node cluster only the four (4) worker nodes will light up.

Rainbow Pi Cluster
Default DaemonSet on 5-node cluster
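
The exclusion is caused by the NoSchedule taint placed on the master node during cluster setup (for example by kubeadm). It can be inspected with kubectl describe, using the master hostname from the earlier example; the Taints field should list node-role.kubernetes.io/master:NoSchedule, which is the taint the toleration below addresses.

kubectl describe node k8spiblue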

Use tolerations to enable the DaemonSet to run on the master node.

spec:
# this toleration allows the daemonset to also run on the master node
# key - the taint key that identifies the master node
# effect - the taint effect being tolerated (NoSchedule)
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule

Rainbow Pi Cluster
DaemonSet on 5-node cluster with NoSchedule tolerations
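
Note that newer Kubernetes releases have renamed the master taint key, so on those clusters the DaemonSet may need to tolerate both keys. A sketch of the extended toleration list:

  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule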

Steps

Create a YAML file, rainbow.yml:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rainbow
spec:
  selector:
    matchLabels:
      name: rainbow
  template:
    metadata:
      labels:
        name: rainbow
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: rainbow
          image: sealsystems/rainbow:latest
          volumeMounts:
            - name: sys
              mountPath: /sys
      volumes:
        - name: sys
          hostPath:
            path: /sys

Apply to cluster:
kubectl apply -f rainbow.yml
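
To verify, list the DaemonSet and its pods; the -o wide output shows which node each pod landed on:

kubectl get daemonset rainbow
kubectl get pods -l name=rainbow -o wide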