Go Docker: Hello-Go with multi-stage build

Summary

This post covers a Hello-Go web application written in Go and hosted in a Docker container. The solution uses a Docker multi-stage build to create the container image, and the application displays ‘Hello-Go’ in response to web requests.

Prerequisites

  • Windows 10 with WSL 2 and a Linux distribution such as Ubuntu
  • Docker available from the WSL terminal
  • Visual Studio Code
  • Windows Terminal

Description

Go is an open source programming language from Google with a syntax influenced by C. It is a compiled language with multi-platform support across Linux, Windows, macOS and more. With the runtime’s performance and the language’s succinct syntax, Go is well suited for building a lightweight web application. The compiled application will be published as a Docker image so it can run on a container platform.

A Docker multi-stage build process will be used to separate the build and runtime images. Separating the two keeps build-only files and tools out of the runtime image, which reduces the image size. Removing unrelated tooling also hardens the image by shrinking the attack surface of the running container.

The following diagram represents the subsystems involved in building the Hello-Go web application.

Process for Go source code to Docker image via multi-stage build

In this example we’ll build a Linux image using Windows 10 WSL 2.

Steps

1. Using Windows Terminal, open a WSL Linux terminal (such as Ubuntu), create a source folder, hello-go, and open the folder in Visual Studio Code

Windows Terminal with Ubuntu, creating a source folder and opening VS Code.

2. Create a source file named ‘hello-go.go’

VSCode with a hello-go.go source file

3. Enter the following code for the hello-go web application

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Respond to every request with the text "Hello-Go".
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello-Go")
	})
	// Listen for HTTP requests on port 8080.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Hello-go source code in VSCode
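
Optionally, the application can be sanity-checked before containerizing it, assuming the Go toolchain is installed in the WSL distribution:

go run hello-go.go
curl http://localhost:8080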

4. Create a Dockerfile for the multi-stage build, named ‘Dockerfile’ (no extension)

Dockerfile for docker image build

5. In the first section of the Dockerfile, we define the base image that will be used for the runtime image and expose the port the hello-go application listens on, port 8080.

FROM alpine:latest as base
EXPOSE 8080

6. Next, we define the build image, which contains the Go compiler used to compile the .go source. We create a /build folder, copy the source into it, and compile using the ‘go build’ command.

FROM golang:1.15.2-alpine3.12 as build
RUN mkdir /build
ADD . /build
WORKDIR /build
RUN go build -o hello-go .
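
Note: the golang:1.15.2-alpine3.12 build image and the alpine:latest runtime image share the same C library (musl), so the default ‘go build’ output runs in the runtime image as-is. If the runtime stage were changed to a different base such as scratch, the build step would typically disable cgo to produce a fully static binary, for example:

RUN CGO_ENABLED=0 go build -o hello-go .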

7. With the application compiled, we can add the final stage by copying the build output to the runtime image defined at the beginning of the Dockerfile. We create an /app folder in the runtime image, copy the build output to it, and specify the command to execute the application when the container runs.

FROM base as final
RUN mkdir /app
WORKDIR /app
COPY --from=build /build .
CMD ["/app/hello-go"]
Complete Dockerfile entered with multi-stage build configuration
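
One optional refinement, not required for this example: ‘COPY --from=build /build .’ copies the entire contents of the /build folder, including the source files, into /app. Copying only the compiled binary would keep the runtime image slightly leaner:

COPY --from=build /build/hello-go .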

8. In Visual Studio Code, go to the Terminal menu and select New Terminal

9. Enter and run the following docker command to build the image and tag it hello-go:latest

docker build -t hello-go:latest .

10. Verify the docker build completed successfully

Compilation success output in terminal after running Docker Build

11. Run the container and publish port 8080 on the host so it is accessible

docker run -d -p 8080:8080 hello-go:latest
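
Optionally, confirm the container is up before calling it (docker ps lists running containers):

docker ps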

12. Call the web application using ‘curl’

curl http://localhost:8080

13. Verify that the curl response displays ‘Hello-Go’

VSCode terminal window used to request the url with curl and displaying the Hello-Go output

14. List the build and runtime images to see the size differences

docker images | grep 'hello-go\|golang'
VSCode terminal output shows image sizes of the golang build and hello-go runtime images

15. Notice the size difference: 300MB for the build image, golang, versus 12MB for the runtime image, hello-go.

Blazor App: An error has occurred. This application may no longer respond until reloaded. Reload

Summary

Running a Blazor Server application and refreshing the page sometimes creates the following error for the user:

An error has occurred. This application may no longer respond until reloaded. Reload
Blazor Server application in a web browser displaying the error message.
Example error message displayed to the user

Turning on browser debugging may show connection errors similar to the following:

blazor.server.js:1 WebSocket connection to 'ws://helloblazor.test/_blazor?id=gsHIh62GVm39WvDVxjpJMg' failed: Error during WebSocket handshake: Unexpected response code: 404
---
Error: Failed to start the transport 'WebSockets': Error: There was an error with the transport.
---
GET http://helloblazor.test/_blazor?id=oobUaSlKLPyrC5RPZVg3uw&_=1597287612026 404 (Not Found)
---
Error: Failed to start the transport 'LongPolling': Error: Not Found
---
Error: Failed to start the connection: Error: Unable to connect to the server with any of the available transports. WebSockets failed: Error: There was an error with the transport. ServerSentEvents failed: Error: 'ServerSentEvents' does not support Binary. LongPolling failed: Error: Not Found
---
Error: Error: Unable to connect to the server with any of the available transports. WebSockets failed: Error: There was an error with the transport. ServerSentEvents failed: Error: 'ServerSentEvents' does not support Binary. LongPolling failed: Error: Not Found
---
Uncaught (in promise) Error: Cannot send data if the connection is not in the 'Connected' State.
    at e.send (blazor.server.js:1)
    at e.sendMessage (blazor.server.js:1)
    at e.sendWithProtocol (blazor.server.js:1)
    at blazor.server.js:1
    at new Promise (<anonymous>)
    at e.invoke (blazor.server.js:1)
    at e.<anonymous> (blazor.server.js:15)
    at blazor.server.js:15
    at Object.next (blazor.server.js:15)
    at blazor.server.js:15

These errors show that the connection transports failed: WebSocket and LongPolling were attempted and could not connect, while ServerSentEvents was skipped because it does not support the binary message format.

This connection error can occur when the host is running in a load-balanced server environment. In that type of environment, change the load balancer configuration so the user has server affinity (also referred to as sticky sessions). Enabling server affinity ensures the user’s connection is re-established to the same server on refreshes.

Description

Load balancing two (2) or more servers helps ensure application availability in case one of the servers experiences an outage. A server outage could be caused by a component failure (hardware/software) or by being overloaded relative to its physical capacity (CPU, memory, disk, network). Adding a second server in a load-balanced configuration allows users to be connected to any of the servers that are available to accept the request.

Load Balancer illustration for distributing load between two servers and to one server if the other becomes unavailable

For many web server type applications, the content the server provides to the user does not require the user to stay connected to the server for long durations. They work on a request-response pattern where the user’s browser requests content, such as ‘homepage.htm’, and the server responds by sending the content back. Once the request-response round trip has completed, the user has a local copy from the server and disconnects. Next, if the user refreshes the browser for the same homepage.htm, a 2nd request-response round trip is performed. As the user was disconnected from the server after the 1st request-response completed, a load balancer may forward the 2nd request to a different server to respond.

Illustration of 1st and 2nd request to a page being load balanced to 2 different web servers

When a load balancer operates in this mode, where a request can receive a response from any available server, it is referred to as having ‘no affinity’. With no affinity, the load balancer can use different algorithms to decide how to distribute requests among the available servers. A common algorithm is round-robin, where the load balancer gives each server a turn as requests are received. When all available servers have taken a turn, it starts over again from the first server in the round-robin list.

Image of round robin load balancing
Load Balancer with No Affinity and Round-Robin algorithm
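
To make the round-robin idea concrete, here is a minimal sketch in Go of how such a selection could look (an illustration only, not code from any actual load balancer):

package main

import "fmt"

// roundRobin gives each server a turn, then starts over from the first.
type roundRobin struct {
	servers []string
	next    int
}

func (rr *roundRobin) pick() string {
	server := rr.servers[rr.next]
	rr.next = (rr.next + 1) % len(rr.servers)
	return server
}

func main() {
	lb := roundRobin{servers: []string{"Server 1", "Server 2"}}
	for request := 1; request <= 4; request++ {
		// Requests alternate: Server 1, Server 2, Server 1, Server 2
		fmt.Println("request", request, "->", lb.pick())
	}
}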

Just as a load balancer can operate with ‘no affinity’, it can also operate with ‘affinity’. With affinity enabled, the load balancer will stick all requests from a specific user to the same server. This is also referred to as ‘sticky sessions’. The ‘session’ part stems from the stickiness often being temporal in nature, either because the load balancer expires it or because the server becomes unavailable. If the server becomes unavailable, the load balancer will establish affinity to a new server for the user.

Image of load balancer with affinity
Load Balancer configured with Affinity

With the background of load balancing and affinity rules behind us, let’s come back to the beginning of this article, where the user is experiencing an error in the Blazor Server application. The web server request-response pattern we reviewed above is a form of one-way communication: the user calls the web server and the web server responds, but the web server does not call the user on its own. This one-way communication is foundational to the HTTP protocol used on the Internet.

A Blazor Server application works over the WebSocket protocol. This protocol allows two-way communication, where both the user and the web server can initiate requests to each other. For this two-way communication to work, the user is the initiating party: the browser starts a request to the server over the HTTP protocol and then negotiates a protocol transition to WebSocket if it is supported. Once the WebSocket connection is established, both the user and the web server can initiate a call to each other, enabling two-way communication.
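
As an illustration of that protocol transition, here is a minimal Go sketch using the gorilla/websocket package (illustrative only; Blazor Server uses SignalR and ASP.NET Core, not this code). The request arrives as an ordinary HTTP request and is upgraded to a WebSocket connection, after which either side can send messages:

package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func main() {
	http.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
		// The browser sends a normal HTTP GET with an Upgrade: websocket header;
		// Upgrade performs the handshake and switches the connection to WebSocket.
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			log.Println("upgrade failed:", err)
			return
		}
		defer conn.Close()
		// Once upgraded, the server can push messages without waiting for a request.
		if err := conn.WriteMessage(websocket.TextMessage, []byte("hello from the server")); err != nil {
			log.Println("write failed:", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}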

For the web server to call the user, it needs to know which of its connections belongs to which user so it sends the request to the right recipient. This logic is provided by the Blazor Server framework and is transparent to the application code, but conceptually it looks similar to the following:

Image depicting User connections on Web Server for WebSocket protocol

As we experienced in the early days of cellular phones (thankfully less often these days), dropped calls can happen. With cellular phones, we call the other party back and continue the conversation where we left off, because we both have ‘memory’ of the conversation. Like a cellular phone call, a WebSocket connection may experience a ‘drop’ and will need to be re-established. In order to keep the communication going from the point where it dropped, both the user and the web server need ‘memory’ of each other, like the people on the phone call. This memory is held on both the user and web server side so that when communication is re-established, both parties remember each other and continue from where they dropped. All of this memory and connection re-establishment is handled for us by the Blazor Server framework.

This memory-and-reconnect behavior works fine when the load balancer only sends us to one web server, as above. But what happens to our memory on a reconnect if we are in a multi-web-server environment that is load balanced with no affinity?

Image of reconnect to new web server through load balancer with no affinity
Blazor Server application reconnect through load balancer with no affinity

As depicted above, if the user was having a two-way communication with Server 1 that drops, then the ‘memory’ of the communication is between the user and Server 1. When the connection reconnects to Server 2, Server 2 has no memory of the prior communication. It’s like calling your friend back on the cellular phone to continue the conversation, only to realize you called the wrong number. The Blazor Server framework accounts for this situation, so instead of having Server 2 play along in a conversation it has no memory of, it responds with a ‘wrong number’ (404 Not Found) to inform the user that it will not establish the connection. This ultimately leads to a connection error on the user side, which can be seen in the browser debugger with the following connection errors:

blazor.server.js:1 WebSocket connection to 'ws://helloblazor.test/_blazor?id=gsHIh62GVm39WvDVxjpJMg' failed: Error during WebSocket handshake: Unexpected response code: 404
---
Error: Failed to start the transport 'WebSockets': Error: There was an error with the transport.
---
GET http://helloblazor.test/_blazor?id=oobUaSlKLPyrC5RPZVg3uw&_=1597287612026 404 (Not Found)

That is a very polite gesture, rather than playing a prank and attempting to carry on a conversation it knows nothing about 🙂

The Blazor Server framework comes with additional backup transports if WebSockets fail: ServerSentEvents (SSE) and long polling. However, as the reconnect is going to the wrong server, all three (3) transports fail, as seen in the subsequent errors:

Error: Failed to start the transport 'LongPolling': Error: Not Found
---
Error: Failed to start the connection: Error: Unable to connect to the server with any of the available transports. WebSockets failed: Error: There was an error with the transport. ServerSentEvents failed: Error: 'ServerSentEvents' does not support Binary. LongPolling failed: Error: Not Found
---
Error: Error: Unable to connect to the server with any of the available transports. WebSockets failed: Error: There was an error with the transport. ServerSentEvents failed: Error: 'ServerSentEvents' does not support Binary. LongPolling failed: Error: Not Found

As indicated by the errors, ServerSentEvents never actually tried to connect because that transport does not support the binary message format. Only WebSocket and long polling attempted to reconnect.

This is analogous to first calling the friend back at the wrong number on the cellular phone (WebSocket) and then retrying the wrong number on a landline (long polling). Both phones are dialing the wrong number. The end result is a connection failure, as both attempts failed:

Uncaught (in promise) Error: Cannot send data if the connection is not in the 'Connected' State.
    at e.send (blazor.server.js:1)
    at e.sendMessage (blazor.server.js:1)
    at e.sendWithProtocol (blazor.server.js:1)
    at blazor.server.js:1
    at new Promise (<anonymous>)
    at e.invoke (blazor.server.js:1)
    at e.<anonymous> (blazor.server.js:15)
    at blazor.server.js:15
    at Object.next (blazor.server.js:15)
    at blazor.server.js:15

With this understanding of load balancer affinity behavior, we can change the load balancer from ‘no affinity’ to ‘affinity’ to ensure that it always sends the user back to the same server upon reconnects.

Image of load balancer configured with affinity allowing Blazor Server reconnects
Load Balancer configured with affinity allowing Blazor Server reconnects

With load balancer affinity enabled, we should now be able to reconnect successfully and verify in the browser debugger that the connection handshake succeeded:

Information: WebSocket connected to ws://helloblazor.test/_blazor?id=NfBNcfX6EMrOjIxWpDUwsg.

This is not the only possible cause of the ‘An error has occurred’ message, but it is one to explore, along with reviewing the browser debugger output for additional information to help troubleshoot. As multiple replicas are common for server availability and easy to set up with Docker containerization, this can be an early cause to check off the troubleshooting list.

See Docker Blazor App on Linux ARM for additional information on containerizing Blazor Server applications.

Azure DevOps: ./config.sh: line 85: ./bin/Agent.Listener: cannot execute binary file: Exec format error

Summary

Registering a Deployment Group agent for Azure DevOps generates the following error using the Linux registration script on a Linux ARM 32-bit operating system (such as Raspberry Pi):

./config.sh: line 85: ./bin/Agent.Listener: cannot execute binary file: Exec format error

To resolve the error, change the link to the agent package in the default registration script from Linux x64 to ARM:

from 'vsts-agent-linux-x64-2.173.0.tar.gz' to 'vsts-agent-linux-arm-2.173.0.tar.gz'

The resulting registration script will look similar to the following:

mkdir azagent;cd azagent;curl -fkSL -o vstsagent.tar.gz https://vstsagentpackage.azureedge.net/agent/2.173.0/vsts-agent-linux-arm-2.173.0.tar.gz;tar -zxvf vstsagent.tar.gz; if [ -x "$(command -v systemctl)" ]; then ./config.sh --deploymentpool --deploymentpoolname "SandboxDeploy" --acceptteeeula --agent $HOSTNAME --url https://dev.azure.com/torben/ --work _work --runasservice; sudo ./svc.sh install; sudo ./svc.sh start; else ./config.sh --deploymentpool --deploymentpoolname "SandboxDeploy" --acceptteeeula --agent $HOSTNAME --url https://dev.azure.com/torben/ --work _work; ./run.sh; fi

Note: version numbers (2.173.0) are updated over time so the above referenced versions may have changed since this writing. The deployment pool name and Azure DevOps account will also be unique to the setup.

Re-run the registration script; with the processor architecture and agent now matching, the exec format error should be resolved.

Description

After executing the deployment group registration on a Linux ARM processor architecture, the agent may fail to start with the following error:

./config.sh: line 85: ./bin/Agent.Listener: cannot execute binary file: Exec format error

The ‘Exec format error’ indicates that the application was not compiled for the target processor architecture, ARM 32-bit. In reviewing the registration script, it becomes clear that the script is pulling an agent built for a Linux x64 target architecture (vsts-agent-linux-x64-2.173.0.tar.gz).
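
A quick way to confirm the mismatch is to compare the machine architecture with the downloaded binary (assuming the agent was extracted into the azagent folder):

uname -m
file ./bin/Agent.Listener

On a 32-bit Raspberry Pi, ‘uname -m’ typically reports armv7l, while ‘file’ identifies the Linux x64 agent’s Agent.Listener as an x86-64 ELF executable, confirming the architecture mismatch.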

Unfortunately, there is not a mechanism to select the processor type on the deployment pool registration screen; it only offers the kernel (Windows or Linux) as an option, not the processor architecture (ARM or x64/x86). However, the deployment group targets use the same agent application as Agent Pools, so we can get the URL for the ARM version by going to Agent Pools, adding an agent, and selecting Linux : ARM.

vsts-agent-linux-arm-2.173.0.tar.gz
Azure DevOps image for Add Agent and selecting kernel and processor architecture for the agent binary

Next we change the original Deployment Group registration script by replacing

vsts-agent-linux-x64-2.173.0.tar.gz

with the ARM version and rerun the updated deployment group registration script:

mkdir azagent;cd azagent;curl -fkSL -o vstsagent.tar.gz https://vstsagentpackage.azureedge.net/agent/2.173.0/vsts-agent-linux-arm-2.173.0.tar.gz;tar -zxvf vstsagent.tar.gz; if [ -x "$(command -v systemctl)" ]; then ./config.sh --deploymentpool --deploymentpoolname "SandboxDeploy" --acceptteeeula --agent $HOSTNAME --url https://dev.azure.com/torben/ --work _work --runasservice; sudo ./svc.sh install; sudo ./svc.sh start; else ./config.sh --deploymentpool --deploymentpoolname "SandboxDeploy" --acceptteeeula --agent $HOSTNAME --url https://dev.azure.com/torben/ --work _work; ./run.sh; fi

As the processor architecture now matches ARM 32-bit, the registration should succeed with the agent running as a service.

Image of Linux console showing the deployment group agent as active and running as a service
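
The service state can also be checked at any time from the agent folder, assuming the run-as-service path was taken during registration:

sudo ./svc.sh status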

The agent will now be available for handling Azure DevOps Pipeline Releases and performing deployments to the target resource(s).

Next Steps

  • Presumably this is a temporary solution until the Azure DevOps user interface for Deployment Groups gets updated to match the Agent Pool kernel and processor selection or equivalent.
  • For multi-processor agent pools, refer to ‘Targeting multi-processor architectures with Azure DevOps’ for additional details.