Build a Kubernetes cluster on Azure Container Service

There are important differences between running containers in development and running them in production. Modern, production-grade applications need to scale to meet fluctuations in demand, and the infrastructure they run on needs to be resilient to individual component failures. As a result, most public cloud providers offer services that can accommodate containers in production, such as Microsoft's Azure Container Service.

With Azure Container Service, you can build a cluster of VMs that are preconfigured with Docker container support. Along with that cluster, you select an open source container orchestration tool — Docker Swarm, DC/OS or Kubernetes — to manage and scale your containerized apps. These orchestration tools ensure that the services the containers provide can be load balanced across multiple nodes in the cluster and can scale horizontally to meet spikes in demand.

Kubernetes, which Google developed and open sourced in 2014, has become especially popular. The platform is now the standard to manage containerized applications in production.

Kubernetes on Azure Container Service: Step by step

Azure Container Service, along with its support for Kubernetes, is still relatively new. Because of that, building a Kubernetes cluster in Azure Container Service requires several steps. Let’s take a look at the current process, and explore how to deploy and scale a service that is powered by Docker containers.

Step 1: Install the Azure CLI 2.0

You can use the Azure portal as a graphical interface to build an Azure Container Service cluster. Click New in the portal, search for Azure Container Service, and then create a new resource. This launches an Azure Resource Manager template that builds a new Azure Container Service cluster. However, to do this, you'll first need to create a Secure Shell (SSH) key pair and an Azure Active Directory (AD) service principal. Microsoft documents this process well, but it still involves a number of steps.
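If you do take the portal route, you create those prerequisites yourself first. As a rough sketch -- the key path and service principal name below are placeholders you would choose, not required values:

# generate an SSH key pair for the cluster VMs
ssh-keygen -t rsa -b 2048 -f ~/.ssh/acs_rsa

# create an Azure AD service principal for the cluster to use
az ad sp create-for-rbac --name acs-k8s-sp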

Alternatively, it’s easier to use the Azure command-line interface (CLI) to create your cluster. With a single command, you can create the cluster, as well as generate SSH keys and the required Azure AD service principal.

The Azure CLI is cross-platform, so you can use it on Windows, Mac and Linux. The installation steps are covered in detail in Microsoft's documentation.
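As a quick sketch of what installation looks like on macOS or Linux -- the exact steps vary by platform, so treat these as examples rather than the canonical procedure:

# macOS, via Homebrew
brew install azure-cli

# Linux, via Microsoft's install script
curl -L https://aka.ms/InstallAzureCli | bash

# then sign in to your Azure subscription
az login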

Step 2: Create the Azure Container Service cluster

Once you install the Azure CLI, you can create the cluster. Below, you can see the command I used to build mine. Note that this is a single command, broken across multiple lines with the backslash line-continuation character:

az acs create --name kubecluster \
    --dns-prefix k8scluster2017 \
    --resource-group ACSRG \
    --orchestrator-type kubernetes \
    --generate-ssh-keys

You can see I used the az acs create command to create the Azure Container Service cluster. You'll want to provide your own unique DNS prefix and Resource Group name. First, create an empty Resource Group and use the name of your new group with the --resource-group parameter.
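Creating that empty Resource Group is a one-liner. For example, assuming the ACSRG name used above and an example region:

# create an empty Resource Group to hold the cluster
az group create --name ACSRG --location eastus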

After five to 10 minutes, you will have a cluster with one Kubernetes master VM and three agent node VMs. The agents will be the VMs that host the containers for the services that run on the cluster.

Step 3: Launch a service on the Azure Container Service cluster

To manage the cluster, install the Kubernetes CLI using the az acs kubernetes install-cli command.
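As a sketch, and assuming the resource group and cluster names used earlier, the install is followed by pulling down the cluster's credentials so kubectl can connect to it:

# install kubectl locally
az acs kubernetes install-cli

# merge the cluster's kubeconfig into your local configuration
az acs kubernetes get-credentials --resource-group ACSRG --name kubecluster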

Then, you can use the kubectl command to manage your Kubernetes cluster. kubectl includes a number of subcommands, such as ones to view your current nodes and deploy your first service. First, use the kubectl get nodes command to view the nodes in the cluster (Figure 1).

View nodes with kubectl
Figure 1. View nodes with kubectl

In Figure 1, we have three agent nodes that can host services, as well as a master node that controls the cluster.

To launch a new service on the cluster, you can select from a number of container images. As an example, use the kubectl run nginx --image nginx command to create an nginx service. This launches a new container with the nginx image on one of the agent nodes.
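To confirm the deployment landed, a couple of standard kubectl queries will do; the exact output will vary:

# list deployments and the pods they created
kubectl get deployments
kubectl get pods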

After you create that service, you can enable public access to it. To do so, expose the service through an Azure Load Balancer, created as part of the Azure Container Service deployment, with the following command:

kubectl expose deployment nginx --port=80 --type=LoadBalancer
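Provisioning the public IP on the load balancer takes a few minutes. You can watch for it with the command below; the EXTERNAL-IP column will change from pending to a routable address:

# watch the service until it receives an external IP
kubectl get service nginx --watch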

Step 4: Scale a service through the web UI

In addition to the command-line tools, you can access the Kubernetes web UI to manage and scale your services. Use the kubectl proxy command to create a proxy to the Kubernetes master node.

Now, you can open up a web browser locally and navigate to http://localhost:8001/ui. You should see a web console similar to the one below in Figure 2.

Kubernetes web console
Figure 2. Kubernetes web console

On the left, under Workloads, click on Deployments. You’ll see a screen similar to the one below in Figure 3.

Kubernetes deployments
Figure 3. Kubernetes deployments

Notice that the deployment in Figure 3 runs our nginx service on a single pod. In Kubernetes, pods are the smallest deployable units -- each a group of one or more containers -- and the scheduler places them onto the agent nodes, which run as VMs in Azure.

To manually scale the service and add additional pods, click the drop-down just to the right of the service name and click on View/edit YAML. The replicas property should currently be set to 1. Edit this field to 2 or 3 to manually scale the service, and click Update when you're done (Figure 4).

Scale Kubernetes
Figure 4. Scale Kubernetes
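If you prefer to stay in the terminal, the same scaling operation is a single kubectl command. For example, to run three replicas of the nginx deployment:

# scale the nginx deployment to three replicas
kubectl scale deployment nginx --replicas=3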

At this point, you can navigate back to Deployments to see that multiple pods now support the nginx service (Figure 5).

Kubernetes deployments after scaling
Figure 5. Kubernetes deployments after scaling
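From the CLI, you can verify the new replicas and see which agent nodes they were scheduled onto:

# show each pod along with the node it runs on
kubectl get pods -o wide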

Kubernetes has gained a reputation for being one of the most mature, but also one of the most complex, container orchestration engines available. Although admins must follow a number of steps to set up a Kubernetes cluster on Azure Container Service, it’s still easier than it would be to build one from scratch.


