A service-like container instance deployment method is removing some of the roadblocks to container-based apps in the public cloud.
To capitalize on the benefits of public cloud computing — usage-based costs, minimal time to deployment, easily scaled capacity, reduced administrative overhead, no capital expenses — container-based applications typically require a preparatory foundation in the form of VMs. The server virtualization layer supports the container runtime engine, as well as cluster management software, service discovery and a container repository.
In many container deployment scenarios, the VM layer is an unnecessary hurdle. Organizations that want to run a pilot container deployment, spin up and tear down test environments or otherwise experiment with containers in limited situations need the cloud provider to streamline planning and implementation.
Azure Container Instances (ACI) addresses these scenarios by letting the user spin up containers as a service: instances start and are decommissioned as needed, with the infrastructure and administrative overhead hidden behind the service abstraction. ACI (not to be confused with the Cisco product, Application Centric Infrastructure) handles the underlying runtime infrastructure for users who work with individual containers or container groups.
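To make the workflow concrete, here is a minimal sketch using the Azure CLI. The resource group and container names are hypothetical placeholders, and exact flag spellings vary across CLI versions:

```shell
# Placeholder names: "demo-rg" and "hello-aci" are illustrative, not from the article.
az group create --name demo-rg --location eastus

# Launch a single container instance from a public image; ACI provisions
# and hides the underlying runtime infrastructure.
az container create \
  --resource-group demo-rg \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1.5 \
  --ip-address Public \
  --ports 80

# Decommission the instance when finished; billing stops with it.
az container delete --resource-group demo-rg --name hello-aci --yes
```

The entire lifecycle is three commands, which is the point: no VM, cluster manager or container host to stand up first.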
Azure Container Instances has benefits and limitations, and potential users should understand its relationship to orchestration systems, like Kubernetes, and suitable use cases.
Azure Container Instances features
Much like serverless offerings, such as Azure Functions and AWS Lambda, or a platform as a service (PaaS), such as Azure Web Apps, Microsoft's ACI turns containers into a lightweight service that can be rapidly instantiated, used as needed and then decommissioned at will. Azure Container Instances are available almost instantly and are billed by the second. Imagine using a container for a minute or less and paying accordingly. Major PaaS stacks support a broad selection of languages, but the range is not unlimited. Unlike a PaaS, ACI suits applications written in any language and version, because it runs whatever container image the user supplies; the runtime image can be built with the right development framework, version and system configuration.
ACI also offers embedded hypervisor-level security. Microsoft doesn’t document the details, but given its description of ACI isolation, we can conjecture that the service uses Hyper-V containers. This setup would put each container instance in nested virtualization on an Azure VM.
Capacity is customizable, with configurable core count and memory. By default, ACI containers are stateless, but the service supports Azure file shares (for Linux containers only at the time of writing), meaning instances can mount one or more shares to read and write application data and state.
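A sketch of how an Azure file share might be mounted at creation time, using the CLI's `--azure-file-volume-*` flags. The storage account, share and resource group names here are hypothetical:

```shell
# Placeholder names: "demo-rg", "demostorage" and "acishare" are illustrative.
# Fetch the storage account key needed to mount the share.
STORAGE_KEY=$(az storage account keys list \
  --resource-group demo-rg \
  --account-name demostorage \
  --query "[0].value" --output tsv)

# Create a Linux container instance with the Azure Files share mounted,
# so application data and state survive the container's lifecycle.
az container create \
  --resource-group demo-rg \
  --name stateful-aci \
  --image mcr.microsoft.com/azuredocs/aci-hellofiles \
  --azure-file-volume-account-name demostorage \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name acishare \
  --azure-file-volume-mount-path /aci/data
```

Anything the container writes under `/aci/data` lands in the share and outlives the instance.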
Azure Container Instances supports standard Docker images that can be pulled from an external registry, such as the public Docker Hub or Azure Container Registry.
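Pulling from a private registry, such as Azure Container Registry, works the same way with registry credentials supplied at creation time. The registry name and image tag below are hypothetical:

```shell
# Placeholder names: "demoacr" and "myapp:v1" are illustrative.
# ACR_PASSWORD is assumed to hold a registry credential.
az container create \
  --resource-group demo-rg \
  --name private-aci \
  --image demoacr.azurecr.io/myapp:v1 \
  --registry-login-server demoacr.azurecr.io \
  --registry-username demoacr \
  --registry-password "$ACR_PASSWORD"
```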
There are limitations to ACI. It supports public IP addresses only, with support for private virtual networks and other Azure network services, such as load balancers, planned for the future. While it works with both Linux and Windows container images, not all Windows container features are supported. Containers can have up to four virtual CPUs and 14 GB of memory. Service limits, such as the maximum rate of container creation and deletion, are spelled out in Microsoft's documentation. Azure Container Instances' newness means that it might not yet be available in every region where your company operates.
Similar to Kubernetes pods, ACI runs multicontainer groups that share a host, network and storage; the containers in a group are scheduled together for deployment. Rely on container groups to divide an application into multiple modules: one for the core app code, another for logging and monitoring and a third to pull data from a database or source code repository, for example.
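Container groups can be described declaratively and deployed with `az container create --file`. The sketch below is a hypothetical two-container group (an app plus a logging sidecar); the names, images and API version are illustrative and the exact schema depends on the service version:

```yaml
# Deploy with: az container create --resource-group demo-rg --file group.yaml
apiVersion: 2018-06-01
location: eastus
name: demo-group
properties:
  osType: Linux
  ipAddress:
    type: Public
    ports:
    - protocol: TCP
      port: 80
  containers:
  - name: app                     # core application module
    properties:
      image: mcr.microsoft.com/azuredocs/aci-helloworld
      ports:
      - port: 80
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  - name: log-sidecar             # logging/monitoring module sharing the group
    properties:
      image: busybox
      command: ["sh", "-c", "while true; do echo heartbeat; sleep 30; done"]
      resources:
        requests:
          cpu: 0.5
          memoryInGB: 0.5
```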
What will it cost? Microsoft's ACI pricing combines a flat rate for each instance created with a variable rate that is a function of the instance's memory size, core count and usage time, billed in per-second increments. For example, a two-core instance with 3 GB of memory that's created 50 times a day and used for two-and-a-half minutes each time would cost $17.25 for the month (see the figure).
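The billing model is simple enough to sketch as arithmetic. The per-second rates and creation fee below are illustrative assumptions, not Microsoft's published prices, so the total lands near, but not exactly at, the figure cited above; check the Azure pricing page for current rates:

```shell
# Back-of-the-envelope ACI cost model with ASSUMED rates.
awk 'BEGIN {
  create_fee = 0.0025      # assumed flat fee per container creation ($)
  cpu_rate   = 0.0000125   # assumed $ per vCPU-second
  mem_rate   = 0.0000125   # assumed $ per GB-second
  cores = 2; mem_gb = 3
  seconds_per_run = 150    # two-and-a-half minutes per use
  runs = 50 * 30           # 50 creations a day over a 30-day month
  usage = runs * seconds_per_run * (cores * cpu_rate + mem_gb * mem_rate)
  printf "monthly cost: $%.2f\n", runs * create_fee + usage
}'
# prints: monthly cost: $17.81
```

The structure is what matters: a per-creation charge plus (cores + memory) metered by the second, so short-lived, frequently created instances dominate the creation fee rather than the runtime charge.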
Applications with highly variable or sporadic workloads could be cheaper to run with ACI than with traditional cloud hosting methods, but there's no easy way to compare costs from one provider to another, as application design and workload characteristics heavily influence cost. The speed with which container instances spin up, the ability to maintain state and the granular pricing model encourage ACI's use for projects with aggressive deployment and decommissioning schedules. Estimate the cost of other simple or complex scenarios, including ones that involve other Azure services, with Microsoft's online cost calculator.
ACI and container clusters, Kubernetes
Azure Container Instances does not include the ability to deploy a multinode container cluster; however, that doesn't limit users to container groups on a single machine. Microsoft developed an open source ACI connector for Kubernetes that allows Kubernetes to deploy to Azure instances. Since Microsoft's Azure Container Service (AKS) uses Kubernetes as its cluster management and orchestration platform, users can mix and match ACI with container instances running on self-managed clusters of VM instances. When AKS is paired with ACI, the ACI service finds the host target for a particular container, and the service's inherent scalability means a host is always available, leaving Kubernetes to manage the deployment, upgrades and scaling of multicontainer workloads. There are compelling hybrid scenarios for ACI combined with Kubernetes and Azure compute instances: consider ACI for burstable loads and fast scaling, while conventional VMs in a Kubernetes cluster handle steady-state workloads or those with more predictable scaling patterns.
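As a sketch of the hybrid model, a pod can be steered to the connector's virtual node with a node selector and toleration. The labels below follow the virtual kubelet convention the open source connector adopted; exact names depend on the connector version, and the pod name and image are hypothetical:

```yaml
# Hypothetical pod spec that bursts onto ACI via the connector's virtual node,
# while the rest of the cluster's workloads stay on conventional VM nodes.
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker
spec:
  containers:
  - name: worker
    image: mcr.microsoft.com/azuredocs/aci-helloworld
  nodeSelector:
    type: virtual-kubelet        # schedule onto the ACI-backed virtual node
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists             # tolerate the taint the connector applies
```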
Best ACI uses
While ACI works with Kubernetes and with other Azure services as part of composite application designs, the target deployment model is made up wholly of containerized applications that can be serviced by a single container group and, thus, doesn’t need more than 60 quad-core containers. For such cloud-native applications, ACI is an intriguing option.
Microservices with multiple event-driven components benefit from ACI's almost instantaneous container startup and granular billing model. Also consider ACI for test and development environments kept separate from production, and as a sandbox for container experiments.
ACI vs. AWS Fargate, offerings from Google
Although Azure Container Instances is the first pure, VM-less container service, AWS quickly countered with Fargate to abstract away container clusters. At the time of publication, Fargate, released at the re:Invent conference in November 2017, is only available in one AWS region, and documentation is sparse. One early user comparing the two indicated that Fargate does come “with all the configuration and setup flexibility/baggage [of Elastic Container Service] attached.”
Google Cloud Platform doesn’t offer anything directly comparable to ACI. However, Google Kubernetes Engine, its managed Kubernetes service that uses Google Compute Engine compute instances, has similar operating characteristics due to the efficiency of GCE and a granular billing model. GCE instances can boot and run startup code within a minute, which makes possible an ACI-like, event-driven deployment model for Kubernetes cluster nodes. Since the Google App Engine PaaS runs each instance in a container, users can construct a managed container service via GAE in a so-called “flexible environment,” which consists of a custom runtime environment specified as a Dockerfile.
Azure Container Instances and AWS Fargate are the start of a trend; expect all the dominant cloud providers to embrace and extend the notion of fully managed container services that insulate users from the underlying infrastructure.