“Packing Windows”: exploring container technology from Microsoft

A brief introduction to Windows containers in Windows Server 2016

One of the notable new features introduced in Windows Server 2016 is support for containers. Let's get to know it better.

Modern systems long ago moved away from the principle of one OS per server. Virtualization technologies make more efficient use of server resources by running multiple operating systems side by side, separating them from each other and simplifying administration. Then came microservices, which let you deploy isolated applications as separate, easily managed and scalable components. Docker changed everything. The process of delivering an application together with its environment became so simple that it could not fail to interest end users. An application inside a container works as if it had a full-fledged OS to itself. But unlike virtual machines, containers do not load their own copies of the OS, libraries, system files, and so on. Instead, each container receives an isolated namespace in which all the resources the application needs are available, but which it cannot escape. If settings need to change, only the differences from the main OS are saved. As a result a container, unlike a virtual machine, starts very quickly and puts less load on the system, using server resources more efficiently.

Containers on Windows

In Windows Server 2016, in addition to the existing virtualization technologies (Hyper-V and Server App-V virtual applications), support for Windows Server Containers has been added, implemented through the Container Management stack, an abstraction layer that provides all the necessary functions. The technology was announced back in Technical Preview 4, but much has been simplified since then, and instructions written earlier are largely obsolete. Two types of native containers are offered: Windows containers and Hyper-V containers. Perhaps the other headline feature is the ability to manage containers with Docker tools in addition to PowerShell cmdlets.

Windows containers resemble FreeBSD Jail or Linux OpenVZ in principle: they share a single kernel with the OS, which, along with other resources (RAM, network), is divided among them. OS files and services are mapped into the namespace of each container. This type of container uses resources efficiently, reducing overhead, and therefore allows applications to be packed more densely. Since the container's base image shares a kernel with the host, their versions must match, otherwise operation is not guaranteed.

Hyper-V containers add an extra isolation layer: each container is allocated its own kernel and memory, and isolation is enforced not by the OS kernel but by the Hyper-V hypervisor (the Hyper-V role is required). The result is less overhead than virtual machines but more isolation than Windows containers, and the container no longer has to share the host's kernel version. These containers can also be deployed on Windows 10 Pro/Enterprise. It is especially worth noting that the container type is chosen not at creation time but at deployment time: any container can be run either as a Windows container or as a Hyper-V container.

A trimmed-down Server Core or Nano Server serves as the OS in the container. The first appeared in Windows Server 2008 and provides greater compatibility with existing applications. The second is even more stripped down than Server Core: it is designed to run headless, allowing the server to run in the smallest possible configuration for use with Hyper-V, file servers (SOFS), and cloud services, and it requires 93% less space. It contains only the most necessary components (.NET with CoreCLR, Hyper-V, Clustering, and so on).

For storage, the VHDX hard disk image format is used. As with Docker, containers are saved as images in a repository. Each image stores not a complete set of data but only the differences between it and the base image, and at launch time all the necessary data is projected into memory. A Virtual Switch is used to manage network traffic between the container and the physical network.

In March 2013, Solomon Hykes announced the start of an open source project that later became known as Docker. In the months that followed, it was strongly supported by the Linux community, and in the fall of 2014, Microsoft announced plans to implement containers in Windows Server 2016. WinDocks, which I co-founded, released an independent open source version of Docker for Windows in early 2016 with a focus on first-class container support in SQL Server. Containers are quickly becoming the focus of attention in the industry. In this article, we'll take a look at containers and their use by SQL Server developers and DBAs.

Container Organization Principles

Containers define a new method of packaging applications, combined with user and process isolation, for multitenant applications. Various container implementations for Linux and Windows have been around for years, but with the release of Windows Server 2016, we have the de facto Docker standard. Today, the Docker container API and format are supported on AWS, Azure, and Google Cloud public services and on all Linux and Windows distributions. Docker's elegant framework has important advantages:

  • Portability. Containers contain application software dependencies and run unchanged on a developer's laptop, a shared test server, and any public service.
  • Ecosystem of containers. The Docker API is the focus of industry innovation with solutions for monitoring, logging, data storage, cluster orchestration, and management.
  • Compatibility with public services. Containers are designed for microservices architecture, scale-out, and transient workloads. Containers are designed to be removed and replaced at will, rather than repaired or upgraded.
  • Speed and economy. Containers are created in seconds, and effective multi-tenant support is provided. For most users, the number of virtual machines is reduced by three to five times (Figure 1).

SQL Server Containers

SQL Server has supported named instance multitenancy for ten years, so what's the value of SQL Server containers?

The point is that SQL Server containers are more practical due to their speed and automation. SQL Server containers are named instances, with data and settings, provisioned in seconds. The ability to create, delete, and replace SQL Server containers in seconds makes them more practical for development, QA, and other use cases discussed below.
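As an illustration, the lifecycle looks much like any Docker workflow (a sketch only: the image name mssql-2014 is a placeholder, and the exact commands depend on the Docker or WinDocks engine in use):

```powershell
# Create and start a SQL Server container from a base image
docker create mssql-2014        # prints the new ContainerID
docker start <ContainerID>      # the named instance comes up in seconds
# Replace rather than repair: tear down and recreate at will
docker stop <ContainerID>
docker rm <ContainerID>
```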

With their speed and automation, SQL Server containers are ideal for development and QA environments. Each member of the team works with isolated containers on a shared virtual machine, yielding a three-to-five-fold reduction in the number of virtual machines. The result is significant savings on virtual machine maintenance and Microsoft licensing costs. Containers also integrate easily with Storage Area Network (SAN) arrays using storage replicas and database clones (Figure 2).

A 1 TB attached database is instantiated in a container in less than one minute. This is a significant improvement over servers with dedicated named instances or provisioning a virtual machine for each developer. One company uses an eight-core server to serve up to twenty 400 GB SQL Server containers. In the past, each VM took over an hour to provision; container instances are provisioned in two minutes. The company was thus able to reduce the number of virtual machines 20-fold, cut the number of processor cores 5-fold, and dramatically reduce Microsoft licensing costs, while gaining business flexibility and responsiveness.

Using SQL Server Containers

Containers are defined using Dockerfile scripts, which provide specific steps for building a container. The Dockerfile shown in Figure 1 specifies SQL Server 2012 with databases copied to the container and a SQL Server script to mask selected tables.
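The listing below is a hypothetical sketch of such a Dockerfile (the base image name, file names, and the RUN step are placeholders, not the actual Figure 1 listing):

```dockerfile
# Hypothetical: build a SQL Server 2012 container with a database and a masking script
FROM mssql-2012
# Copy the database and the masking script into the container
COPY venture.mdf .
COPY maskPII.sql .
# Run the SQL script at build time to mask selected tables
RUN maskPII.sql
```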

Each container can contain dozens of databases with auxiliary files and log files. Databases can be copied and run in a container or mounted using the MOUNTDB command.

Each container contains a private file system isolated from host resources. In Figure 2, the container is built using MSSQL-2014 and venture.mdf. A unique ContainerID and container port are generated.


Figure 2: SQL Server 2014 container and venture.mdf

SQL Server containers provide a new level of performance and automation, but they behave exactly like conventional named instances. Resource management can be implemented with SQL Server tooling or through container resource limits (Figure 3).
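With the Docker tooling, container resource limits of this kind are set with standard docker run flags (a sketch; the image name is illustrative, and the flags available vary by engine version):

```powershell
# Cap the container at 2 GB of RAM and a reduced CPU share
docker run -d -m 2g --cpu-shares 512 --name sql1 mssql-2014
```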

Other Applications

Containers are most commonly used to organize development and QA environments, but other uses are emerging. Disaster recovery testing is a simple yet promising use case. Others include containerizing the SQL Server back end of legacy applications such as SAP or Microsoft Dynamics; the containerized back end provides a working environment for support and ongoing maintenance. Containers are also being evaluated for supporting production environments with persistent data stores. In a following article, I will cover persistent data in detail.

WinDocks is committed to making containers even easier to use through a web interface. Another project focuses on using SQL Server containers in DevOps and Continuous Integration with CI/CD pipelines based on Jenkins or TeamCity. Today you can get familiar with containers on all editions of Windows 8 and Windows 10, Windows Server 2012, or Windows Server 2016, with support for all editions starting with SQL Server 2008, using your copy of WinDocks Community Edition (https://www.windocks.com/community-docker-windows).

In today's Ask the Admin, I'll show you how to deploy an image in a container on Windows Server 2016, create a new image, and push it to Docker Hub.

One of the major new features in Windows Server 2016 is support for containers and Docker. Containers provide a lightweight and flexible virtualization experience that developers can use to quickly deploy and update applications without the overhead of virtual machines. And coupled with Docker, a container management solution, container technology has exploded over the past few years.

This is an update to information previously included in Deploying and Managing Windows Server Containers with Docker, which was current for Windows Server 2016 Technical Preview 3. For more information about Docker, see What is Docker? and Are Docker containers better than virtual machines? on the Petri IT Knowledgebase.

To follow the instructions in this article, you'll need access to a physical or virtual server running Windows Server 2016. You can download an evaluation copy from the Microsoft website or set up a virtual machine on Microsoft Azure. You will also need a free Docker ID, which you can get by registering.

Install Docker Engine

The first step is to install Docker support on Windows Server 2016.

  • Sign in to Windows Server.
  • Click the Search icon on the taskbar and type PowerShell in the search box.
  • Right-click Windows PowerShell in the search results and select Run as administrator from the menu.
  • Enter administrator credentials when prompted.

To install Docker on Windows Server, run the following PowerShell cmdlet. You will be prompted to install NuGet, which is needed to download the Docker PowerShell module from a trusted online repository.

Install-Module -Name DockerMsftProvider -Force

Now use the Install-Package cmdlet to install the Docker engine on Windows Server. Note that a reboot is required at the end of the process.

Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

After the server restarts, reopen the PowerShell prompt and verify that Docker is installed by running the following command:

docker version

Download an image from Docker and start a container process

Now that the Docker engine is installed, let's pull the default Windows Server Core image from Docker:

docker pull microsoft/windowsservercore

Now that the image has been downloaded to the local server, start a container process using docker run:

docker run microsoft/windowsservercore

Create a new image

We can now create a new image using the previously downloaded Windows Server image as a starting point. You will need a Docker ID before running. If you don't already have it, sign up for a Docker account.


Docker images are usually built from Dockerfile recipes, but for the purposes of this demonstration we'll run a command on the downloaded image, create a new image based on the change, and then upload it to Docker Hub so it's available from the cloud.

Note that in the command line below, the -t parameter tags the image, allowing you to identify it easily. Also pay special attention to the hyphen at the end of the command, which tells docker build to read the Dockerfile from standard input.

"FROM microsoft/windowsservercore `nCMD echo Hello World!" | docker build -t mydockerid/windows-test-image -

After Docker has finished creating the new image, check the list of available images on the local server. You should see both microsoft/windowsservercore and mydockerid/windows-test-image in the list.

docker images

Now start the new image in a container, remembering to replace mydockerid with your Docker ID, and you should see Hello World! appear in the output:

docker run mydockerid/windows-test-image

Upload an image to Docker

Let's upload the image we just created to Docker Hub so it's available from the cloud. Log in with your Docker ID and password:

docker login -u mydockerid -p mypassword

Use docker push to upload the image created in the previous steps, replacing mydockerid with your Docker ID:

docker push mydockerid/windows-test-image

*nix systems natively implement multitasking and offer tools to isolate and control processes. Technologies such as chroot(), which provides isolation at the file system level, FreeBSD Jail, which restricts access to kernel structures, LXC, and OpenVZ have long been known and widely used. But the real impetus came from Docker, which made it convenient to distribute applications. Now the same has come to Windows.

Containers on Windows

Modern servers have performance to spare, and applications sometimes use only a fraction of it. As a result, systems sit idle part of the time, heating the air. The solution was virtualization, which allows several operating systems to run on one server, reliably separated from each other, with each allocated the resources it needs. But progress does not stand still. The next stage is microservices, where each part of an application is deployed separately as a self-sufficient component that can easily be scaled to the desired load and updated. Isolation prevents other applications from interfering with a microservice. With the advent of the Docker project, which simplified packaging and delivering applications together with their environment, the microservices architecture received an additional boost.

Containers are another type of virtualization, called OS virtualization, that provides a separate environment for running applications. Containers are implemented via an isolated namespace that includes all the resources (virtualized names) needed for work: files, network ports, processes, and so on. The application can interact with these resources but cannot go beyond them; the OS shows the container only what has been selected for it. The application inside the container believes it is the only one and runs in a full-fledged OS without restrictions. If it needs to change an existing file or create a new one, the container receives copies from the main host OS, keeping only the changed parts. Deploying multiple containers on a single host is therefore very efficient.

The difference between containers and virtual machines is that containers do not load their own copies of the OS, libraries, system files, and so on. The operating system is, in effect, shared with the container. The only extra requirement is the resources needed to run the application in the container. As a result, the container starts in a matter of seconds and loads the system less than a virtual machine would. Docker currently offers 180,000 applications in its repository, and the format has been unified by the Open Container Initiative (OCI). But the dependence on the kernel means that containers will not work on a different OS: Linux containers require the Linux API, so they will not run on Windows, and vice versa.

Until recently, Windows developers were offered two virtualization technologies: virtual machines and Server App-V virtual applications. Each has its own niche, with its own pros and cons. Now the range is wider: containers (Windows Server Containers) have been announced in Windows Server 2016. Although development had not yet finished as of TP4, it is already quite possible to see the new technology in action and draw conclusions. It should be noted that, while catching up, and with ready-made technologies at hand, Microsoft's developers went a little further in some respects, making containers easier and more versatile to use. The main difference is that two types of containers are offered: Windows containers and Hyper-V containers. In TP3, only the first type was available.

Windows containers use the same kernel as the OS, which is dynamically shared among them. The OS handles resource allocation (CPU, RAM, network), and you can optionally cap the resources available to a container. OS files and running services are mapped into the namespace of each container. This type of container uses resources efficiently, reducing overhead, and therefore allows applications to be placed more densely. This mode is somewhat reminiscent of FreeBSD Jail or Linux OpenVZ.

Hyper-V containers provide an additional layer of isolation using Hyper-V. Each container is allocated its own kernel and memory; isolation is enforced not by the OS kernel but by the Hyper-V hypervisor. The result is the same level of isolation as virtual machines, with less overhead than a VM but more than a Windows container. To use this type of container, the Hyper-V role must be installed on the host. Windows containers are better suited to trusted environments, such as when applications from the same organization run on a server. When a server is used by many companies and more isolation is needed, Hyper-V containers likely make more sense.

An important feature of containers in Win 2016 is that the type is chosen not at creation time but at deployment time. That is, any container can be launched either as a Windows container or as a Hyper-V container.
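In the preview-era PowerShell module this looked roughly as follows (a sketch: cmdlet parameters changed between Technical Previews, so -RuntimeType may differ in your build):

```powershell
# The same image deployed with two different isolation levels
New-Container -Name Web1 -ContainerImageName WindowsServerCore
New-Container -Name Web2 -ContainerImageName WindowsServerCore -RuntimeType HyperV
```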

In Win 2016, the Container Management stack abstraction layer is responsible for containers, implementing all the necessary functions. For storage, the VHDX hard disk image format is used. As with Docker, containers are stored as images in a repository. Each image saves not a complete set of data but only the differences between it and the base image, and at launch time all the necessary data is projected into memory. A Virtual Switch is used to manage network traffic between the container and the physical network.

The OS in the container can be Server Core or Nano Server. The first has long ceased to be a novelty and provides a high level of compatibility with existing applications. The second is an even more stripped-down version for headless operation, allowing the server to run in the smallest possible configuration for use with Hyper-V, file servers (SOFS), and cloud services. There is, of course, no graphical interface. It contains only the most necessary components (.NET with CoreCLR, Hyper-V, Clustering, and so on), yet takes up 93% less space and requires fewer critical fixes.

Another interesting point: to manage containers, in addition to the traditional PowerShell, you can also use Docker. To make it possible to run this non-native tool on Windows, Microsoft partnered with Docker to extend the Docker API and toolset. All of the work is open and available in the Docker project's official GitHub. The Docker management commands apply to all containers, both Windows and Linux, although, of course, a container created on Linux cannot be launched on Windows (nor vice versa). At the moment, PowerShell's functionality is limited and only allows you to work with a local repository.

Installing Containers

Azure has a ready Windows Server 2016 Core with Containers Tech Preview 4 image that you can deploy and use to explore containers. Otherwise, you need to set everything up yourself. For a local installation you need Win 2016, and since Hyper-V in Win 2016 supports nested virtualization, it can be either a physical or a virtual server. The component installation process is standard: select the appropriate item in the Add Roles and Features Wizard or, using PowerShell, issue the command

PS> Install-WindowsFeature Containers

In the process, the Virtual Switch network controller will also be installed. It must be configured immediately, otherwise further actions will produce errors. First, look at the names of the network adapters:

PS> Get-NetAdapter

To work, we need a switch of type External. The New-VMSwitch cmdlet has many parameters, but for this example minimal settings will do:

PS> New-VMSwitch -Name External -NetAdapterName Ethernet0

We check:

PS> Get-VMSwitch | where {$_.SwitchType -eq "External"}

The Windows firewall will block connections to the container, so you need to create an allow rule, at a minimum to be able to connect remotely using PowerShell remoting. For this we will allow TCP/80 and create a NAT rule:

PS> New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True
PS> Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.1.2 -InternalPort 80 -ExternalPort 80

There is another simple deployment option. The developers have prepared a script that allows you to install all dependencies automatically and configure the host. You can use it if you wish. The parameters inside the script will help you understand all the mechanisms:

PS> wget -uri https://aka.ms/tp4/Install-ContainerHost -OutFile C:\Install-ContainerHost.ps1
PS> C:\Install-ContainerHost.ps1

There is another option - to deploy a ready-made virtual machine with container support. To do this, on the same resource there is a script that automatically performs all the necessary operations. Detailed instructions are given on MSDN. Download and run the script:

PS> wget -uri https://aka.ms/tp4/New-ContainerHost -OutFile C:\New-ContainerHost.ps1
PS> C:\New-ContainerHost.ps1 -VmName WinContainer -WindowsImage ServerDatacenterCore

The name is arbitrary, and -WindowsImage indicates the type of image being built; options include NanoServer and ServerDatacenter. Docker is installed at the same time; the SkipDocker and IncludeDocker parameters control whether it is included. After launch, the image will be downloaded and converted, and during the process you will be asked to set a password for logging in to the VM. The ISO file itself is quite large, almost 5 GB. If your channel is slow, the file can be downloaded on another computer, renamed to WindowsServerTP4, and copied to C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks. We can then log in to the installed virtual machine with the password set during the build, and get to work.

Now you can go directly to the use of containers.

Using containers with PowerShell

The Containers module contains 32 PowerShell cmdlets. The documentation for some of them is still incomplete, although, in general, it is enough to make everything work. The list is easy to get:

PS> Get-Command -module Containers

You can get the list of available images with the Get-ContainerImage cmdlet, and the list of containers with Get-Container. For a container, the Status column shows its current state: stopped or running. While the technology is under development, MS has not provided a public repository and, as mentioned, PowerShell works with a local repository, so you will have to create one yourself for experiments.
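For example (column names and output shapes vary between preview builds):

```powershell
PS> Get-ContainerImage                          # images in the local repository
PS> Get-Container | Format-Table Name, State    # containers and their current status
```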

So, we have a server with container support; now we need the containers themselves. To do this, we install the ContainerProvider package provider.


Containers in Microsoft Windows Server 2016 extend the technology to Microsoft's customers, who can now develop, deploy, and host containerized applications as part of their development process.

As application deployment rates continue to increase, with customers deploying new application versions daily or even hourly, the ability to rapidly move an application from the developer's keyboard into production is critical to business success. Containers accelerate this process.

While VMs make it possible to move applications across datacenters and to and from the cloud, containers unlock further efficiency through OS-level virtualization, enabling fast application delivery.

Windows container technology includes two different types of containers: Windows Server Containers and Hyper-V Containers. Both types are created, managed, and function in the same way; they even produce and consume the same container images. They differ in the level of isolation created between the container, the host operating system, and all other containers running on the host.

Windows Server Containers: Multiple container instances can run concurrently on a host with isolation provided through namespace, resource management, and process isolation technologies. Windows Server Containers share the same kernel that resides on the host.

Hyper-V Containers: Multiple container instances can run concurrently on a host; however, each container runs inside a dedicated virtual machine. This provides kernel-level isolation between each Hyper-V container and the host.
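With the Docker tooling, the same choice is made at run time via the --isolation flag (a sketch; hyperv isolation requires the Hyper-V role on the host):

```powershell
# Default on Windows Server: process isolation (Windows Server Container)
docker run microsoft/windowsservercore cmd /c echo hello
# The same image inside a Hyper-V utility VM
docker run --isolation=hyperv microsoft/windowsservercore cmd /c echo hello
```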

Microsoft has included in the container feature a set of Docker tools to manage not only Linux containers but also Windows Server and Hyper-V containers. As part of collaboration with the Linux and Windows communities, the Docker experience has been extended by creating a PowerShell module for Docker, which is now open source. The PowerShell module can manage Linux and Windows Server containers locally or remotely using the Docker REST API. We are excited to innovate for customers through open source development of our platform, and going forward we plan to keep bringing such technology to customers alongside innovations like Hyper-V.
