The hypervisor is the part of virtualization software that creates and monitors the virtual machines running on a host.

A host virtual machine is the server component of a virtual machine (VM), the underlying hardware that provides computing resources to support a particular guest virtual machine (guest VM). Together, the host virtual machine and guest virtual machine make up a virtual machine server.

The host virtual machine explained

A host VM is an important virtualization technique. It allows users to create a guest VM with a user operating system (OS), applications and compute resources. Simply put, the host machine is the machine that creates the guest VM.

The host VM draws its compute power from the underlying host server. Its data is stored in a virtualization environment that integrates with the virtualization manager application.

Every host machine is assigned infrastructure resources that can be scaled statically or dynamically, along with security policies and statistical information about its performance and throughput. The virtualization server makes these assignments to manage the operation of all host machines.

Multiple host VMs can run on a single physical server, and each can execute without affecting the operation of the others. Because each VM's environment is isolated from the rest of the system, anything running inside a VM does not interfere with anything else running on the host hardware.

The host VM is the server component -- a physical machine -- of the virtual machine in a virtual architecture.

Host virtual machine vs. guest virtual machine

The host VM and guest VM are the two components of a VM. The host VM is hosted and operated from a remote cloud server. Its functionalities are similar to those of a guest VM. However, the host VM is accessed differently from a guest VM.

The guest VM is an independent instance of an OS and associated software and information. It is also known as a guest computer, guest OS or, simply, guest.

The host VM is the physical machine that provides the guest VM with computing hardware resources, such as:

  • Processing power
  • Memory
  • Disk
  • Network I/O (input/output)
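On a KVM/libvirt host, for example, the resources allocated to a guest can be inspected with `virsh`. This is a sketch; `guest01` is a hypothetical domain name and assumes a libvirt-managed host:

```shell
# Inspect the compute resources the host has allocated to a guest VM
# (assumes a libvirt-managed KVM host and a guest domain named "guest01").
virsh dominfo guest01      # vCPU count, allocated memory, run state
virsh domblklist guest01   # virtual disks backed by host storage
virsh domiflist guest01    # virtual NICs carrying the guest's network I/O
```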

A guest VM can exist on a single physical machine but is usually distributed across multiple hosts for load balancing. Thus, one or more guest VMs can run on a single physical host machine. A guest VM can also be moved between host servers to accommodate demand fluctuations or to use resources more efficiently. Similarly, a host VM may exist as part of the resources of a single physical machine or as smaller parts of the resources of multiple physical machines.

A virtual machine monitor (VMM), or hypervisor, intermediates between the host and guest VM. Moreover, it isolates individual guest VMs from one another and makes it possible for a host to support multiple guests running different operating systems.

Architecture of the host VM

The architecture of a host VM includes the following components:

  • The hardware layer, or the host VM layer
  • The OS and hypervisor layer, acting as the intermediary
  • The VM guest layer, along with host utilities

The hardware layer contains the physical network interface cards (NICs), memory, CPU and Ethernet management port. The NICs support hardware virtualization based on SR-IOV (single root I/O virtualization), an approach in which virtual functions are managed by the guest OS while the host VM manages the physical NICs.
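On a Linux host, SR-IOV virtual functions are typically carved out through sysfs. A sketch, assuming `eth0` is the physical NIC's interface name and that the NIC, its driver and the system firmware all support SR-IOV:

```shell
# Check and enable SR-IOV virtual functions on a physical NIC
# ("eth0" is a hypothetical interface name; SR-IOV support is assumed).
cat /sys/class/net/eth0/device/sriov_totalvfs    # maximum VFs the NIC supports
echo 4 > /sys/class/net/eth0/device/sriov_numvfs # carve out 4 virtual functions
lspci | grep -i "virtual function"               # VFs appear as PCI devices
```

Each virtual function can then be passed through to a guest VM, which drives it directly while the host retains control of the physical NIC.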

Setting up a virtual machine server or the host VM

Like physical machines, each VM needs RAM and CPU; the host server also requires sufficient storage. To set up a virtual machine server:

  1. Ensure sufficient RAM (as much fast RAM as one can get) and CPU with as many cores as possible.
  2. Consider Serial ATA (SATA) or Serial Attached SCSI (SAS) drives to store virtual server images.
  3. Use virtualization software (hypervisors), such as Hyper-V, VMware or Citrix XenServer to set up the VM server.
  4. Choose the right network connection you want to set up for the VM, the guest OS you want to run and the location where the VM's files will be stored.
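On a KVM host, the steps above can be sketched with `virt-install`. The names, sizes and paths below are illustrative assumptions, not fixed values:

```shell
# Create a guest VM with virt-install on a KVM host (a sketch;
# the name, sizes and paths below are illustrative assumptions).
virt-install \
  --name guest01 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=40,path=/var/lib/libvirt/images/guest01.qcow2 \
  --cdrom /var/lib/libvirt/images/install.iso \
  --network bridge=br0 \
  --os-variant generic
# --memory is in MiB and --disk size in GiB; --cdrom points at the guest
# OS installation media; --network chooses the connection the guest uses;
# --disk path sets where the VM's files will be stored.
```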


There are two main hypervisor types, referred to as “Type 1” (or “bare metal”) and “Type 2” (or “hosted”). A type 1 hypervisor acts like a lightweight operating system and runs directly on the host’s hardware, while a type 2 hypervisor runs as a software layer on an operating system, like other computer programs. 

The most commonly deployed type of hypervisor is the type 1 or bare-metal hypervisor, where virtualization software is installed directly on the hardware where the operating system is normally installed. Because bare-metal hypervisors are isolated from the attack-prone operating system, they are extremely secure. In addition, they generally perform better and more efficiently than hosted hypervisors. For these reasons, most enterprise companies choose bare-metal hypervisors for data center computing needs. 
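Before installing a bare-metal hypervisor such as KVM, it is worth confirming that the CPU exposes hardware virtualization extensions. A quick check on a Linux host (a sketch; output depends on the machine):

```shell
# Verify the host CPU exposes hardware virtualization extensions
# (Linux host assumed; vmx = Intel VT-x, svm = AMD-V).
grep -E -c '(vmx|svm)' /proc/cpuinfo   # non-zero count: extensions present
lsmod | grep kvm                       # KVM kernel modules, if already loaded
```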

While bare-metal hypervisors run directly on the computing hardware, hosted hypervisors run on top of the operating system (OS) of the host machine. Although hosted hypervisors run within the OS, additional (and different) operating systems can be installed on top of the hypervisor. The downside of hosted hypervisors is that latency is higher than bare-metal hypervisors. This is because communication between the hardware and the hypervisor must pass through the extra layer of the OS. Hosted hypervisors are sometimes known as client hypervisors because they are most often used with end users and software testing, where higher latency is less of a concern. 

Hardware acceleration technology can create and manage virtual resources faster by boosting processing speed for both bare-metal and hosted hypervisors. A type of hardware accelerator known as a virtual Dedicated Graphics Accelerator (vDGA) takes care of sending and refreshing high-end 3-D graphics. This frees up the main system for other tasks and greatly increases the display speed of images. For industries such as oil and gas exploration, where there is a need to quickly visualize complex data, this technology can be very useful.

Both types of hypervisors can run multiple virtual servers for multiple tenants on one physical machine. Public cloud service providers lease server space on the different virtual servers to different companies. One server might host several virtual servers that are all running workloads for different companies. This type of resource sharing can result in a “noisy neighbor” effect, when one of the tenants runs a large workload that interferes with the server performance for other tenants. It also poses more of a security risk than using a dedicated bare-metal server. 

A bare-metal server that a single company has full control over will always provide higher performance than a virtual server that is sharing a physical server’s bandwidth, memory and processing power with other virtual servers. The hardware for bare-metal servers can also be optimized to increase performance, which is not the case with shared public servers. Businesses that need to comply with regulations that require physical separation of resources will need to use their own bare-metal servers that do not share resources with other tenants.

Container technology, orchestrated by platforms such as Kubernetes, is similar to virtual machines in that both run isolated applications on a single platform. While virtual machines virtualize the hardware layer to create a “computer,” containers package up just a single app along with its dependencies. Virtual machines are often managed by a hypervisor, whereas container systems provide shared operating system services from the underlying host and isolate the applications using virtual-memory hardware.

A key benefit of containers is that they have less overhead compared to virtual machines. Containers include only the binaries, libraries and other required dependencies, and the application. Containers that are on the same host share the same operating system kernel, making containers much smaller than virtual machines. As a result, containers boot faster, maximize server resources, and make delivering applications easier. Containers have become popular for use cases such as web applications, DevOps testing, microservices and maximizing the number of apps that can be deployed per server.

Virtual machines are larger and slower to boot than containers. They are logically isolated from one another, with their own operating system kernel, and offer the benefits of a completely separate operating system. Virtual machines are best for running multiple applications together, monolithic applications, isolation between apps, and for legacy apps running on older operating systems. Containers and virtual machines may also be used together.
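The kernel-sharing difference is easy to see in practice. A sketch, assuming Docker is installed and the resource limits are illustrative:

```shell
# A container starts in seconds because it shares the host kernel
# (assumes a Docker installation; image and limits are illustrative).
docker run --rm --memory 256m --cpus 1 alpine uname -r
# Prints the *host's* kernel version: the container has no kernel of its
# own, unlike a VM, which boots a complete guest operating system.
```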