Hyper-V Management: Mistakes You’ll Never Make Again Part 1

July 30, 2018 | By David

Virtualization technology is complex, which means the learning curve for Hyper-V management can be quite steep for administrators. In this article, we review a number of common mistakes that Hyper-V administrators struggle with in their day-to-day operations. We also suggest remedies and optimization techniques to help you significantly improve your Hyper-V environment.

Mistake #1 – Imbalance of Memory and CPUs


It’s nearly certain that you’ll exhaust memory resources long before pegging your CPUs. Don’t be that person who buys a 24-core CPU box with only 64 GB of RAM. Remember, you can’t share memory: the memory a VM consumes cannot be used by another VM, but CPU resources are shareable. Cores aren’t dedicated, so a few physical cores can quite nicely handle the load from multiple VMs.

If practical, run Performance Monitor traces against the systems that you want to place into your new virtual environment. Talk to experts, read benchmarking literature, and connect with the manufacturers of the applications you’ll be virtualizing. Also, keep in mind that Microsoft recommends no more than 12 virtual CPUs per physical core.
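If you want a quick baseline before sizing the host, a short PowerShell sketch like the one below can capture the relevant Performance Monitor counters on an existing workload server. The sample interval, duration, and output path are illustrative choices, not prescriptions:

    # Sample CPU and memory counters every 15 seconds for one hour,
    # then save the data for later sizing analysis in Performance Monitor.
    $counters = '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes',
                '\Memory\Committed Bytes'
    Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
        Export-Counter -Path 'C:\Perf\baseline.blg' -FileFormat BLG

Run it during a representative busy period; an idle-weekend trace will undersell the workload.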

Mistake #2 – Imbalance of Network Throughput with Storage Capacity

It’s common for folks to significantly overestimate their storage speed requirements while also overestimating the actual performance of their storage systems. Popular tech wisdom maintains that RAID-10 is the fastest RAID build, so they put maybe eight disks in a RAID-10 array and then grossly overspend to connect it with dual 10GbE adapters. While workable, it’s a lopsided configuration: the 10GbE pair is terribly underutilized.

It’s much more important to focus on proper disk sizing than on CPU, because storage costs tend to escalate fast. Take time to estimate your actual I/O requirements in advance. If you’re unsure, keep in mind that a database executing several hundred transactions per minute needs high I/O throughput; most other applications don’t. While your total disk requirements might be substantial, 14 VMs that each average a load of 30 IOPS won’t come close to stressing four disks spinning at 15K RPM.
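The arithmetic behind that claim is worth making explicit. Here’s a back-of-the-envelope sketch using the figures from the paragraph above and a ballpark of roughly 180 IOPS per 15K spindle (an assumption, not a vendor spec):

    # Rough IOPS demand vs. supply, with illustrative numbers.
    $required  = 14 * 30     # 14 VMs at ~30 IOPS each = 420 IOPS
    $available = 4 * 180     # four 15K disks at ~180 IOPS each = ~720 IOPS
    "Demand: $required IOPS; raw capability: ~$available IOPS"

RAID write penalties will eat into that raw number, but the margin is still comfortable. If you’d rather measure than assume, running Get-Counter against '\PhysicalDisk(_Total)\Disk Transfers/sec' on the source systems gives you real per-server IOPS.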

Mistake #3 – Imbalance of SSD and Spinning Disks

SSD is fantastic for virtualization loads: blazing speed and low latency eliminate performance concerns in environments that exhibit constant, disparate access to multiple virtual machines. But you’ve got to pay to play with SSD. A good compromise is to configure a hybrid SSD-and-spinning-disk array.

Unfortunately, SSDs are often used inappropriately in virtualized environments. In nearly all cases, it’s bad practice to install Hyper-V Server on the SSDs and put the VMs on spinning disks. After startup, the Hyper-V Server will churn its disk incessantly for a while, but then it will taper to idle. It’s your virtual machines that need most of the disk I/O, so put your SSDs to work holding the VM data.
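If your VM files ended up on the wrong tier, relocating them is a one-liner with storage migration. A minimal sketch, assuming D:\VMs is the SSD-backed volume and the VM name is a placeholder:

    # Move a VM's configuration and virtual disks onto the SSD volume.
    Move-VMStorage -VMName 'SQL01' -DestinationStoragePath 'D:\VMs\SQL01'

Storage migration works while the VM is running, so there’s no need to schedule an outage for the move.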

Mistake #4 – Imbalance of Networking Resources

Long ago, in tech time, you might’ve built a standalone Hyper-V host with two network cards. A former best practice was to dedicate one NIC to the management OS and allocate all the VMs to the other. On 2008 R2 and earlier releases, you either went with this configuration or built a team with the NIC manufacturer’s software and risked going off-support. Today, with native teaming available, there’s a better option.

Another common configuration is one in which the VMs are given one or two physical adapters while another physical adapter handles only CSV traffic. This is no longer necessary, either. Beginning with 2012 R2, there’s no need for a dedicated CSV network, provided you ensure adequate bandwidth for all cluster communications.
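A converged design along these lines is straightforward to sketch in PowerShell. Everything below is illustrative: the NIC names, adapter names, and bandwidth weights are assumptions you’d tune for your own environment:

    # Team two physical NICs, build one weighted virtual switch on the team,
    # then carve out host virtual adapters for each traffic class.
    New-NetLbfoTeam -Name 'HostTeam' -TeamMembers 'NIC1','NIC2' -Confirm:$false
    New-VMSwitch -Name 'Converged' -NetAdapterName 'HostTeam' `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    Add-VMNetworkAdapter -ManagementOS -Name 'Management'    -SwitchName 'Converged'
    Add-VMNetworkAdapter -ManagementOS -Name 'Cluster'       -SwitchName 'Converged'
    Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'Converged'

    # Weights are relative shares, not hard caps; idle bandwidth stays borrowable.
    Set-VMNetworkAdapter -ManagementOS -Name 'Management'    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name 'Cluster'       -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -MinimumBandwidthWeight 30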

It’s all too easy to mangle Hyper-V networking. You’ll do well to grab some coffee and think carefully about providing adequate bandwidth to your virtualized applications. Then configure everything else in subordination to that critical objective.

Mistake #5 – Improper Focus of Resources

No matter how insistent they may be, don’t listen to anyone who tells you to use fixed VHDXs for everything. It’s awful advice. OK, sure, fixed disks are marginally faster, and you won’t need to worry about accidental storage over-provisioning. But it’s wasteful, and some simple math will keep you from over-provisioning while reclaiming plenty of disk space. If you provision each VM with a C: drive for the guest OS and separate VHDX files for data, that C: drive is an ideal candidate for a dynamically expanding disk. Set it to, say, 60 GB, and it will likely stay well under 40 GB forever. With 20 virtual machines, that’s a minimum of 400 GB of savings. If you insist on fixed VHDX instead, you now know there will be a minimum of 400 GB of wasted disk space.
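Creating the dynamically expanding disk and verifying the savings are both quick. The path and the 60 GB figure below are illustrative:

    # Create a dynamically expanding 60 GB OS disk for a guest.
    New-VHD -Path 'D:\VMs\Web01\Web01-C.vhdx' -SizeBytes 60GB -Dynamic

    # Compare what the guest sees (Size) with what the host actually spends (FileSize).
    Get-VHD -Path 'D:\VMs\Web01\Web01-C.vhdx' |
        Select-Object VhdType,
            @{ n = 'ProvisionedGB'; e = { $_.Size / 1GB } },
            @{ n = 'ConsumedGB';    e = { [math]::Round($_.FileSize / 1GB, 1) } }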

This is only one example, but you can apply similar logic to other aspects before designing your virtualization deployment. Think carefully, determine where the bottlenecks are likely to be, and focus there.

Mistake #6 – Creating Excessive Networks and Virtual Adapters

In a standalone Hyper-V system, the management OS needs to reside on only one network, using a single IP. Preferably, it will have gateway access, though your security requirements may make this optional. The IP should have an entry in your DNS so the host is reachable by name. It’s important to realize that this IP need not have anything in common with any guest whatsoever; it can be on a different subnet or VLAN. It’s also unnecessary to create an IP for the management OS in each VLAN or subnet that your guests use. Don’t create a virtual adapter for any virtual machine traffic, since your guests won’t be routing traffic through the management OS. Of course, you may need to configure network connectivity for the management OS to reach iSCSI resources.
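Configuring that single management vNIC is simple. The VLAN ID, adapter name, and addresses below are placeholders for your own values:

    # Pin the management OS adapter to its own VLAN, separate from any guest.
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' `
        -Access -VlanId 20

    # Give it one IP and a DNS server so the host is reachable by name.
    New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' `
        -IPAddress 192.168.20.10 -PrefixLength 24 -DefaultGateway 192.168.20.1
    Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' `
        -ServerAddresses 192.168.20.5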

Mistake #7 – Creating Excessive Virtual Switches

It’s likely that you only need a single virtual switch. There are only a few contexts that call for multiple virtual switches, such as when some guests need to sit in a DMZ while others remain on the corporate network. For the DMZ guests, it’s good practice to use dedicated physical switches together with a dedicated virtual switch. In most other cases, VLANs or network virtualization will probably provide adequate isolation.
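On a single switch, per-VM VLAN assignment is all the isolation most environments need. A minimal sketch with made-up VM names and VLAN IDs:

    # Corporate guests and DMZ guests share one virtual switch but never one VLAN.
    Set-VMNetworkAdapterVlan -VMName 'App01' -Access -VlanId 10
    Set-VMNetworkAdapterVlan -VMName 'Dmz01' -Access -VlanId 99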

Mistake #8 – Attempting to Optimize Page Files

A page file lets the OS commit more memory than the physical RAM available to it by spilling rarely touched pages to disk. The OS tends to reserve memory that goes mostly unused. If your page file has such a performance dependency that you think too much about its location, then page file memory is in regular use, which means something went screwy way upstream or some other condition changed. Keep in mind that a hypervisor’s page file will see near-zero usage, so you only need to ensure there’s adequate space for it.
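Verifying that claim takes one counter. If the figure below hovers near zero on the host, page file placement is a non-issue:

    # Sample host page file usage for one minute; near-zero means don't optimize it.
    Get-Counter -Counter '\Paging File(_Total)\% Usage' -SampleInterval 5 -MaxSamples 12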


Originally published at 5nine.

About the Author

Robert Corradini is a two-time Microsoft Cloud and Datacenter MVP with over 20 years of experience managing cloud and datacenter technologies. He is currently the Director of Product Management at 5nine and focuses on bringing world-class cloud security and management solutions to market.