With the release of Windows Server 2012, Hyper-V 3.0, and the new System Center suites, Microsoft has truly developed a hypervisor solution that can rival the entire feature set of VMware's vSphere platform. I tend to prefer and recommend Hyper-V over VMware for a few reasons, the most prominent being cost and availability. It’s true that in the past Hyper-V's feature set was not as strong as VMware's, but for most SMBs the feature set was indeed enough to satisfy their business and technical requirements. The cost of VMware could often only be justified in larger organizations with bigger IT budgets and more demanding requirements.
Things have now changed. The gap between Hyper-V and VMware has narrowed to the point where it is now virtually non-existent. True, VMware provides a solid and mature hypervisor platform, but many organizations are now seriously considering Hyper-V over VMware due to its newly bolstered feature set, low cost, and overall ubiquity.
One of the newest features in Hyper-V 3.0 that I've worked with lately is Live Migration. For those who are familiar with Hyper-V Clusters, this concept is not new, since Live Migration is one of the principal reasons to implement a Hyper-V Cluster. For those not familiar with clusters, some clarification is in order as to what the new Live Migration feature entails in a typical setup: that is, a setup with a few standalone Hyper-V hosts and no shared storage.
When you’re working in a cluster and initiate a Live Migration, ownership of the virtual machine and its working memory are simply transferred from one node to another (see: http://trigon.com/tech-blog/bid/35259/Microsoft-Virtualization-Part-2-Live-Migration). Depending on the RAM consumption of the VM, this could take seconds or just a few minutes. With the use of shared storage, the VHDs don't have to move, so it's basically a quick process with no downtime incurred.
In order to avoid confusion, a distinction should be made between a Live Migration in a clustered/shared-storage environment and a Live Migration in a standalone environment. In Hyper-V 3.0, and in our "typical" scenario, a Live Migration is essentially a Live Storage Migration. This Live Migration does indeed keep the virtual machine online, but it copies the entire VHD and VM configuration files to the new host, not just ownership of the VM and the working memory state. It is a much more time-consuming process, but allowing administrators to perform this type of Live Migration directly from Hyper-V Manager adds a great deal of flexibility without the need for an expensive or complex infrastructure.
It is important to note that Live Migration in a “typical” scenario isn't very scalable. Even if you have a dedicated 10 Gb Live Migration network, if you have 30 VMs you want to migrate, that is at least 30 VHDs that have to be fully copied over the network. Yes, you could schedule them through PowerShell and get it done, but it could quickly become a cumbersome process. The moral of the story is to ensure you do not confuse the capabilities and function of a Live Migration in differing scenarios. Also, having Live Migration capabilities at your disposal right out of the box does not negate the need for a Hyper-V Cluster in order to provide High Availability for your virtual machines.
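To put that in perspective, here is a rough back-of-the-envelope sketch of the sequential copy time (the VHD sizes and link-efficiency factor below are illustrative assumptions, not measurements):

```python
# Rough estimate of how long it takes to copy many VHDs sequentially
# over a dedicated migration network. The sizes and the efficiency
# discount are hypothetical assumptions, not measurements.

def copy_time_hours(vhd_count, vhd_size_gb, link_gbps, efficiency=0.7):
    """Total hours to copy vhd_count VHDs of vhd_size_gb each.

    efficiency discounts the raw link rate for TCP/protocol overhead.
    """
    total_gigabits = vhd_count * vhd_size_gb * 8   # data to move, in Gb
    effective_rate = link_gbps * efficiency        # usable Gb/s
    return total_gigabits / effective_rate / 3600  # seconds -> hours

# 30 VMs with 100 GB VHDs over a dedicated 10 Gb link: roughly an hour
# of sustained, best-case copying -- and real VHDs are often far larger.
print(round(copy_time_hours(30, 100, 10), 2))
```

Even under these optimistic assumptions, serial storage migrations tie up the network for a long stretch, which is exactly why this approach does not scale the way a shared-storage cluster does.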
There are many obvious benefits to server virtualization and consolidation. This technology allows mid-sized companies in the Philadelphia and Central Pennsylvania region to maximize the investment in their current hardware. It also reduces energy consumption and cooling needs in the data center. Cost savings are the most common result of virtualization.
However, a very welcome byproduct of the virtualization trend, whether it’s Hyper-V, VMware or another technology, is disaster recovery capability. This benefit was historically reserved for large enterprise environments. Due to the lowered cost of storage, the purchase of Storage Area Networks (SANs) is no longer out of reach for mid-sized businesses. Microsoft Hyper-V 2012 has also made disaster recovery more attainable for the mid-market. It appears that Microsoft has paid attention to their customers’ needs with the recent development of Hyper-V Replica.
Microsoft has built in the ability to create an exact replica of your server environment on separate hardware. The hardware can be placed onsite or, ideally, at another location in order to provide for “true” disaster recovery. This functionality is included with Windows Server 2012, so no additional software cost is incurred.
Some of the shining features of Hyper-V Replica are as follows:
- Not dependent on server hardware
- Built for standard WAN links
- Works with clusters
- Provides failover broker
- Asynchronous replication
- Point in time failover
These features allow flexibility in how a disaster recovery solution is designed. The ability to fail over between hosts is no longer dependent on identical hardware. Failover can be enabled to much less expensive machines with slower disks, which is optimal for offsite disaster recovery to another location. Built-in compression and asynchronous replication allow the data replication to work over standard and readily available WAN links. As a result, expensive highly-provisioned bandwidth links are not required to handle the replication. A replication and failover broker handles the tasks of failing over to another Hyper-V server dynamically (i.e. without user intervention) in the event of a disaster.
All of the aforementioned features used to be out of reach for mid-sized businesses due to prohibitive costs. Until the latest Hyper-V 2012 release, a company would have been required to purchase a costly SAN for both locations and pay for advanced implementation services in order to configure replication and failover in the case of a disaster. With the addition of the Hyper-V Replica capability in the new version, two SANs are not required. A single server with sufficient disk space can function as the recovery server at the remote data center. After initial seeding of data between sites, only changes in data need to be replicated across the WAN. Realistically, your business could be up and running in minutes, instead of days or weeks, if your primary site is lost.
Your organization may already have all of the components required in order to implement a full disaster recovery solution. Trigon can provide a brief analysis of your environment, develop a plan and implement a Hyper-V Replica solution to protect your organization from a disaster. If you are in the Philadelphia or Central PA region, contact us today to find out how we can assist you!
- by Jon, "GingerBread", Pentecost
In my first blog about virtualizing your small business, I talked about consolidating your physical hardware from a larger number of servers to only a few, reducing the energy costs of running them, and easily adding storage space. In this post, I am going to review some of the other benefits.
The ability to move a virtual server to another host with minimal downtime is another great reason to virtualize. Say you have two host servers, both running Hyper-V, and the host holding a VM needs repairs or maintenance. You can simply shut down the VM, export its configuration, move the exported configuration file and .vhd files to the other host, import the configuration with the .vhd files in the correct location, and then start up the VM while the original host is offline for repairs or maintenance.
Better yet, why not build a failover cluster of the host servers with shared storage? Then you could live migrate the VM without needing to shut it down or incur any downtime. You can then complete the repairs or maintenance on the one host with no interruption at all to any network services.
You can also set up virtual workstations for applications that must always run on a workstation logged in as a particular user. Rather than dedicating a physical workstation to that purpose, you can set up a virtual workstation and deploy the physical workstations to actual users. This reduces the number of physical workstations you need to have running (again, think of the energy costs you can save with fewer machines running), and you can still connect to the virtual workstation the same way you would a workstation in the server room that is connected only to power and a network cord.
Contact us to talk about our Philadelphia Area Information Technology Solutions or our Virtualization Solutions for your small and medium size businesses.
- by Dan "I make 'A Christmas Carol' References Look Easy", Rodden
What is my biggest pet peeve in Information Technology? Hardware dependence. Hardware dependence is an extremely intrusive variable, so integral to our systems that it is difficult to work around when planning or fixing a network. Hardware dependence is the reason that you still have a Windows NT 4.0 workstation operating the plotter. Hardware dependence is the reason your old fax server hasn’t been decommissioned, or the reason you are still using a server that requires a physical piece of equipment to be installed to operate your Line of Business application. These restrictions weigh us down like the chains of Jacob Marley, forcing compromises in new business developments and the abandonment of updates that would improve efficiency and reduce work.
Well, the future is now, and it is time to bask in the glory of the physically independent virtual world. With products like Microsoft Hyper-V, not only can you maximize your hardware investments, but you can rest easy knowing that if something happens to that hardware, it is easily replaced with completely different hardware, and your virtual infrastructure won’t know the difference once it is booted again. This makes the potential for disaster recovery very real: because the systems are already virtualized, they are easily transferrable to a remote location, and the time spent making them bootable in the new environment is severely minimized, if not eliminated.
In addition to the disaster recovery potential, virtualizing your infrastructure also affords the benefit of maximizing your resource utilization. When you virtualize your servers, you can get the most out of your hardware with technologies like dynamic disks and dynamic memory, which provide the virtual machines with more hard drive space and more RAM as needed. This allows you to operate your infrastructure between the minimum and the recommended requirements, because your servers will dynamically adjust their resources, allowing the virtual servers to “trade” resource allocation as demand between them fluctuates.
What are you waiting for? Make everyone’s life easier and get the most out of your investments – start virtualizing.
- by Jon, "Gingerbread", Pentecost
In the past, each server would be on its own box because hardware was cheap and there weren’t many technologies around that could efficiently run multiple servers on a single system. You might have had three or four servers, each doing a separate job (one for Active Directory, DNS, and DHCP, another for e-mail, and a third for utility/maintenance or specialized tasks like a database).
In today’s environment, you could have only two physical servers, each running three virtual systems that replicate between the hosts, plus a fourth virtual machine you can use for testing up-and-coming technologies without the need to purchase additional hardware. Let’s explore just one example of how things used to be and how things are today.
Let’s say that you have your typical three servers. There is a problem, and the motherboard on your main AD DS server gets fried. Depending on your configuration, there may be minimal issues with users logging in if one of your other servers also has AD running, but all of their files are stored on the drives in the main server – big problem. You have to wait until you can get a replacement motherboard and install it as soon as it arrives – a day and a half of downtime, minimum.
Now use the same scenario, except that you have two Hyper-V servers and all three of your servers are virtualized and properly replicating between the Hyper-V hosts prior to the motherboard failure. Only a few system changes to point to alternate locations are needed (or none at all – in a complete replication scenario using DFS and an Exchange DAG, there is full failover capability) and users are back up in minutes. You can now casually order and install the motherboard in the failed server, as there is little downtime for your users.
Now let’s go over a couple of environmental issues (especially with “Going Green” being a big push now). How much power do all three or four of those servers use just to stay on, let alone the battery backup units that power them and the air conditioning that keeps them cool? With only two servers, less power is used and less heat is generated, so less air conditioning is needed. Most likely, the old servers are also not running as efficiently as a new server would. Need I go on?
One other great benefit of virtualizing your environment is that if you need to add storage space, it is much easier to expand the size of a .vhd than to expand a RAID configuration. Just add another drive or two to the host and then add that space to the .vhd of the server that is running out.
There are many more benefits that we don’t have time to go into, but if you are thinking of replacing at least one of your servers, maybe now is the time to start thinking of going virtual? Contact us to review our Small Business Solutions for Philadelphia or talk with us about our Philadelphia Area Managed IT Services Programs.
- by Andrew, "I've Never Met A Contest I Didn't Like", Levin
The release of Windows Server 2008 R2 SP1 has added a great new feature to the Hyper-V role simply called "Dynamic Memory." This feature allows your virtual machines to use memory much more efficiently than ever before. Prior to this release, Hyper-V hosts did not support any type of memory oversubscription for your VMs. Basically, if you had 12 GB of RAM (excluding the host reserve) with 3 VMs requiring 4 GB each, you were strictly limited to 3 VMs on that host. Now, say those VMs only ever topped out at 3 GB each, or only needed 4 GB on very rare occasions. Either way, you would have 3 GB of wasted RAM. Processor oversubscription was always possible, but memory was always a distinct and finite resource.
I'd have to say that when I first saw Dynamic Memory in action, I was very impressed. When I first heard about its pending release a few months ago, I assumed it just meant you could manually add or remove assigned memory on a VM without shutting it down. So, if you saw your VM was topping out at 3 GB, you could simply provision it another GB. This assumption was incorrect, and as it happens, the Dynamic Memory feature in SP1 goes much further than this. The main efficiency gain here is that Hyper-V can now automatically allocate memory to VMs on an as-needed basis. Your 12 GB host will therefore provision memory to its VMs based on the demand of the workloads and applications running within each VM. You can set buffers and quotas for each VM, but the end result is that memory is provisioned and used only when it is needed. This completely eliminates the need to assign, and waste, large amounts of memory when planning for potential peak usage.
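As a rough mental model, the allocation behaves something like the following sketch (the formula and all numbers are simplified assumptions for illustration; Hyper-V's actual balancer uses internal memory-pressure metrics, not this exact calculation):

```python
# Simplified model of demand-based memory allocation with a per-VM
# buffer and a hard maximum, in the spirit of Hyper-V Dynamic Memory.
# The formula and values are illustrative assumptions, not the real
# hypervisor algorithm.

def assign_memory(demand_mb, buffer_pct, startup_mb, maximum_mb):
    """Grant a VM its current demand plus a safety buffer,
    never below its startup allocation or above its maximum."""
    target = int(demand_mb * (1 + buffer_pct / 100))
    return min(maximum_mb, max(startup_mb, target))

# A VM configured with 512 MB startup, 4096 MB maximum, 20% buffer:
print(assign_memory(3000, 20, 512, 4096))  # busy: 3600 MB granted
print(assign_memory(100, 20, 512, 4096))   # idle: floor at 512 MB
print(assign_memory(5000, 20, 512, 4096))  # spike: capped at 4096 MB
```

The point of the buffer is that the VM always has a little headroom above its measured demand, while the maximum keeps one greedy workload from starving its neighbors on the same host.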
We all know that virtualization can already substantially decrease the number of physical servers required in your datacenter, and now Dynamic Memory can improve that consolidation ratio even further. Gaining efficiencies is the number one goal of virtualization, and Dynamic Memory is just another aspect that helps break your environment away from physical barriers. And, as always, Trigon is here to help you get the most out of your IT investment, so give us a call today!
Over the last few years server virtualization within enterprise network environments has been quickly gaining popularity. Harnessing the power of virtualization creates truly dynamic datacenters which can effectively respond to an organization’s needs. In response to the desire for greater flexibility and agility, Microsoft has added Live Migration to the R2 release of Windows Server 2008. Live Migration essentially allows an administrator to transfer a running virtual machine from one physical host to another physical host with no perceived downtime.
In order to take advantage of the Live Migration feature, there are a few prerequisites. First off, your organization needs to implement some form of shared storage, i.e. an iSCSI or Fibre Channel SAN, on which to store your virtual machine files. This is important because shared storage allows Live Migration to transfer only the memory state and ownership of the target VM, as opposed to an entire VHD file.
The next step is to configure the Failover Clustering feature. Failover Clustering can be configured at the host or application level. A Hyper-V host-level failover cluster means that the VMs themselves are made highly available, as opposed to just the applications they host. Guest OS failover clusters between VMs are used to maintain the high availability of applications within the VM, like SQL or Exchange for example. In our scenario, we are going for a host-level cluster because we want to transfer the entire VM, not just a single service.
The way Live Migration actually works is pretty interesting. Once your VMs are configured to be highly available, Live Migration can be initiated from the Failover Cluster Manager. Once invoked, the memory pages of the target VM begin to be copied and transferred from the source host to the destination host. However, one significant complication in this process is that while the memory pages are being copied, the VM is still running, and thus still modifying its own memory state. To combat this, all changes to the memory state are tracked during the migration, and memory pages that have been modified are categorized as “dirty pages.” Copying the VM memory state therefore becomes an iterative process: through each iteration, the number of dirty pages that must be copied from the source continues to decrease, eventually reaching a point where the entire working memory state of the target VM is located on the destination host.
Now, here is where things get really clever. During the iterations, the hosts are constantly computing the amount of remaining dirty pages left on the source. They also remain cognizant of the negotiated TCP timeout interval between each other, and other network traffic. Once they know the amount of remaining dirty pages is small enough to be transferred to the destination host under the TCP timeout interval, several actions are performed:
1) The target VM is paused
2) The remaining dirty pages are transferred to the destination host
3) Ownership of the VM (on the SAN) is transferred from the source to the destination host
4) ARP packets update the switching tables
5) The VM is un-paused on the destination host
All of this happens so quickly that any services being accessed over TCP will not even notice the transfer. And even if they do, all that would be required is the re-transmission of maybe a single packet. Pretty cool, huh?
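The iterative pre-copy loop described above can be sketched as a toy simulation (the page counts, dirty rate, and pause budget are made-up illustrative numbers, not Hyper-V internals):

```python
# Toy simulation of iterative pre-copy live migration: each pass copies
# the currently dirty pages while the running VM re-dirties a fraction
# of them, until what remains fits in one final stop-and-copy pass.
# All rates and thresholds are illustrative assumptions.

def precopy_rounds(total_pages, dirty_fraction, final_budget):
    """Number of copy passes before the VM can safely be paused.

    dirty_fraction: share of copied pages re-dirtied during each pass.
    final_budget: max pages transferable within the TCP timeout window.
    """
    remaining = total_pages  # the first pass copies all of memory
    rounds = 0
    while remaining > final_budget:
        rounds += 1
        # pages dirtied by the running VM while this pass was copying
        remaining = int(remaining * dirty_fraction)
    return rounds

# 1,000,000 pages, 10% re-dirtied per pass, 500-page pause budget:
print(precopy_rounds(1_000_000, 0.10, 500))  # converges in 4 passes
```

Note that if the workload dirtied memory faster than it could be copied (a dirty fraction at or above 1), this loop would never converge, which is why real implementations also cap the number of passes and fall back to a longer final pause.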
So, what benefit does this provide? Well, first off, planned downtime of a physical host can be a thing of the past. Need to add more RAM, swap out a processor, or patch and reboot the host? Not a problem: simply Live Migrate your VMs to another host and do what you need to do. Your users will not even notice a hiccup. An even more powerful benefit of Live Migration is the ability to dynamically transfer virtual machines to different hosts, sites or environments based on service demand, or even imminent failure. The ability for an administrator to proactively respond to significant network events is extremely critical; to that end, System Center Operations Manager has a feature called PRO (Performance and Resource Optimization), which integrates with System Center Virtual Machine Manager to automate the entire dynamic re-provisioning process. Moreover, SCVMM also utilizes intelligent placement algorithms which can find the best candidate host to transfer a workload to, based on past and projected metrics.
We have been hearing the term “Dynamic Data Center” for quite some time now and capabilities such as Live Migration truly help bring that notion into reality. I’m sure any administrator would love to have this feature at their disposal.
If you are planning on virtualizing servers in your environment, you have probably considered physical-to-virtual migrations. If you are unfamiliar with this concept, to put it simply, it is the process of converting a server’s OS, programs, and data running on its own physical hardware into a virtual instance hosted on a virtualization platform. Microsoft Hyper-V provides the technology needed to virtualize one or many physical servers currently running in your environment. In this first of many virtualization blogs, I’d like to discuss a few different options to consider for physical-to-virtual migrations.
Option 1 – Disk Imaging
A third party imaging utility such as Symantec Ghost would be used to take a Ghost image of a physical server. The Ghost image would then be deployed to a newly configured virtual machine created within Hyper-V. The problem associated with this option is dissimilar hardware, which is a common problem with all disk imaging. When hardware, such as disk controllers and processors, changes drastically between physical servers, the image is more likely to fail and not boot at all. So option one should only be used if you are an experienced IT support professional.
Option 2 – Hard disk conversion
The 2nd option to consider is to use a utility to convert your physical hard disks to VHD files. Sysinternals has a tool called Disk2vhd that will convert your physical hard disks to VHD files for use with Virtual PC or Hyper-V. This option is fairly straightforward. Once you create your VHD files, you simply import them into a new virtual machine.
Option 3 – System Center Virtual Machine Manager
The 3rd and most preferred option is to use Microsoft System Center Virtual Machine Manager. SCVMM has a built in option to convert a physical server to a virtual server, all with the click of a button. One of the greatest features of this option is that you can run a P2V routine while the physical server is still online.
If your business is interested in reducing your server footprint through virtualization or is interested in implementing virtualization technology, contact Trigon today at 1-888-494-TRIGON or by email at solutions@TrigonIT.com.