With the release of Windows Server 2012, Hyper-V 3.0, and the new System Center suites, Microsoft has truly developed a hypervisor solution that can rival the entire feature set of VMware's vSphere platform. I tend to prefer and recommend Hyper-V over VMware for a few reasons, chief among them cost and availability. It's true that in the past Hyper-V's feature set was not as strong as VMware's, but for most SMBs the feature set was enough to satisfy their business and technical requirements. The cost of VMware could often only be justified in larger organizations with bigger IT budgets and more demanding requirements.
Things have now changed. The gap between Hyper-V and VMware has narrowed to the point of being virtually non-existent. True, VMware provides a solid and mature hypervisor platform, but many organizations are now seriously considering Hyper-V instead due to its bolstered feature set, low cost and overall ubiquity.
One of the newest features in Hyper-V 3.0 that I've worked with lately is Live Migration. For those familiar with Hyper-V clusters, this concept is not new, since Live Migration is one of the primary reasons to implement a Hyper-V cluster in the first place. For those not familiar with clusters, some clarification is in order about what the new Live Migration feature entails in a typical setup; that is, a setup with a few standalone Hyper-V hosts and no shared storage.
When you're working in a cluster and initiate a Live Migration, ownership of the virtual machine and its working memory are simply transferred from one node to another (see: http://trigon.com/tech-blog/bid/35259/Microsoft-Virtualization-Part-2-Live-Migration). Depending on the RAM consumption of the VM, this could take seconds or just a few minutes. With shared storage, the VHDs don't have to move, so it's basically a quick process with no downtime incurred.
To avoid confusion, a distinction should be made between a Live Migration in a clustered/shared-storage environment and a Live Migration in a standalone environment. In Hyper-V 3.0, and in our "typical" scenario, a Live Migration is essentially a Live Storage Migration. It does keep the virtual machine online, but it copies the entire VHD and VM configuration files to the new host, not just ownership of the VM and its working memory state. It is a much more time-consuming process, but letting administrators perform this type of Live Migration directly from Hyper-V Manager adds a great deal of flexibility without the need for expensive or complex infrastructure.
It is important to note that Live Migration in a "typical" scenario isn't very scalable. Even if you have a dedicated 10 Gb Live Migration network, if you have 30 VMs to migrate, that is at least 30 VHDs that have to be fully copied over the network (the rough calculation below illustrates the point). Yes, you could schedule them through PowerShell and get it done, but it could quickly become a cumbersome process. The moral of the story is to make sure you do not confuse the capabilities and function of a Live Migration in differing scenarios. Also, having Live Migration capabilities at your disposal right out of the box does not negate the need for a Hyper-V cluster if you want true High Availability for your virtual machines.
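To put that in perspective, here is a rough back-of-envelope calculation in Python. The VHD size and effective throughput figures are purely illustrative assumptions, not measurements from any real environment, but they show why copying full disks for dozens of VMs takes real planning.

```python
# Rough estimate of shared-nothing live migration copy time.
# The VHD size and effective throughput below are assumptions for illustration.

vm_count = 30
avg_vhd_size_gb = 100            # assumed average VHD size per VM, in GiB
effective_gbps = 7.0             # assume ~70% of a 10 Gb link after overhead

total_bytes = vm_count * avg_vhd_size_gb * (1024 ** 3)
throughput_bytes_per_sec = effective_gbps * 1e9 / 8

total_seconds = total_bytes / throughput_bytes_per_sec
print(f"Estimated copy time for {vm_count} VMs: {total_seconds / 3600:.1f} hours")
# Roughly an hour of sustained, link-saturating traffic just for the disks,
# before the memory transfer and cutover for each VM.
```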
As we close the chapter on 2012 and start anew, let’s take a look back on a tumultuous year that included a devastating natural disaster, a presidential election and a fiscal cliff crisis. Let’s explore how these national incidents will impact technology and how they affect local businesses in the Philadelphia and Central Pennsylvania region.
Technology Trends for 2013
Whether you believe in global warming or not, it is hard to dispute that we witnessed some of the most extreme weather in recent history this past year. Every year, the United States has its share of tornadoes, severe thunderstorms, heat waves and droughts. However, this past year saw an inordinate number of extraordinary weather events:
Hurricane Sandy – this storm wreaked unprecedented havoc and destruction in parts of the Northeast
Heat – 2012 has officially been named the hottest year for the contiguous United States since modern record-keeping began in 1895, resulting in power outages and damaged crops
Severe windstorm in Washington D.C. – a summer storm introduced the term "derecho" to the common vernacular and paralyzed the nation's capital for over a week
Wildfires – 2012 was officially one of the worst seasons on record for wildfires, especially in the West. Over nine million acres were burned, roughly the equivalent of Massachusetts and Connecticut combined
While it's impossible to predict the weather trends in 2013, why take a chance? An estimated 40% of businesses never reopen after they incur a disaster. This obviously affects small and mid-sized businesses more than very large enterprise environments. Trigon is firmly committed to educating its clients and prospects on the benefits and costs of having a disaster recovery / business continuity strategy. As with all good strategies, we recommend a layered approach composed of the following components:
A detailed Disaster Recovery and/or Business Continuity Plan which outlines the definitive steps to take if an incident occurs
Proper air conditioning in data centers and server closets
Redundant Internet providers configured to fail over if one should go down due to stormy conditions
Cloud computing options like Office 365 and Google Apps, which are typically more resilient to outages
A well planned backup scheme with offsite replication
An annual test of the Disaster Recovery plan, which includes restoring data and testing functionality
As the economy continues to sputter, taxes are seemingly on the rise and fears of the fiscal cliff still loom, companies are reluctant to invest in technology and make capital expenditures. In times like these, organizations are rightfully looking to get more for less. Without getting into the technical specifications, this concept of getting more for less is exactly what virtualization is all about. At its essence, virtualization allows your company to leverage its hardware investment and efficiently allocate resources where they are needed most. When approaching clients about upgrades, Trigon ensures that they are aware of the benefits, costs and risks of virtualization. This trend is only getting stronger, and it's imperative that business owners and decision makers get acquainted with its advantages.
The term "cloud computing" is widely discussed and is a common buzzword on the tip of every CIO's tongue… and rightfully so! New computing technologies like tablets and mobile devices, higher bandwidth availability and lower storage costs are making cloud technologies more viable for clients of all sizes, locations and industries. It should be noted that cloud computing is not a "one size fits all" solution that will solve your IT woes. Careful consideration of application compatibility, data security and cost should take place whenever data or a critical service will be taken off premises. Common services that Trigon can evaluate for the cloud are as follows:
Anti-SPAM / Email
Productivity software (e.g., Office 365 and Google Apps)
Virtualized desktops and servers
Critical applications (e.g., ERP or CRM systems)
As we usher in 2013, Trigon will be reaching out to all of its prospects and clients to discuss these important trends. In these uncertain times, it's never been truer that with Trigon, you can simply "Expect IT To Happen". Please feel free to contact us directly to discuss your 2013 plans and how you can do more with less in the upcoming year. We can be reached via phone at 484-323-5004 or email at firstname.lastname@example.org.
- by Jon, "GingerBread", Pentecost
In my first blog about virtualizing your small business, I talked about consolidating a larger number of physical servers down to just a few, reducing the energy costs of running them, and easily adding storage space. In this blog, I am going to review some of the other benefits.
The ability to move a virtual server to another host with minimal downtime is another great reason to virtualize. Say you have two host servers that both have Hyper-V installed, and one of them is running a VM but needs repairs or maintenance. You can simply shut down the VM, export its configuration, move the exported configuration and .vhd files to the other host, import the configuration with the .vhd files in the correct location, and then start the VM back up while the original host is offline for repairs or maintenance.
Better yet, why not build a failover cluster of the host servers with shared storage? Then you could live migrate the VM without needing to shut it down and without any downtime. You can then complete the repairs or maintenance on the one host with no interruption at all to any network services.
You can also set up virtual workstations for applications that always need to be running on a machine logged in as a particular user. Rather than dedicating a physical workstation just for that purpose, you can set up a virtual workstation and deploy the physical workstations to actual users. This reduces the number of physical workstations you need to keep running (again, think of the energy costs you can save with fewer machines running), and you can still connect to the virtual workstation the same way you would a machine in the server room that is connected only to power and a network cord.
Contact us to talk about our Philadelphia Area Information Technology Solutions or our Virtualization Solutions for your small and medium-sized business.
- by Dan "I make 'A Christmas Carol' References Look Easy", Rodden
What is my biggest pet peeve in Information Technology? Hardware dependence. Hardware dependence is an extremely intrusive variable, so integral to systems that it is difficult to work around when planning or fixing a network. Hardware dependence is the reason you still have a Windows NT 4.0 workstation operating the plotter. Hardware dependence is the reason your old fax server hasn't been decommissioned, or the reason you are still using a server that requires a physical piece of equipment to be installed in order to run your Line of Business application. These restrictions weigh us down like the chains of Jacob Marley, forcing compromises in new business developments and the abandonment of updates that would improve efficiency and reduce work.
Well, the future is now, and it is time to bask in the glory of the physically independent virtual world. With products like Microsoft Hyper-V, not only can you maximize your hardware investments, but you can rest easy knowing that if something happens to that hardware, it can be replaced with completely different hardware and your virtual infrastructure won't know the difference once it boots again. This makes the potential for disaster recovery very real: because the systems are already virtualized, they are easily transferable to a remote location, and the time spent making them bootable in the new environment is greatly reduced, if not eliminated.
In addition to the disaster recovery potential, virtualizing your infrastructure also affords the benefit of maximizing your resource utilization. When you virtualize your servers, you can get the most out of your hardware with technologies like dynamic disks and Dynamic Memory, which give virtual machines more hard drive space and more RAM as they need it. This allows you to operate your infrastructure somewhere between the minimum and the recommended requirements, because your servers will dynamically adjust their resources as needed, allowing the virtual servers to "trade" resource allocation when demand between them fluctuates.
What are you waiting for? Make everyone’s life easier and get the most out of your investments – start virtualizing.
- by Jon, "Gingerbread", Pentecost
In the past, each server sat on its own box because hardware was cheap and there weren't many technologies around that could efficiently run multiple servers on a single system. You might have had three or four servers, each doing a separate job (one for Active Directory, DNS and DHCP, another for e-mail, and a third for utility/maintenance work or a specialized task like a database).
In today's environment, you could have just two physical servers, each running three virtual systems that replicate between the hosts, plus a fourth virtual server you can use for testing up-and-coming technologies without purchasing additional hardware. Let's explore just one example of how things used to be and how they are today.
Let's say you have your typical three servers. There is a problem and the motherboard on your main AD DS server gets fried. Depending on your configuration, users may still be able to log in if one of your other servers also runs AD, but all of their files are stored on the drives in the main server – big problem. You have to wait until you can get a replacement motherboard and install it as soon as it arrives – a day and a half of downtime, minimum.
Now take the same scenario, except that you have two Hyper-V servers and all three of your servers are virtualized and properly replicating between the Hyper-V hosts prior to the motherboard failure. At most, a few system changes are needed to point to alternate locations (and in a complete replication scenario using DFS and an Exchange DAG, there is full failover capability), and users are back up in minutes. You can now casually order and install the motherboard in the failed server, since there is little downtime for your users.
Now let's go over a couple of environmental issues (especially with "Going Green" being a big push now). How much power do three or four of those servers consume just to stay on, let alone the battery backup units that power them and the air conditioning that keeps them cool? With only two servers, less power is drawn, less heat is generated, and therefore less air conditioning is needed. Most likely the old servers are aging, too, and not running as efficiently as a new server would. Need I go on?
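For illustration only, here is a quick Python calculation of the potential electricity savings from consolidating. The wattages and electricity rate are assumptions picked for the example, not measurements of any particular hardware, so treat the output as a ballpark figure.

```python
# Illustrative comparison of annual power cost: four aging servers vs. two
# modern hosts. Wattages and electricity rate are assumptions, not measurements.

old_servers = 4
new_servers = 2
old_watts_each = 500      # assumed average draw of an older server
new_watts_each = 350      # assumed average draw of a newer, consolidated host
rate_per_kwh = 0.12       # assumed electricity cost in dollars per kWh
hours_per_year = 24 * 365

def annual_cost(count, watts):
    kwh = count * watts * hours_per_year / 1000
    return kwh * rate_per_kwh

old_cost = annual_cost(old_servers, old_watts_each)
new_cost = annual_cost(new_servers, new_watts_each)
print(f"Old setup: ${old_cost:,.0f}/yr  New setup: ${new_cost:,.0f}/yr")
print(f"Estimated savings: ${old_cost - new_cost:,.0f}/yr before cooling and UPS")
```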
One other great benefit of virtualizing your environment is that if you need to add storage space, it is much easier to expand the size of a .vhd than to expand a RAID configuration. Just add another drive or two to the host and then allocate that space to the .vhd of the server that is running out of room.
There are many more benefits that we don’t have time to go into, but if you are thinking of replacing at least one of your servers, maybe now is the time to start thinking of going virtual? Contact us to review our Small Business Solutions for Philadelphia or talk with us about our Philadelphia Area Managed IT Services Programs.
- by Andrew, "I've Never Met A Contest I Didn't Like", Levin
The release of Windows Server 2008 R2 SP1 has added a great new feature to the Hyper-V role simply called "Dynamic Memory." This feature allows your virtual machines to use memory much more efficiently than ever before. Prior to this release, Hyper-V hosts did not support any type of memory oversubscription for your VMs. Basically, if you had 12 GB of RAM (excluding the host reserve) and 3 VMs requiring 4 GB each, you were strictly limited to 3 VMs on that host. Now, say those VMs only ever topped out at 3 GB each, or only needed 4 GB on very rare occasions. Either way, you would have 3 GB of wasted RAM. Processor oversubscription was always possible, but memory was always a distinct and finite resource.
I'd have to say that when I first saw Dynamic Memory in action I was very impressed. When I first heard about its pending release a few months ago, I assumed it just meant you could manually add or remove assigned memory on a VM without shutting it down, so that if you saw your VM topping out at 3 GB you could simply provision it another gigabyte. That assumption was incorrect; as it turns out, the Dynamic Memory feature in SP1 goes much further than this. The main efficiency gain is that Hyper-V can now automatically allocate memory to VMs on an as-needed basis. Your 12 GB host will provision memory to its VMs based on the demand of the workloads and applications running within each VM. You can set buffers and limits for each VM, but the end result is that memory is provisioned and used only when it is needed. This eliminates the need to assign, and waste, large amounts of memory when planning for potential peak usage.
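To make the idea concrete, here is a simplified Python sketch of demand-based allocation with a buffer. This is not Microsoft's actual balancer algorithm; the VM names, demand figures and 20% buffer are assumptions chosen purely for illustration. It simply shows how provisioning on demand leaves headroom that three static 4 GB assignments would have consumed entirely.

```python
# Simplified illustration of demand-based memory allocation with a buffer,
# in the spirit of Hyper-V Dynamic Memory. NOT the actual balancer algorithm;
# the numbers and allocation rule are assumptions for illustration.

HOST_RAM_MB = 12 * 1024          # 12 GB host (ignoring the host reserve)
BUFFER_PCT = 20                  # give each VM its demand plus 20% headroom

vms = {                          # current memory demand reported by each VM
    "web":  1800,
    "sql":  3100,
    "file": 1200,
}

def desired_allocation(demand_mb):
    return int(demand_mb * (1 + BUFFER_PCT / 100))

allocations = {name: desired_allocation(d) for name, d in vms.items()}
used = sum(allocations.values())

for name, alloc in allocations.items():
    print(f"{name:5s} demand={vms[name]:5d} MB -> allocated {alloc:5d} MB")
print(f"Total allocated: {used} MB of {HOST_RAM_MB} MB "
      f"({HOST_RAM_MB - used} MB left for additional VMs)")
```

With static 4 GB assignments, all 12 GB would be committed to those three VMs; in this sketch, only about 7 GB is provisioned and the remainder stays available for more workloads.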
We all know that virtualization can already substantially decrease the number of physical servers required in your datacenter, and now Dynamic Memory can help improve that consolidation ratio even further. Gaining efficiencies is the number one goal of virtualization, and Dynamic Memory is just another feature that helps break your environment away from physical barriers. And, as always, Trigon is here to help you get the most out of your IT investment, so give us a call today!
- by Chad, "The Dream", Weaver
Yesterday, SP1 for Windows 7 and Windows Server 2008 R2 was officially released to the public. This is a great way to get your Windows 7 machines and/or Server 2008 R2 installs up to date in a single install; the installer itself is between 500 MB and 1 GB, depending on whether you need the 32-bit or 64-bit flavor. In one install you get all the updates and bug fixes you are missing if it has been a while since you last clicked that update button. The server install comes in .iso format, weighing in at almost 2 GB.
SP1 has been available through Microsoft's TechNet subscription service for the last week for anyone wanting to get in on the action early, but today marks the first day the rest of the Microsoft-using world can download and install it. If you are not behind on updates and are using Windows 7, this isn't an earth-shattering update and you probably won't notice much changing, if anything at all. I still wouldn't put it off; keeping your computer up to date is one of the most important things anyone can do to keep it safe and secure. If you are installing this update on just a single machine, check Windows Update, as a smaller, focused install should be available.
The Server 2008 R2 SP1 update does include some fancy new features, including the one I am most excited about: the ability to enable Dynamic Memory in Hyper-V. With this feature, Hyper-V guests can decide how much memory they need and consume only what is required at any given time, which allows for even denser Hyper-V hosts because memory is used much more efficiently. A new feature called RemoteFX was also introduced. The rest were improvements to things such as clustering, and nothing to get too excited about; but if you have a Hyper-V install, this is something we at Trigon really recommend you check out. If you'd like to hear more, give us a buzz!
- by David, "And Don't Call Me Shirley", Quiram
Have you documented your disaster recovery plan? I am not talking about listing some tasks you want to do or noting where you keep your backup CDs, but a fully planned-out process to get the IT portion of your company running again in a timely and efficient manner.
Most companies have not, and some never will. Having gone through a disaster at a company that did not have a disaster recovery plan, I have seen firsthand how having a process laid out makes the recovery less stressful for you, the owners, and the staff.
Having the plan figured out before a crisis gives you several advantages:
It gives a level of confidence in the systems, your ability, and the future in case something happens.
It begins the exercise of asking "how would I do this?" before things go wrong, as opposed to "how do I do this?" while things are going wrong.
It gives you a framework to work with. No static plan will ever fully cover a dynamic event; as the saying goes, the first casualty of any battle is the plan. But having steps mapped out, priorities determined, resources identified, and an understanding of what needs to happen will allow you to adapt to the changes in the crisis.
By documenting your plan or process for recovering your IT resources, you show the owners that you are thinking of the future. It isn't doom-and-gloom thinking; the reality is that things go wrong, things will break, and it is never convenient. Plan ahead to reduce the issues and risks you will encounter. If you have a well-thought-out and tested plan in place, you will be able to support the company's Business Continuity plan. Remember that the BC plan is different from the DR plan: the DR plan is there to support the BC plan and should be integrated into it. The DR plan provides the tools for the business to move forward after a disaster, but we need to make sure we have those tools ready and available.
I have written several disaster recovery plans for companies ranging from complicated multi-server/multi-site environments to a single server in someone's home office. Each time, the planning and documentation process has proven invaluable. It is a time for discovery, for identifying issues that would hamper the recovery process, and for gaining a better understanding of what needs to happen when. The plan will also get the owner's buy-in to the process and set realistic expectations BEFORE the disaster happens. This is very important. If you can show that the recovery effort will take 24 hours for the core infrastructure to be re-established, it will give you the time and room to do your job without the owner and management worrying about what you are doing.
The plan provides the framework to address the crisis. No matter how well I have written a plan, unexpected events occur that can derail it. Even in testing things happen: replacement servers fail, key equipment doesn't show up, staff members get sick, power supplies in network devices die. All of these have happened just in testing. I welcome these issues in testing because they add another stress test of the plan and of the staff working with it. By identifying the priorities in the recovery, you can react and re-task resources accordingly. The advantage of having an established priority order for recovery is that if you are questioned on your reasoning, you have a document that management and the owners agreed upon beforehand and that you are following.
Planning the DR is only the first step in the whole process, but it is a step that should not be taken lightly. Understand that once you start, you are going to find issues within the systems you are backing up and in the thinking and views of management, owners, and staff. By getting all of this addressed and the plan laid out ahead of time, you will save time, reduce frustration, reduce risk, and increase your odds of success in the event of a disaster.
Make sure to take the next step and test the plan, first with a walk-through and then with an actual recovery on test machines to simulate the disaster. This will all be time and money well spent when a disaster occurs. If you'd like to have Trigon review or create a disaster recovery plan for your business, be sure to contact us.
- by Jack Doyle
I have a 9-year-old desktop PC that has been through it all. It's had the motherboard replaced, the CPU replaced, the RAM replaced, the hard drives replaced; you name it, it has been replaced. It has an ATI All-In-Wonder video capture card that I have been using to convert old VHS tapes of home movies to MPEG videos. The problem is, ATI never wrote Vista or Windows 7 drivers for this card. I scoured the forums, searching high and low for a solution, but came up empty. I knew I didn't want to continue using Windows XP and wanted to upgrade to Windows 7, but I didn't want to lose the video capture functionality. I decided to dual boot!
If you are not familiar with dual booting, it is the act of installing and booting to two different operating systems on a single computer, and the ability to choose which operating system to load when the computer starts up. A perfect example of why one would dual boot is the scenario I stated above. Sometimes it is still necessary to use old legacy hardware that is not supported by newer operating systems. In these cases, you may choose to dual boot your PC.
If you don't have multiple hard drives in your PC, you will need to partition the drive so that you can install each operating system on separate partitions. Next, you should install the operating systems in the order they were released. In my case, I installed XP on a small 8 GB partition, and installed Windows 7 on the remaining unallocated space. Now, when you start up your PC, you will be prompted to choose the OS you would like to load.
Maybe you have an old software package that either won't install or run correctly under Windows 7. Instead of dual booting in this case, you can utilize the power of virtualization. Windows 7 integrates with Virtual PC extremely well, allowing you to run a legacy application in XP mode. The best part of XP mode is you don't actually have to start up the virtual machine and run the app. XP mode stays hibernated until you need it. And, you never see the Windows XP instance after you install and publish your legacy application. You simply see your application running in a Windows 7 window!
As you can see, you don't have to throw out that old piece of hardware or software just because it isn't supported on your new PC. There are several ways to provide compatibility whether it be through virtualization or dual booting. However, sometimes it is better to let go of the past and buy new gear!
Mmmm, new gear. If you'd like some more tips on dual booting and grabbing some new gear for your small to mid-size business in the Philadelphia area, drop us a line!
Over the last few years server virtualization within enterprise network environments has been quickly gaining popularity. Harnessing the power of virtualization creates truly dynamic datacenters which can effectively respond to an organization’s needs. In response to the desire for greater flexibility and agility, Microsoft has added Live Migration to the R2 release of Windows Server 2008. Live Migration essentially allows an administrator to transfer a running virtual machine from one physical host to another physical host with no perceived downtime.
In order to take advantage of the Live Migration feature, there are a few prerequisites. First off, your organization needs to implement some form of shared storage, i.e. an iSCSI or Fibre Channel SAN, on which to store your virtual machine files. This is important because shared storage allows Live Migration to transfer only the memory state and ownership of the target VM, rather than an entire VHD file.
The next step is to configure the Failover Clustering feature. Failover Clustering can be configured at the host or application level. A Hyper-V host-level failover cluster means that the VMs themselves are made highly available, as opposed to just the applications they host. Guest OS failover clusters between VMs are used to maintain the high availability of applications within the VMs, such as SQL Server or Exchange. In our scenario, we want a host-level cluster because we want to transfer the entire VM, not just a single service.
The way Live Migration actually works is pretty interesting. Once your VMs are configured to be highly available, Live Migration can be initiated from the Failover Cluster Manager. Once invoked, the memory pages of the target VM begin to be copied from the source host to the destination host. However, one significant complication in this process is that as the memory pages are being copied, the VM is still running, and thus still modifying its own memory state. To combat this, all changes to the memory state are tracked during the migration, and memory pages that have been modified are categorized as "dirty pages." Copying the VM memory state therefore has to be an iterative process, which is exactly what it is. The logic is that with each iteration, the number of dirty pages that must be copied from the source will continue to decrease, eventually reaching a point where the entire working memory state of the target VM is present on the destination host.
Now, here is where things get really clever. During the iterations, the hosts are constantly computing the amount of remaining dirty pages left on the source. They also remain cognizant of the negotiated TCP timeout interval between each other, and other network traffic. Once they know the amount of remaining dirty pages is small enough to be transferred to the destination host under the TCP timeout interval, several actions are performed:
1) The target VM is paused
2) The remaining dirty pages are transferred to the destination host
3) Ownership of the VM (on the SAN) is transferred from the source to the destination host
4) ARP packets update the switching tables
5) The VM is un-paused on the destination host
All of this happens so quickly that any services being accessed over TCP will not even notice the transfer. And even if they do, all that would be required is the re-transmission of maybe a single packet. Pretty cool, huh?
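If you like to see ideas as code, here is a small Python simulation of the iterative pre-copy phase described above. The page counts, dirty rate and cutover threshold are assumptions for illustration, not Hyper-V's real internals, but the loop shows why the set of dirty pages shrinks with each pass until it is small enough to send during that brief final pause.

```python
# Conceptual simulation of the iterative "pre-copy" phase of Live Migration.
# Page counts, dirty rates, and the stop condition are assumptions chosen to
# illustrate the idea, not Hyper-V's actual implementation details.
import random

TOTAL_PAGES = 1_000_000          # working set of the VM, in memory pages
COPY_RATE = 400_000              # pages we can copy per iteration (network speed)
DIRTY_FRACTION = 0.05            # fraction of copied pages the VM re-dirties
CUTOVER_THRESHOLD = 10_000       # small enough to send within the TCP timeout

remaining = TOTAL_PAGES
iteration = 0
while remaining > CUTOVER_THRESHOLD:
    iteration += 1
    copied = min(remaining, COPY_RATE)
    # While we copy, the running VM keeps writing to memory and re-dirties
    # a fraction of the pages we just sent.
    redirtied = int(copied * DIRTY_FRACTION * random.uniform(0.8, 1.2))
    remaining = (remaining - copied) + redirtied
    print(f"Iteration {iteration}: copied {copied}, {remaining} dirty pages left")

print(f"Pausing VM: final {remaining} pages and device state are sent, "
      f"ownership switches, and the VM resumes on the destination host.")
```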
So, what benefit does this provide? Well, first off, planned downtime of a physical host can be a thing of the past. Need to add more RAM, swap out a processor, or patch and reboot the host? Not a problem: simply live migrate your VMs to another host and do what you need to do. Your users will not even notice a hiccup. An even more powerful benefit is the ability to dynamically transfer virtual machines to different hosts, sites or environments based on service demand, or even imminent failure. The ability for an administrator to proactively respond to significant network events is extremely critical. To that end, System Center Virtual Machine Manager has a feature called PRO (Performance and Resource Optimization), which integrates with System Center Operations Manager to automate the entire dynamic re-provisioning process. Moreover, SCVMM also uses intelligent placement algorithms that can find the best candidate host for a workload based on past and projected metrics.
We have been hearing the term “Dynamic Data Center” for quite some time now and capabilities such as Live Migration truly help bring that notion into reality. I’m sure any administrator would love to have this feature at their disposal.