The Microsoft TechEd Twitter Army is looking for recruits to tweet from the show floor during our premier TechEd event, running May 16-19 in Atlanta. If you’re going to the show and want your opinions heard, be sure to check in at the Social Media area in the Microsoft Server & Cloud Platform Booth on Monday afternoon at 12:30 PM. Recruits who do well on the Twitter front lines will compete for an Xbox 360 + Kinect package and other prizes to be handed out at a private Twitter Army event happening Thursday at 2 PM. Those who attend the 12:30 PM Tweetup will also receive a special Twitter Army badge! Don’t forget: Uncle TechEd Wants You!
Virtualization Nation,
With the release of Microsoft Hyper-V Server 2008 R2 SP1, we have once again raised the bar for providing a robust, enterprise-class virtualization platform at no cost. For example, did you realize that Microsoft Hyper-V Server 2008 R2 SP1 includes RemoteFX? This new feature provides Graphics Processing Unit (GPU) accelerated video within a virtual machine. VMware's flagship product vSphere Enterprise Plus ($3,500 per processor) doesn't have this capability.
Let that sink in for a moment.
GPU accelerated video within a virtual machine is an important consideration when architecting a Virtual Desktop Infrastructure (VDI) deployment. Perhaps you decide you're willing to deploy VDI using 2D virtualized video today. But what if you realize six months or a year down the road that you need 3D GPU accelerated graphics support? Do you really want to choose a virtualization platform for VDI that doesn't offer this capability today? Is VMware willing to provide this feature without requiring an upgrade ($$$)? In writing? If you review their history, that seems highly unlikely. These are key factors that you should consider when making a decision for VDI.
For more info on Microsoft Hyper-V Server 2008 R2 and R2 SP1, check out these two blogs:
Microsoft Hyper-V Server User Experience
If you've ever fired up the no-cost Hyper-V Server, you know that the UI is minimal. This is by design. The goal of Hyper-V Server is to make it easy for you to get the system configured and on the network for remote management. There's no Start menu or local GUI. Instead, Hyper-V Server provides a command line and SCONFIG, a simple tool that makes it easy to configure the system for remote management functionality, such as:
Here's a screenshot of SCONFIG:
Once you've configured Hyper-V Server for remote management, you can manage it in a number of ways:
While these options work for most of you, a number of folks have asked for a local GUI that could be run directly on Hyper-V Server 2008 R2.
Wouldn't that be cool?
We think so too. That's exactly what our partners at 5nine built!
5Nine Hyper-V Manager
The folks at 5Nine have developed a local GUI for Microsoft Hyper-V Server 2008 R2! With the 5Nine Hyper-V Manager you can create virtual machines, virtual networks, and more. In fact, 5Nine Hyper-V Manager supports Microsoft Hyper-V Server 2008 R2 SP1 and includes the ability to manage RemoteFX and Dynamic Memory settings.
Here's a screenshot of the 5nine Hyper-V Manager:
Very cool. This is a great opportunity to point out what can be accomplished using the public Hyper-V WMI APIs which have been documented since day one.
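For a flavor of what those APIs look like, here is a minimal sketch (my illustration, not 5nine's code) that uses Python's third-party wmi package against the documented root\virtualization namespace to list the virtual machines on a 2008 R2-era host. The package choice and the remote host name are assumptions; the Msvm_ComputerSystem class and its properties come from the public Hyper-V WMI documentation.

import wmi  # third-party package (pip install wmi); assumes pywin32 on a Windows box

# Connect to the Hyper-V WMI v1 namespace used by Hyper-V Server 2008 R2.
# Add computer="hv-host01" (hypothetical name) to target a remote host you manage.
conn = wmi.WMI(namespace=r"root\virtualization")

# Msvm_ComputerSystem covers both the host and its VMs; VMs carry the caption "Virtual Machine".
for system in conn.Msvm_ComputerSystem():
    if system.Caption == "Virtual Machine":
        state = "running" if system.EnabledState == 2 else "not running"  # 2 = Enabled
        print(system.ElementName, "-", state)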
Download Links
Here are the key links:
Cheers,
Jeff Woolsey
Windows Server & Cloud
FAQ
==============================================
Q: Did Microsoft develop this Hyper-V Manager for Microsoft Hyper-V Server 2008 R2?
A: No. The product is called 5Nine Hyper-V Manager developed by our partners at 5Nine. To learn more about 5Nine Hyper-V Manager you should check out their site here: http://www.5nine.com/5nine-manager-for-hyper-v-free.aspx
===========================================================================
Q: How much does the 5Nine Hyper-V Manager cost? What are the system requirements?
A: 5Nine offers both a free version and a $99 version. You should check out their website for the details. The big difference is that the $99 version provides local access to the VM itself.
Note: 5Nine Hyper-V Manager works with Microsoft Hyper-V Server 2008 R2 and later. It doesn't work with the original Microsoft Hyper-V Server 2008 because it requires some capabilities not included in Hyper-V Server 2008, such as the .NET Framework.
Q: Does Microsoft support the 5Nine Hyper-V Manager?
A: The 5Nine Hyper-V Manager was developed by the folks over at 5Nine; however, it uses our published Hyper-V WMI APIs, which are fully supported by Microsoft.
Q: Will Microsoft provide a local GUI?
A: Microsoft provides multiple ways to manage Microsoft Hyper-V Server remotely including:
Microsoft has no plans to provide a local GUI for Microsoft Hyper-V Server, and we are pleased to see our partners provide a solution.
Intel has recently released its new “Sandy Bridge” processors, the second generation of the Core i3/i5/i7 family. Most of the processors in this first wave are designed for notebooks, with a few for desktops and server processors on the way. An easy way to identify the new Sandy Bridge processors is that their model numbers are four digits; for example, you’ll see processors such as the i7-2600K or i5-2500K. There are a number of good articles on these new processors (like this), so take a look if you’re interested in what’s new.
I’m raising this topic because I want you to be aware of an issue affecting both Windows Server 2008 R2 Hyper-V and Microsoft Hyper-V Server 2008 R2 on the new Sandy Bridge processors, and to provide the solutions.
Issue: When you attempt to start a VM on a system with a Sandy Bridge processor, the virtual machine will not start. If you go to the Event Viewer, you will see an error that states: “<VM Name> could not initialize.”
Cause: Fundamentally, this is a chicken and egg problem. :-)
Here’s the scoop. The new Sandy Bridge processors include a new extension to the x86 instruction set known as Advanced Vector Extensions (AVX). AVX is designed to improve performance for applications that are floating-point intensive, such as scientific simulations, analytics, and 3D modeling. Since Windows Server 2008 R2 was released well before AVX-equipped processors shipped, Windows Server 2008 R2 and Microsoft Hyper-V Server 2008 R2 don’t understand this new functionality, and Hyper-V correctly prevents starting the virtual machine. This behavior is by design, as we wouldn’t want to start a virtual machine with unknown and untested processor capabilities. The good news is that solutions are available.
Solution: There are two solutions. The recommended solution is option 1.
Q: Do the AVX instructions improve performance?
A: The AVX instructions can improve performance if applications and workloads have been designed to use these instructions.
Q: Does the Hyper-V Processor Compatibility feature have any bearing on this matter?
A: No. The Hyper-V Processor Compatibility feature is orthogonal to this matter. The fundamental issue is that Windows Server 2008 R2 and Microsoft Hyper-V Server 2008 R2 were released before processors with AVX instructions were available and didn’t include support for the AVX instructions in the parent operating system.
The Hyper-V Processor Compatibility feature normalizes the processor feature set and only exposes guest-visible processor features that are available on all Hyper-V enabled processors of the same processor architecture, i.e., AMD or Intel. This allows the VM to be migrated to any hardware platform of the same processor architecture. For more info on Hyper-V Processor Compatibility, here are some links (here and here).
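As a rough conceptual sketch of that normalization (this is not Hyper-V's actual implementation, and the feature names and generations below are placeholders), compatibility mode amounts to exposing only the intersection of guest-visible features across a vendor's processor generations:

# Toy model of processor compatibility mode: the guest sees only the features
# common to every Hyper-V-capable processor generation of one vendor.
# Feature names and generations are illustrative, not a real CPU list.
GENERATIONS = {
    "gen1": {"sse", "sse2", "sse3"},
    "gen2": {"sse", "sse2", "sse3", "ssse3", "sse4.1"},
    "gen3": {"sse", "sse2", "sse3", "ssse3", "sse4.1", "sse4.2", "avx"},
}

def normalized_features(generations):
    """Intersect the feature sets so the VM can migrate to any host of this vendor."""
    feature_sets = iter(generations.values())
    baseline = set(next(feature_sets))
    for features in feature_sets:
        baseline &= features
    return baseline

print(sorted(normalized_features(GENERATIONS)))  # ['sse', 'sse2', 'sse3']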
With Windows Server 2008 R2 SP1 Hyper-V and Microsoft Hyper-V Server 2008 R2 SP1, we focused Hyper-V development on enhancing Virtual Desktop Infrastructure (VDI) scenarios, which resulted in the introduction of Dynamic Memory and RemoteFX. In addition, we increased the maximum number of running virtual processors (VP) per logical processor (LP) from 8:1 to 12:1 when running Windows 7 as the guest operating system for VDI deployments. In making this change and discussing the VP:LP ratio with you, I’ve noticed that there’s some confusion as to what this metric really means and how it compares to other virtualization vendors. Let’s discuss.
I’ve noticed differences in how Microsoft--versus other virtualization vendors--expresses the maximum number of virtual processors that can run on a physical processor. It seems we’ve inadvertently created some confusion as to the maximum number of supported virtual processors on a server running Hyper-V. Here’s the crux of the problem:
· Other virtualization vendors provide a maximum for virtual processors per core.
· Microsoft provides a maximum for virtual processors per logical processor, where a logical processor equals a core, or thread.
When customers ask about the ratios, here’s what happens:
1. Vendor A responds 16:1 (with the qualifier that your mileage will vary…).
2. Microsoft responds 12:1 for Win7 for VDI and 8:1 for Non-VDI and all other guest OSs.
The issue is that we’re comparing apples and oranges. When we talk about physical processors, that can include symmetric multi-threading, where there are two threads (i.e., two logical processors) per core. Remember, Microsoft provides a maximum of virtual processors per logical processor, where a logical processor equals a core or a thread. To do an apples-to-apples comparison, when you ask about the maximum virtual processors per core for Hyper-V, the answer really is:
· Up to 24:1 for Win 7 for VDI and 16:1 for non-VDI (all other guest operating systems)
…and up to a maximum of 384 running virtual machines and/or 512 virtual processors per server (whichever comes first). To make things easy to understand, I’ve provided the formulas and tables below.
Windows 7 as Guest OS for VDI
In the case of a VDI scenario with Windows 7 as the guest with a 12:1 (VP:LP) ratio, here’s the formula and the table:
(Number of processors) * (Number of cores) * (Number of threads per core) * 12
Table 1 Virtual Processor to Logical Processor Ratio & Totals (12:1 VP:LP ratio for Windows 7 guests)
Physical Processors
Cores per processor
Threads per core
Max Virtual Processors Supported
2
96
4
192
6
288
8
384
512 (576)1
512 (768)1
1 Remember that Hyper-V R2 supports a maximum of 512 running virtual processors per server, so while the math for the largest configurations exceeds 512, they hit the cap of 512 running virtual processors per server.
All Other Guest OSs
For all other guest operating systems, the maximum supported ratio is 8:1. Here’s the formula and table.
(Number of processors) * (Number of cores) * (Number of threads per core) * 8
Table 2: Virtual Processor to Logical Processor Ratio & Totals (8:1 VP:LP ratio)
64
128
144
256
512
You can see that even with an 8:1 VP to LP ratio (or 16:1 VP: Core, if you prefer), Hyper-V supports very dense VM configurations. Even on a server with two physical processors, Hyper-V supports a staggering number of virtual machines (up to 256). The limiting factor won’t be Hyper-V. It will be how much memory you’ve populated the server with and how well the storage subsystem performs.
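If you want to run the arithmetic yourself, here is a small sketch of the formulas above with the 512 running virtual processor ceiling applied; the server configuration in the example is hypothetical.

# Supported maximums discussed in this post (Hyper-V R2 / R2 SP1).
MAX_RUNNING_VPS_PER_SERVER = 512
RATIO_WIN7_VDI = 12  # VP:LP for Windows 7 guests in VDI scenarios (SP1)
RATIO_DEFAULT = 8    # VP:LP for all other guest operating systems

def max_virtual_processors(processors, cores_per_processor, threads_per_core, ratio):
    """(processors) * (cores) * (threads per core) * ratio, capped at 512 per server."""
    logical_processors = processors * cores_per_processor * threads_per_core
    return min(logical_processors * ratio, MAX_RUNNING_VPS_PER_SERVER)

# Example: a two-socket, eight-core server with SMT enabled (32 logical processors).
print(max_virtual_processors(2, 8, 2, RATIO_WIN7_VDI))  # 384 for Windows 7 VDI
print(max_virtual_processors(2, 8, 2, RATIO_DEFAULT))   # 256 for other guests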
Q: You state that a logical processor can be a core or a thread? How can it be both?
A: A logical processor can be a core or thread depending on the physical processor.
· If a core provides a single thread (a 1:1 relationship), then a logical processor = core.
· If a core provides two threads per core (a 2:1 relationship), then each thread is a logical processor.
Q: This whole topic is very confusing. Why does Microsoft provide a ratio of virtual processors to logical processors? Why doesn’t Microsoft just provide a ratio of virtual processors to cores? Wouldn’t that be simpler?
A: While Microsoft could use a ratio of virtual processors per core, Microsoft uses the ratio of virtual processors to logical processors because it is more precise and more accurate. Using a ratio of virtual processors per core ignores whether the underlying physical processor is single threaded or multi-threaded. The end result is that capacity planning can be off by a factor of two. We choose to provide the most precise information so you can effectively plan Hyper-V deployments with confidence.
Q: Where are the Hyper-V maximums publicly documented?
A: The Hyper-V maximums are documented on TechNet http://technet.microsoft.com/en-us/library/ee405267%28WS.10%29.aspx. TechNet is the best place to start for Microsoft technical documentation.
Q: Why do these ratios exist? Why is there a ratio at all?
A: The Hyper-V maximums are provided to give you clear guidance as to what has been tested at scale and under load by the Hyper-V team. This allows you to effectively plan Hyper-V deployments with confidence.
Q: Do these ratios apply to other virtualization platforms?
A: Microsoft does not test any virtualization platforms except its own.
Q: Why is Windows 7 supported at a 12:1 VP:LP ratio while other operating systems are supported at a ratio of 8:1?
A: Windows 7 is supported with a 12:1 VP:LP ratio because the Hyper-V team specifically tested this configuration under load for VDI deployments based on customer input. Customers told us that increasing the VP:LP ratio was important for Windows 7 VDI scenarios to help improve density and drive down the cost per virtual machine. For other operating systems, which are overwhelmingly used for server consolidation scenarios, the feedback was that the current ratio was more than sufficient.
Q: Is the 12:1 VP:LP ratio a hard block? What happens if I attempt to start a 13th virtual machine? Will it be blocked?
A: The VP:LP ratio is a supportability metric and not a technical block. There’s no hard block. If you attempt to start more than 12 virtual machines and resources are available, Hyper-V will start them. However, this hasn’t been thoroughly tested and isn’t supported. If you call for support, expect the support team to ask you to reduce the running number of virtual machines to meet the supportability statement.
Q: Do these ratios affect licensing in any way?
A: No. Microsoft doesn’t license any products per core or charge higher premiums for processors with more cores. One of the great benefits of virtualization is being able to maximize your hardware investments. We don’t believe you should be penalized with a Core Tax. That’s about as puerile as charging you for the amount of memory you allocate to a virtual machine.
It is great to see InfoWorld acknowledge the significant progress we’ve made with Windows Server 2008 R2 SP1 Hyper-V (“Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware”). We’re excited that the reviewer recognizes what our customers and respected industry analysts have been telling us for a while now: Hyper-V is ready to “give VMware a run for its money.”
This recognition comes on the heels of the Enterprise Strategy Group’s (ESG) report on Hyper-V R2 SP1 running key Microsoft workloads. ESG tested and verified industry-leading results that showed that single servers virtualized with Hyper-V R2 SP1 scaled to meet the IO performance requirements of 20,000 Exchange 2010 mailboxes, over 460,000 concurrent SharePoint 2010 users, and 80,000 simulated OLTP SQL Server users. InfoWorld’s results and ESG’s testing leave no doubt that Hyper-V is an enterprise-class hypervisor.
There are areas, of course, where I might quibble with the reviewer’s assessment. One such area is management. We believe that Microsoft has a key differentiation point in the management capabilities built into our System Center suite.
Just this week, IDC noted that the virtualization battleground will be won with management tools: “Looking ahead, the most successful vendors in the virtualization market will be those that can automate the management of an ever-escalating installed base of virtual machines as well as provide a platform for long-term innovation.” (They also state that the year over year growth of Hyper-V is almost three times that of VMware.)
This battleground is where Microsoft stands out, with System Center’s unique ability to provide deep insight into the applications running within the virtual machines (VMs), to manage heterogeneous virtualized environments, and to serve as a strong on-ramp to private cloud computing. Unlike the solutions of all other virtualization vendors, Microsoft’s management solution can manage not only the virtualization infrastructure but the actual applications and services that run inside the virtual machines. This is key to leveraging the capabilities of virtualization and the private cloud – it’s the apps that really matter at the end of the day.
Of course, a management solution has to see all your assets to manage them. As InfoWorld and many others are starting to acknowledge, the days of a monolithic virtualization solution are over. That is why, three years ago, Microsoft added VMware management to System Center. This allowed one management infrastructure to manage all of the assets in IT, from physical to virtual, Microsoft to VMware, Windows to Linux. And with System Center 2012, we’ll extend that capability by enhancing our support for VMware and adding support for Citrix XenServer.
Virtualization is a major on-ramp to private cloud computing. As companies begin the shift to private cloud, they recognize that applications are the key services that the cloud delivers. Our customers—you—are telling us that the private cloud needs a new level of automation and management, beyond what traditional virtualization management offers. Last month at the Microsoft Management Summit, Brad Anderson talked about the advancements we’re building into System Center 2012 that will deliver against those needs.
And lastly, there is the issue of price. For the base virtualization layer, VMware’s solution is over three times the cost of the Microsoft solution. That’s a significant cost given the parity in performance and features that Hyper-V provides. But when you factor in management and the private cloud, the delta becomes even more pronounced. VMware’s new cloud and management offerings are all priced on a per-VM basis, unlike Microsoft’s, which are priced on a per-server basis. This means that the cost of the VMware solution will increase as you grow your private cloud – something you should take into account now.
I strongly encourage you to look into all that Microsoft has to offer in Virtualization and Private Cloud – and I’ll continue to discuss this theme in future posts.
David Greschler
The good news just keeps coming and we’re pleased to keep the momentum rolling with the latest release of our rock stable, feature rich, standalone Microsoft Hyper-V Server 2008 R2 with Service Pack 1! For those who need a refresher on Microsoft Hyper-V Server 2008 R2, it includes key features based on customer feedback such as:
For more info on Microsoft Hyper-V Server 2008 R2, read: http://blogs.technet.com/b/virtualization/archive/2009/07/30/microsoft-hyper-v-server-2008-r2-rtm-more.aspx. Service Pack 1 for Hyper-V Server 2008 R2 includes all the rollup fixes released since Microsoft Hyper-V Server 2008 R2 and adds two new features that greatly enhance VDI scenarios:
After installing the update, both Dynamic Memory and RemoteFX will be available to Hyper-V Server. These new features can be managed in a number of ways:
Dynamic Memory is an enhancement to Hyper-V R2 that pools all the memory available on a physical host and dynamically distributes it to virtual machines running on that host as necessary. That means that, based on changes in workload, virtual machines are able to receive new memory allocations without a service interruption through Dynamic Memory Balancing. In short, Dynamic Memory is exactly what it’s named. If you’d like to know more, I've included numerous links on Dynamic Memory below.
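To make the balancing idea concrete, here is a deliberately simplified sketch; it is not Hyper-V's actual Dynamic Memory algorithm, and the VM names and numbers are made up. The host's memory is handed out in proportion to each VM's reported demand, clamped to that VM's configured minimum and maximum.

# Toy illustration of memory balancing -- NOT Hyper-V's actual Dynamic Memory algorithm.
def balance_memory(host_memory_mb, vms):
    total_demand = sum(vm["demand_mb"] for vm in vms)
    allocations = {}
    for vm in vms:
        share = host_memory_mb * vm["demand_mb"] / total_demand
        # Stay within the per-VM minimum and maximum settings.
        allocations[vm["name"]] = int(min(max(share, vm["min_mb"]), vm["max_mb"]))
    return allocations

vms = [  # hypothetical guests
    {"name": "vdi-01", "demand_mb": 900,  "min_mb": 512,  "max_mb": 4096},
    {"name": "vdi-02", "demand_mb": 2400, "min_mb": 512,  "max_mb": 4096},
    {"name": "sql-01", "demand_mb": 6000, "min_mb": 2048, "max_mb": 8192},
]
print(balance_memory(16384, vms))  # re-run as demand changes to rebalance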
Configuring RemoteFX with Microsoft Hyper-V Server 2008 R2 SP1
Although using Dynamic Memory does not require any additional server-side configuration beyond installing the R2 SP1 update, enabling RemoteFX does require some additional configuration on the host. The exact steps for enabling RemoteFX are detailed below:
1) Verify the host machine meets the minimum hardware requirements for RemoteFX.
2) Verify the host has the latest 3D graphics card drivers installed before enabling RemoteFX.
3) Enable the RemoteFX feature using the following command line:
Dism.exe /online /enable-feature /featurename:VmHostAgent
4) From a remote machine running the full version of Windows Server 2008 R2 SP1, or a client OS running the latest version of RSAT, connect to the Hyper-V Server machine, create a Windows 7 SP1 virtual machine, and under “Add Hardware”, select “RemoteFX 3D Video Adapter”. Select “Add”.
If the “RemoteFX 3D Video Adapter” option is greyed out, it is usually because RemoteFX is not enabled or the 3D video card drivers have not been installed on the host yet. Before attaching the RemoteFX adapter, make sure to set user access permissions, note the computer name and enable Remote Desktop within the VM first. When the RemoteFX 3D video adapter is attached to the VM, you will no longer be able to connect to the VM local console via the Hyper-V Manager Remote Connection. You will only be able to connect to the VM via a Remote Desktop connection. Remove the RemoteFX adapter if you ever need to use the Hyper-V Manager Remote Connection.
How much does Microsoft Hyper-V Server 2008 R2 SP1 cost? Where can I get it?
Microsoft Hyper-V Server 2008 R2 SP1 is free and we hope you enjoy it! Here’s the download link: Microsoft Hyper-V Server 2008 R2 SP1.
----------------------------------------------------------------
Here are the links to a six-part series titled Dynamic Memory Coming to Hyper-V and an article detailing 40% greater virtual machine density with Dynamic Memory.
Part 1: Dynamic Memory announcement. This blog announces the new Hyper-V Dynamic Memory in Hyper-V R2 SP1. It also discusses the explicit requirements that we received from our customers. http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
Part 2: Capacity Planning from a Memory Standpoint. This blog discusses the difficulties behind the deceptively simple question, “how much memory does this workload require?” It examines the issues our customers face with regard to memory capacity planning and why. http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx
Part 3: Page Sharing. A deep dive into the importance of the TLB, large memory pages, how page sharing works, SuperFetch and more. If you’re looking for the reasons why we haven’t invested in Page Sharing this is the blog. http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx
Part 4: Page Sharing Follow-Up. Answers questions about Page Sharing, ASLR, and other factors affecting its efficacy. http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx
Part 5: Second Level Paging. What it is, why you really want to avoid this in a virtualized environment and the performance impact it can have. http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx
Part 6: Hyper-V Dynamic Memory. What it is, what each of the per-virtual machine settings does in depth, and how this all ties together with our customer requirements. http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx
Hyper-V Dynamic Memory Density. An in depth test of Hyper-V Dynamic Memory easily achieving 40% greater density. http://blogs.technet.com/b/virtualization/archive/2010/11/08/hyper-v-dynamic-memory-test-for-vdi-density.aspx
In speaking with folks interested in deploying virtualization, I tend to hear two things most:
1. They realize the benefits of adopting virtualization (especially the cost savings!), and the bridge virtualization can give them to private clouds in the future...
2. ...but there are roadblocks preventing them from realizing those benefits.
The roadblocks mentioned vary, but a few themes do stick out. We hear your concerns that with some vendors, scaling-up virtualization instances scales up your cost, and that’s a hard pill to swallow when you’re looking to get more out of every IT dollar. We also hear your concerns that large-scale virtualization could lead to VM sprawl and cumbersome manual IT process overhead.
We are committed to helping you with these and other concerns. Today, Microsoft and Dell are announcing a strategic partnership that will deliver joint management and virtualization solutions to help you get more out of your investment by integrating Dell’s hardware, storage and virtualization management technologies with Microsoft’s Windows Server 2008 R2 Hyper-V and System Center technologies. Customers will benefit from this “better together approach” with solutions that span physical and virtual infrastructure as well as application and workload layers.
These jointly engineered solutions will make virtualization more cost effective and accessible, integrate management across the stack, and set you on the path to private cloud – but you don’t have to wait to get started. Dell’s Business Ready Configuration, based on Microsoft’s Hyper-V Cloud Fast Track reference architecture, is available today and can help you start realizing the benefits of virtualization and begin your journey to private cloud.
You can find out more about today’s solutions, our new partnership and our plans for the future by visiting http://www.microsoft.com/virtualization/dell or on their blog here.
Ed Anderson
Hello Everyone –
Our sincere thanks to everyone who attended the Microsoft Management Summit last month in Las Vegas. Believe me when I say that this year’s sold-out event was one of the largest ever in terms of product announcements and news.
In addition to the multitude of announcements we made at MMS, we demonstrated that Hyper-V is the best platform to virtualize key business critical workloads – SQL Server, SharePoint and Exchange Server.
Case in point: Enterprise Strategy Group, a third-party analyst firm, conducted a detailed performance analysis of these key workloads virtualized on Hyper-V and posted its findings here, showing that Hyper-V can be used to virtualize Tier-1 data center applications with confidence.
A few of the key findings were:
In addition, you will find detailed collateral that includes technical guidance and best practices from Microsoft and our server partners for virtualizing SQL Server, SharePoint and Exchange on Hyper-V here.
This is more relevant now than ever, as companies look to move more of their traditional workloads to private clouds. By using the guidance above along with our private cloud offerings, including Hyper-V Cloud Fast Track, companies can accelerate their implementation of private cloud.
Stay tuned as we continue to provide additional guidance and training to help you on your private cloud journey.
Thanks!
Arun Jayendran
Group Product Manager, Virtualization and Private Cloud Team
One common refrain we hear from you is that you appreciate the fact that we’re driving down the costs of virtualization and adding more and more capabilities in the box, such as Live Migration (LM) and High Availability (HA). We’re happy to do it and we’re just getting started. :) Both LM and HA require shared storage, which can take the form of a SAS, iSCSI, or Fibre Channel SAN. For many environments this isn't an issue, but there are some specific scenarios where customers need LM and HA and the cost of a dedicated SAN is a blocker. For example,
Wouldn't it be great to have another option? We think so too. Today, as a big THANK YOU to our Windows Server 2008 R2 customers we are taking another step in lowering the barriers and making it even easier to take advantage of Windows Server 2008 R2 Hyper-V High Availability and Live Migration.
>> We are making the Microsoft iSCSI Software Target available AS A FREE DOWNLOAD. <<
What does this mean? It means you can install the Microsoft iSCSI software target on a Windows Server 2008 R2 system and use it as shared storage for Live Migration. Interested? Here are a few key pointers.
The full announcement about the release of the Microsoft iSCSI Software Target from Jose Barreto
The Microsoft iSCSI Software Target Download
Configuring the Microsoft iSCSI Software Target with Hyper-V blog from Jose
============================================================================
FAQ
============================================================================
Q: The Microsoft iSCSI Software Target is now free. Is it supported in a production environment?
A: Yes. The Microsoft iSCSI Software Target is supported in a production environment. The Hyper-V team regularly tests with the Microsoft iSCSI Software Target and it works great with Hyper-V.
============================================================================
Q: On what operating systems is the Microsoft iSCSI Software Target supported?
A: The Microsoft iSCSI Software Target is supported on Windows Server 2008 R2 Standard, Enterprise, and Datacenter editions.
============================================================================
Q: Can the free Microsoft Hyper-V Server 2008 R2 use the free Microsoft iSCSI Software Target?
A: Yes and No. Yes, Microsoft Hyper-V Server 2008 R2 can act as a client to access virtual machines via iSCSI. The way to do that is to type iscsicpl.exe at the command prompt to bring up the Microsoft iSCSI Initiator (client) and configure it to access an iSCSI Target (server). However, you can't install the Microsoft iSCSI Software Target on a Microsoft Hyper-V Server. The Microsoft iSCSI Software Target requires Windows Server 2008 R2.
Jeff Woolsey
Group Program Manager, Virtualization
Windows Server & Cloud
We just completed a great week at MMS 2011 in Las Vegas. To say it was a busy week would be a huge understatement. To everyone that attended, our sincere thanks.
I spoke to a lot of folks at the show and the feedback was overwhelmingly positive. Whether it was the announcements for:
…or the fact that every product in the System Center portfolio is being revved this year, everyone’s excited to see what the System Center 2012 releases have to offer. As usual, the hands-on labs and instructor-led labs continue to be some of the most popular offerings at MMS. MMS Labs offer folks the opportunity to kick the tires on all of the existing and newly released and Beta products. As usual the lines started early.
MMS 2010: Quick Refresher
For the second year in a row, all of the MMS Labs were 100% virtualized using Windows Server 2008 R2 Hyper-V and managed via System Center by our partners at XB Velocity and using HP servers and storage. MMS 2010 was the first year all of the labs were provided via virtualization. In previous years, the MMS Labs were all delivered using physical servers. To say moving from physical to virtual was a huge success would be an understatement. Here are a few apposite stats comparing MMS 2009 to MMS 2010 last year:
Power reduction of 13.9x on the servers:
Power reduction of 6.3x on the clients:
Finally, a total of 40,000 VMs were delivered over the course of MMS 2010 on 3 racks of servers. (Technically, it was 6 half racks, but since we used full racks this time, I’m calling it 3 racks so we’re making an apples to apples comparison…)
MMS 2010 Labs went so smoothly, that a similar setup was used for TechEd 2010, which performed just as well. After setting the bar so high, the team eagerly took on the challenge of improving on last year with MMS 2011. Specifically,
MMS 2011: Servers
Last year, we used HP ProLiant DL380 G6 Rack Servers. This year we decided to use HP BL460c G7 Blades in a c7000 enclosure. Moving to HP’s BladeSystem allowed us to:
From a memory standpoint, each blade was populated with 128 GB of memory, the same as each rack server last year. However, since we were using fewer servers this year (32 this year versus 41 last year), the total memory was reduced by over 1 Terabyte. At the same time, we delivered more labs running more virtual machines than ever.
>> By using Windows Server 2008 R2 SP1 Dynamic Memory, we were able to reduce the physical memory footprint by over 1 Terabyte and still deliver more labs running more virtual machines than ever. That’s a saving of ~$80,000. <<
Hyper-V Dynamic Memory rocks!
By making these changes, the team reduced the number of racks from 3 to 2. Here’s the side-by-side comparison of MMS 2010 versus MMS 2011 from a server standpoint:
You can see that across the board and in every possible metric, the MMS 2011 servers are a significant improvement over last year. The systems are more powerful, offer greater scalability, improved performance, reduced power consumption, and fewer cables to manage; and they reduced the physical footprint by a third.
MMS 2011: Storage
Last year the team used local disks in every server. This year, they decided to change their storage strategy. Here’s what they did.
This new storage strategy resulted in massive improvements. Using the HP I/O Accelerator Cards, total IOPS performance improved by ~23,600% (no, that’s not a typo) and using the SAN allowed the team to centrally manage and share master virtual machines; every blade was a target for every lab from every seat at MMS. This strategy provided an unprecedented amount of flexibility. If we needed an extra 20 Configuration Manager labs from 1:00-2:00 and then needed to switch those to Virtual Machine Manager labs from 2:00-3:00 or Operations Manager labs from 3:00-4:00 we could. That is the flexibility of private cloud.
Here’s the side-by-side comparison of MMS 2010 versus MMS 2011 from a storage standpoint:
The results were simply jaw-dropping.
>> On two racks of servers, we were able to provision 1600 VMs in three minutes or about 530 VMs per minute. <<
MMS 2011: Time for the Diagrams and Pictures
Here’s a picture of the two racks powering all of the MMS 2011 Labs. You can see them behind the Plexiglas. What you don’t see are the crowds gathered around pointing, snapping pictures, and gazing longingly…
Here’s a diagram of the rack with the front of the rack on the left and the back of the rack on the right. The blue lines are network cables and orange lines are fiber channel. Remember, last year we had 82 network cables; this year a total of 12 cables, 8 for Ethernet and 4 for Fiber Channel.
MMS 2011: Management with System Center
Naturally, the MMS team used System Center to manage all the labs, specifically Operations Manager, Virtual Machine Manager, Configuration Manager, and Service Manager.
Operations Manager 2012 Pre-Release was used to monitor the health and performance of all the Hyper-V labs running Windows and Linux. To monitor health proactively, we used the ProLiant and BladeSystem Management Packs for System Center Operations Manager. The HP Management Packs expose the native management capabilities through Operations Manager such as:
It looks like this:
In terms of hardware, System Center had its own dedicated hardware. System Center was deployed in virtual machines on a Hyper-V three-node cluster for HA and Live Migration if needed. (It wasn’t.) Networking was 1 Gb/E and teamed for redundancy. For storage, iSCSI over 1 Gb/E was used with multi-path I/O and the SAN was provided by the HP Virtual SAN Appliance (VSA) running within a Hyper-V virtual machine.
MMS 2011: More Data
Here’s more data…
Hyper-V Mosaic
One cool application that the Lab team wrote is called Hyper-V Mosaic. Hyper-V Mosaic is a simple application that displays thumbnails of all running virtual machines. The screenshot below was taken at 2 PM on Wednesday, March 23. At the time, 1,154 VMs were running on the 32 Hyper-V servers. The mosaic display is intended to give attendees a sense of the scale of the private cloud solution. All of the thumbnails are live and updating. (More on Hyper-V Mosaic below…)
Here’s a screenshot:
MMS 2011: Let’s Take this to 11
After a few days of running thousands of VMs in hundreds of labs without issue and seeing that the hardware wasn’t being taxed, the team was very curious to see just how many virtual machines they could provision. So, one night after the labs were closed, the team decided to see how many VMs they could run…
Here’s a screen shot from PerfMon:
MMS: Physical Footprint Over the Years…
In terms of physical footprint, the team was allocated 500 sq. feet for MMS 2011 Labs and needed only 17 sq. feet. Here’s how the footprint has dropped in the last three years:
MMS 2011: Success!
As you can see, across the board and in every possible metric, the MMS 2011 system was a significant improvement over last year. It’s more powerful, offers greater scalability, improved performance, and reduced power consumption, has fewer cables to manage, and uses a third less physical footprint.
From a Windows Server Hyper-V standpoint, Hyper-V has been in the market for three years, and this is just another example of how rock-solid, robust, and scalable it is. Hyper-V Dynamic Memory was a huge win for a variety of reasons:
From a management perspective, System Center was the heart of the system providing health monitoring, ensuring consistent hardware configuration and providing the automation that makes a lab this complex successful. At its busiest, over 2600 virtual machines had to be provisioned in less than 10 minutes. You simply can’t work at this scale without automation.
From a hardware standpoint, the HP BladeSystem Matrix is simply exceptional. We didn’t fully max out the system in terms of logical processors, memory, or I/O acceleration, and even at peak load running 2,000+ virtual machines, we weren’t taxing the system. Not even close. Furthermore, the fact that HP integrates with Operations Manager, Configuration Manager, and Virtual Machine Manager provides incredible cohesion between systems management and hardware. If you’re looking for a private cloud solution, be sure to give the HP Cloud Foundation for Hyper-V a serious look. Watch the video where Scott Farrand, VP of Platform Software for HP, talks about how HP and Microsoft are making private cloud computing real.
Finally, I’d like to thank our MMS 2011 Platinum sponsor, HP, for their exceptional hardware and support. The HP team was extremely helpful and busy answering questions from onlookers at the lab all week. I have no idea how we’re going to top this.
P.S. More pictures below…
Here’s a close up of one of the racks:
HP knew there was going to be a lot of interest, so they created full size cardboard replicas diagraming the hardware in use. Here’s the front:
…and here’s the back…
During the show, there was a huge display (made up of a 3x3 grid of LCDs). This display was located at the top of the elevator going from the first to the second floor at the Mandalay Bay Convention Center. Throughout the week it was used for messaging and hot items of the day. On the last day, the event switched the big display screen at the top of the elevator over to show the Hyper-V Mosaic display. This turned out to be a huge hit. People came up the elevator, stopped, stared and took pictures of the display screen. The only problem is that we inadvertently created a traffic jam at the top of the elevators. Here’s the picture:
At MMS 2011 this week, Brad Anderson, Corporate Vice President, Management and Security Division, talked about how, with private cloud computing, it is all about the application. One of the associated product announcements was the release of the System Center Virtual Machine Manager 2012 Beta. One of the features of this beta release is Microsoft Server Application Virtualization. Server Application Virtualization (or Server App-V for short) allows you to separate the application configuration and state from the underlying operating system.
Server App-V packages server applications into “XCopyable” images, which can then be easily and efficiently deployed and started using Virtual Machine Manager without an installation process. This can all be accomplished without requiring changes to the application code, thus mitigating the need for you to rewrite or re-architect the application. This virtualization process separates the application and its associated state from the operating system thereby offering a simplified approach to application deployment and servicing.
By virtualizing your on-premises applications with Server App-V, you will be able to decrease the complexity of application and OS updates and deployment. This capability is delivered through System Center Virtual Machine Manager 2012 Beta enabling private cloud computing. By abstracting the application from the operating system, an organization will have fewer application and OS images to maintain, thereby reducing the associated administrative effort and expense. On deployment, we will dynamically compose the application using the Server App-V package, the OS and hardware profiles, and the Virtual Hard Disk (VHD). You will no longer need to maintain VM Templates for every application you will deploy.
By now, it should be easy to see why Microsoft believes Server App-V is a core technology for the next generation of our datacenter and cloud management capabilities, and is central to the “service centric” approach to management that will be enabled with the System Center 2012 releases.
Back in December 2010, we also announced a private CTP (Community Technology Preview) of Server Application Virtualization targeted at delivering application virtualization capabilities on Windows Azure. This (along with the Windows Azure VM role) offers an opportunity to move some existing applications to Windows Azure. Specifically for Server App-V this means packaging an existing application and running it directly on the Windows Azure worker role. This capability is not part of the System Center Virtual Machine Manager 2012 Beta and to reiterate is available today only in a private CTP.
Which applications can Server Application Virtualization virtualize as part of System Center 2012?
Microsoft is prioritizing business applications such as ERP applications. As with Microsoft Application Virtualization for the desktop there is not a list of applications that Server Application Virtualization will support. However, there are a number of architectural attributes that the initial release of this technology has been optimized for. These attributes include:
Applications that do not have these attributes may be supported in later versions. The following applications or architectural attributes will not be supported in V1:
So, today, we encourage you to download the System Center Virtual Machine Manager 2012 Beta and give Server Application Virtualization a try against your existing applications! We look forward to your feedback.
We’re having a great week at a sold-out MMS 2011 in Las Vegas! To say it’s a busy week would be a huge understatement. Honestly, there’s so much happening, I can’t blog about all these topics and begin to do any of them justice. So I’m going to provide a high-level description with links to the details. Trust me--you’re going to want to check this out in depth. I’ll cover Target Corporation’s virtualization success, System Center 2012—including System Center Virtual Machine Manager (VMM) 2012 beta, System Center Advisor beta, System Center Configuration Manager 2012 beta—and the release of Windows Intune.
Target Corporation
First let’s start off with our great multi-year partnership with our valued customers at Target Corporation. Target has been a long-time Microsoft virtualization customer and partner. We’ve worked together to drive greater efficiencies and flexibility while reducing risk, improving agility, and lowering costs. To that end, Target has deployed Windows Server and System Center to manage and support over 15,000 virtual machines running mission critical applications in their stores and datacenter. Here’s a snippet from the case study:
With its attractive stores offering trendy merchandise at affordable prices, Target changed how consumers think about discount shopping. To help Target deliver on its “Expect More. Pay Less.” brand promise, Target chooses reliable, scalable, and cost-effective technology. That’s why the company is deploying Windows Server 2008 Datacenter and its Hyper-V virtualization technology to retire 8,650 servers and implement a two-servers-per-store policy. By 2012, Target’s entire store server infrastructure will be running on Hyper-V, which will support a total of 15,000 virtual machines running mission-critical applications. Target also deployed Microsoft System Center data center solutions to manage more than 300,000 endpoints across its retail network. With its Microsoft Virtualization solution, the company will save millions of dollars in hardware, electrical, and maintenance costs.
Want to know more? :) Then check out the detailed case study.
…and this blog post
System Center 2012 Releases
Now, let’s move to the System Center 2012 releases. To put it succinctly, in the next year, we’re developing 2012 releases for the entire System Center suite including Operations Manager (SCOM), SCCM, Data Protection Manager (DPM), Service Manager, Orchestrator (formerly Opalis), and VMM. We’re also adding two new products to the System Center family, System Center Advisor and Project Codenamed “Concero.”
You may want to read that last paragraph again. That’s a lot of cool stuff, and if you’re not using System Center already, there isn’t a better time to start. Let’s dive into the details.
System Center Virtual Machine Manager 2012 Beta
VMM 2012 moves VMM beyond being just a centralized product for managing virtual machines. VMM 2012 enables you to:
To learn more about VMM 2012, check out Rakesh Malhotra’s blog.
System Center Advisor Beta
System Center Advisor (formerly Microsoft codename “Atlanta”) is a cloud service that enables IT professionals to assess their server configuration and proactively avoid problems. With System Center Advisor, support staff are able to resolve issues faster by accessing current and historical configuration data, all with the security features that meet their needs. Additionally, System Center Advisor helps reduce downtime by providing suggestions for improvement, and notifying customers of key updates specific to their configuration.
Think of it this way: At Microsoft, we spend a lot of time developing best practices, whitepapers, and Knowledge Base articles (KBs) to ensure you know how we test and validate configurations and how you can achieve the best performance and efficiency from your deployments. We want to take that knowledge and make it easier for IT to access. With System Center Advisor, we’ve created a Windows Azure service so you can log into System Center Advisor from anywhere (no need to install anything on premise, except the Operations Manager Agent on the server) and manage your service. From there, System Center Advisor can examine your system and determine if you’re running your service optimally. It will recommend KBs, point out missing updates, and make QFE recommendations. Furthermore, the knowledge in Advisor is coming directly from the Microsoft product teams and will be updated regularly.
Finally, because System Center Advisor is a Cloud Service you get true anywhere access and will automatically scale to the size of your business. This is what the cloud is all about.
System Center Configuration Manager 2012 Beta
SCCM enables a powerful user-centric approach to client management. This approach addresses the growing reality: People want to move fluidly between multiple devices and networks. To help manage this, SCCM makes it easier for IT to support users with configurations tied to their identity instead of to individual systems or devices. As a result, IT can help people work the way they want, practically wherever they want—with a familiar experience across different devices and contexts.
In this release Configuration Manager has also significantly raised the bar for Virtual Desktop Infrastructure (VDI). Today, Configuration Manager is used in two out of three enterprises to manage enterprise desktops worldwide. We want to provide an intelligent way to improve the ability to manage VDI deployments (e.g., deploy App-V packages, manage patches, and more) with better granularity based on whether the VDI deployment uses pooled or personalized VMs, for example. Finally, we’re very pleased to announce improved integration with Citrix XenApp!
Windows Intune is now released!
The Windows Intune cloud service delivers management and security capabilities through a single Web-based console so you can keep your computers and users operating at peak performance from anywhere. Give your users the best Windows experience with Windows 7 Enterprise, or standardize your PCs on the Windows version of your choice. Windows Intune fits your business by giving you big tech results with a small tech investment. The result? Less hassle, and the peace of mind of knowing that your employees' PCs are well managed and highly secure. Windows Intune enables you to:
Because Windows Intune is a cloud service, you get true anywhere access. Windows Intune will automatically scale to the size of your business. This is what the cloud is all about. (Hmm, anyone seeing a trend here… :))
As you can see, there’s a lot happening at MMS 2011, and this blog really is just scratching the surface. Please take the time to review the links above. If you haven’t invested in System Center, now is the time! More on MMS soon!
Today Microsoft announces new IT solutions to streamline PC and device management, empower productivity and enable the modern enterprise, highlighting the release of Windows Intune, System Center Configuration Manager 2012 Beta 2 and the next version of Microsoft Desktop Optimization Pack (MDOP).
See the full press release here - and download System Center Configuration Manager 2012 Beta 2 today!
From the sold-out Microsoft Management Summit in Las Vegas, Brad Anderson posts about empowering our customers with Private Cloud computing. This post comes directly from the Official Microsoft Blog. View the related press release here.
Private Cloud Computing: It’s all About the Apps
This is a big week for Microsoft and many of our enterprise IT customers. We’re hosting the Microsoft Management Summit (MMS) in Las Vegas, a sold-out IT conference where I have the honor of delivering two keynote addresses to approximately 4,000 attendees.
The theme of MMS is “You. Empowered,” which is particularly relevant to cloud computing – especially private cloud computing. In this new computing paradigm, we see the ability for IT organizations to empower their companies to more effectively deliver the business applications they need to compete and succeed.
We have just undergone a period in IT where cost pressure drove an intense focus on server consolidation and virtualization. It is important, however, that our industry recognizes that the promise of cloud computing is different from virtualization. While virtualization benefits were “all about the infrastructure,” cloud computing will prove to be “all about the application”.
In fact, an agile infrastructure that is disconnected from the applications it supports simply does not serve the business. I hear this every time I speak with IT leaders. They tell me service level agreements for applications are top of mind, including speed of deployment, troubleshooting and overall visibility. Ultimately, they are looking to ensure that their apps reliably do what they are intended to do for employees and customers.
In cloud computing, I see great opportunity for these customers. If you think about it, cloud computing is, at its core, focused on applications. In the very large cloud centers, where we run the Windows Azure, Office 365, Xbox Live, and Bing services (among others), application service levels are all that matter. We have optimized everything – the facility, the infrastructure, the processes – to deliver on-demand, standardized IT services that run on shared resources. While Microsoft’s investment in these types of cloud centers is unique, the best practices can be directly applied to every IT environment.
So, how does a company get there? Certainly virtualization is an important step, and many organizations are at that phase in their “journey to the cloud.” (Case in point: retail leader Target Corporation using our Hyper-V technologies.) But how can you be sure your company is well positioned to move to cloud computing, ready to meet the business demands for faster, more reliable application services?
You start with a management infrastructure that is designed to empower IT. A management infrastructure that brings together the network, storage and computing islands into an integrated fabric. A management infrastructure that can create clouds with just a few clicks. Management that empowers the business to build, deploy and scale applications on their terms. And management infrastructure that is not just “virtual machine aware,” but delivers the application insight that your business depends upon, spanning both private and public cloud services.
That kind of management can make the difference between IT really delivering business value and just managing cost and complexity. IT administrators will find that it unlocks career opportunity as well. Just as we move from individual servers to infrastructure fabric, server administrators will see an opportunity to lead the journey to the cloud.
That’s what we’re showing at MMS today. (You will soon be able to view my keynote speeches and read our news announcements here.) Our management offerings are designed to help IT organizations build private cloud solutions that deliver application services, not just virtual machines. With our approach, the applications drive the IT infrastructure, not the other way around. The management technologies at the center give both IT managers and application managers throughout the company a unified view into applications in private, public and hybrid cloud scenarios.
Finally, with a Microsoft private cloud, customers can use the infrastructure they know and own today to build and deliver private cloud computing as a managed service, including other vendors’ tools, platforms and virtualization technologies. We emphasize putting our customers’ needs ahead of any particular technology.
If you’re interested in more details, I invite you to watch my keynote speeches here, or visit our Hyper-V Cloud web site. Let us know your thoughts!
Posted by Brad Anderson
Corporate Vice President, Management and Security Division, Microsoft
Hi everyone. I’m Fritz DeBrine, senior group manager for Server Technology & Enterprise Storage with Target Corporation. Here with me is Keith Narr, technical architect consultant in our Infrastructure Strategy & Architecture team. It’s been fun reading this blog and learning about how real-life workloads are being deployed on Windows Server 2008 R2 with Hyper-V and managed with System Center. We would like to share with you how we use these solutions in our stores and a brief overview of how we got there.
Today, inside each of our stores’ control rooms, we run Hyper-V on a pair of Dell R710s hosting our mission-critical guest-facing apps like point-of-sale, pharmacy, assets protection, SQL Server, and our in-store processor. Target has 1,755 stores around the country, and performance and availability are really critical. So is managing and securing all those servers.
We started virtualizing in stores back in 2006 with Microsoft Virtual Server (MSVS). We evaluated and compared VMware and Microsoft and, based on our analysis of the technologies at the time and on our close relationship with the Microsoft product teams, we felt Microsoft offered the highest value for our investment. The first virtual machine we deployed to our stores was actually a SUSE Linux instance running our pharmacy application. Things were running well, and we created three additional workloads on MSVS over the next 18 months. We migrated two more existing workloads, SQL Server and our in-store processor, and also created a new server instance within the store to host infrastructure services such as System Center Configuration Manager.
But then came the summer of 2009. We identified a performance bottleneck within one of our virtual machines which runs SQL Server. This performance bottleneck was affecting how long it took our store team members to perform certain job functions. And that brings us to how we deployed Hyper-V remotely to all our stores inside 45 days.
We noticed it was taking our team members longer to unload trucks at the stores. Our teams use handheld devices to scan merchandise as it arrives and the replenishment application was constrained. At Target, we want to deliver the right product to our store shelves so it’s there when our guest needs it. Because we were approaching our busy holiday season, we needed to move quickly. Rolling out new hardware just wasn’t an option.
We have a very short window of time to perform maintenance and upgrades. Our conversion was performed using a combination of Target- and Microsoft-written scripts that did an in-place upgrade, with no additional hardware deployed to the stores. The four existing workloads were migrated within a two-hour outage window at each store, with a failure rate across the chain below 3 percent. The diligence put into design and testing allowed us to complete conversion of the entire chain inside our tight timeline and ensure stability prior to our peak holiday season. Hyper-V satisfied the demands of our replenishment application and SQL Server, and helped us get those trucks unloaded on schedule.
Today we use System Center Operations Manager and System Center Configuration Manager to manage more than 15,000 servers and 29,000 workstations in our stores. Add to that more than 52,000 registers and thousands of kiosks. We also have System Center Configuration Manager agents on almost 70,000 mobile devices. Add in the rest of the Target enterprise, and we have more than 300,000 endpoints.
We continue to rely on Microsoft technologies and participate with Microsoft in their Technology Adoption Programs (TAP) whenever possible. Our membership within the Microsoft Hyper-V TAP program and the direct support of the Hyper-V product team really enabled this upgrade and the elimination of the performance bottleneck. We hope you’ve enjoyed reading our success story as much as we enjoy reading the others!
For additional details about Target’s use of Microsoft virtualization and management technology, read the full case study.
Virtualization Nation,
In my last blog, we announced the RTM (Release to Manufacturing) of Service Pack 1 for Windows 7 and Windows Server 2008 R2. The bits will be available for download on Feb. 22, so mark your calendars.
A frequent follow-up question to hit my inbox was from folks interested in a list of documented changes included in Windows 7 and Windows Server 2008 R2 SP1 in addition to Dynamic Memory and RemoteFX.
No problem.
Here’s the link to the documentation for Windows 7 and Windows Server 2008 R2 SP1 (KB976932). This KB includes:
While the version currently posted covers the Service Pack 1 Release Candidate, the final documentation for the RTM release will be available shortly.
VMware and ASLR Follow-Up
In my last blog, I discussed the importance of Address Space Layout Randomization (ASLR) as an effective, transparent security mitigation built into Windows 7. I noted that independent security analysts wholeheartedly agree on the importance of ASLR. I also stated we have serious concerns that VMware was recommending customers disable ASLR to achieve better density. Following that blog post, we were contacted by Jeff Buell from VMware.
From Jeff Buell, Perf Engineering at VMware
I'm from the performance engineering team at VMware. We take both performance recommendations and security very seriously. As you state, ASLR is a good security feature. VMware has never recommended disabling it. If you have a reference saying otherwise, I'd love to see it.
First, let me say thank you to Jeff Buell for his swift response. I’m glad to see that Microsoft and VMware Engineering agree that ASLR is a good security feature and that disabling ASLR is a terrible suggestion. Jeff appears to be concerned and willing to rectify this situation. Again, thank you Jeff. Here are the specifics.
Looks Like It Started Here…
It appears that the suggestion to disable ASLR began right here on VMware’s public blog page.
http://blogs.vmware.com/view/2009/04/vista-and-vmware-view.html
The post casually mentions that disabling ASLR will “lower overall security,” and then continues to make things worse by telling people to disable NX and DEP, two additional security mitigations. Because of this post, others picked up on this recommendation (such as in VMware’s community forums) and promoted this idea without anyone from VMware disputing this unfortunate suggestion:
http://communities.vmware.com/message/1294525#1294525
At first, I thought these were isolated incidents, but then I started receiving regular inquiries from customers who said they were considering a VDI deployment and specifically asked whether Microsoft had a recommendation or support stance regarding ASLR. Considering that ASLR is transparent and you have to go out of your way to disable it (you have to be an admin and then edit the registry), I knew this wasn't isolated anymore.
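As an aside, because the only way to turn ASLR off is a deliberate registry edit, it's straightforward to audit a guest image for that change. Here's a minimal Python sketch, assuming the commonly cited MoveImages value under the Memory Management key is the override in question; if the value is absent, the Windows default (ASLR enabled) applies.

# Minimal sketch: check for the registry override commonly used to force ASLR off.
# Assumption: the MoveImages DWORD under Session Manager\Memory Management is the
# value in question; absence of the value means the Windows default (ASLR on).
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def aslr_registry_override():
    """Return the MoveImages override value if present, else None (default behavior)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "MoveImages")
            return value  # 0 is the setting used to force image relocation (ASLR) off
        except FileNotFoundError:
            return None

if __name__ == "__main__":
    override = aslr_registry_override()
    if override is None:
        print("No MoveImages override found; the Windows default (ASLR enabled) applies.")
    else:
        print("MoveImages override present with value:", override)

Run inside the guest, this gives a quick way to confirm nobody has quietly followed that advice on your VDI images.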
Finally, at VMworld 2010 in Europe, VMware Director of Product Marketing, Eric Horschman, delivered session TA8270 titled, Get the Best VM Density From Your Virtualization Platform.
In this session, a slide was presented with the following:
Best practices
> Blame storage first - avoid bottlenecks
> Upgrade to vSphere 4.1 for memory compression
> Install VMware tools in guest OSes to enable ballooning
> Protect your critical VMs
> Add VMs until “active” memory overcommit is reached
> Allow DRS to balance VMs across your cluster
Advanced techniques
> Use flash solid state disks for ESXi swapfile datastore (for overcommitted hosts)
> Adjust HaltingIdleMsecPenalty (KB article 1020233)
> Consolidate similar guest OSes and applications to assist Transparent Page Sharing
> Disable ASLR in windows 2008/Windows 7 guests for VDI workloads
With a VMware director promoting such poor advice, we were concerned that our customers were putting themselves at undue risk, and we wanted to clearly articulate the Microsoft position. There is an apparent disconnect between VMware engineering and marketing on this topic, and I’m glad to see the engineering team speak out.
Again, my thanks to Jeff Buell from VMware Engineering for his quick response to this matter. I’m going to assume that VMware will clarify their position internally and message it appropriately externally by fixing these links. I’d be relieved to see VMware stop recommending that users disable fundamental security mitigations such as ASLR.
In my next blog, I’ll discuss some points you should consider when determining which guest OS to deploy for VDI.
Jeff Woolsey
Group Program Manager, Hyper-V
Windows Server & Cloud
So you heard about the Hyper-V Cloud Fast Track program and wonder … what exactly is it? Is it marketecture or is there some meat to it? What do these “pre-architected, pre-validated” solutions consist of, and how were those decisions arrived at? What was the architecture and validation methodology? Well my friend, this post is for you.
David Ziembicki and I authored the Hyper-V Cloud Fast Track Reference Architecture and Validation Guide, which is used to align Microsoft and OEM partners on a common architecture and which OEMs re-use and expand upon for their own Reference Architectures. Dave and I are Solution Architects within Microsoft Services in the US Public Sector organization.
A few details to get out of the way: First, each OEM (HP, Dell, IBM, Hitachi, Fujitsu, NEC) brings something unique to the table. Each OEM partner will be jointly publishing with Microsoft their Hyper-V Cloud Fast Track Reference Architecture, which will detail the hardware specifications, configurations, detailed design elements, and management additions. Available to you right now are some great resources such as solution briefs, a new white paper, and Fast Track partner web sites. In this post I will share with you the common architecture elements that apply to all program partners and how those decisions were made.
Next, I’d like to direct you to the Private Cloud TechNet Blog, where I have detailed the Principles and Concepts that underlie the architecture of this program. Those principles are actually pretty lofty goals, and the program will address more and more of them over time. A brief preview of the concepts is listed below. I feel it’s important to provide a glimpse of them now because they are what Hyper-V Cloud Fast Track aims to achieve. Please reference the post for deeper insight on these.
Private Cloud Concepts
Resiliency over Redundancy Mindset – This concept moves the high-availability responsibility up the stack from hardware to software. This allows costly physical redundancy within the facilities and hardware to be removed and increases availability by reducing the impact of component and system failures.
Homogenization and Standardization – by homogenizing and standardizing wherever possible within the environment, greater economies of scale can be achieved. This approach also enables the “drive predictability” principle and reduces cost and complexity across the board.
Resource Pooling – the pooling of compute, network, and storage that creates the fabric that hosts virtualized workloads.
Virtualization – the abstraction of hardware components into logical entities. I know readers are of course familiar with server virtualization, but this concept speaks more broadly to benefits of virtualization across the entire resource pool. This may occur differently with each hardware component (server, network, storage) but the benefits are generally the same, including lesser or no downtime during resource management tasks, enhanced portability, simplified management of resources, and the ability to share resources.
Fabric Management – a level of abstraction above virtualization that provides orchestrated and intelligent management of the fabric (i.e., datacenters and resource pools). Fabric Management differs from traditional management in that it understands the relationships and interdependencies between the resources.
Elasticity – enables the perception of infinite capacity by allowing IT services to rapidly scale up and back down based on utilization and consumer demand.
Partitioning of Shared Resources – while a fully shared infrastructure may provide the greatest optimization of cost and agility, there may be regulatory requirements, business drivers, or issues of multi-tenancy that require various levels of resource partitioning.
Cost Transparency – provides insight into the real costs of IT services enabling the business to make informed and fair decisions when investing in new IT applications or driving cost-reduction efforts.
Hyper-V Cloud Fast Track Architecture Overview
With the principles and concepts defined, we took a holistic approach to the program: we thought first about everything that would be ideal in an integrated private cloud, then pared down from there to what now forms the first iteration of the offering. As stated, future versions will address more and more of the desired end state.
Scale Unit
A Scale Unit represents a standardized unit of capacity that is added to a Resource Pool. There are two types of Scale Unit: a Compute Scale Unit, which includes servers and network, and a Storage Scale Unit, which includes storage components. Scale Units increase capacity in a predictable, consistent way, allow standardized designs, and enable capacity modeling.
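As a quick illustration of what capacity modeling with Scale Units looks like, here's a tiny Python sketch. The VMs-per-unit figure is an assumption made up for the example; it is not part of the Fast Track specification.

# Hypothetical capacity-modeling sketch: once you know how many VMs one
# standardized Compute Scale Unit hosts, growth planning is unit arithmetic.
import math

VMS_PER_COMPUTE_SCALE_UNIT = 120  # assumed per-unit capacity, for illustration only

def scale_units_needed(target_vms):
    # Round up: partial units still require a whole standardized unit.
    return math.ceil(target_vms / VMS_PER_COMPUTE_SCALE_UNIT)

if __name__ == "__main__":
    for target in (300, 1000, 5000):
        print(target, "VMs ->", scale_units_needed(target), "Compute Scale Units")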
Server Hardware
The server hardware itself is more complex than it might seem. First, what’s the ideal form factor: rack-mount or blade? While we certainly have data showing that blades have many advantages for virtualized environments, they can also add cost and complexity for smaller deployments (4-12 servers). This is one decision where we provided guidance and experience, but ultimately left it to each OEM to decide when blades make sense for their markets. Most OEMs have both blade and rack-mount options and will be offering both through this program.
For CPU, all servers will have a minimum of two-socket, quad-core processors, yielding 8 logical processors (LPs). Of course, many of the servers in the program will have far more than 8 LPs; 12-24 will likely be most common, as that’s the current price/performance sweet spot. (Hyper-V, by the way, supports up to 64 LPs.) Why an 8-LP minimum? Although the supported ratio of virtual processors to logical processors is 8:1, real-world experience with production server workloads has shown more conservative average ratios, so we concluded 8 LPs should be the minimum capacity starting point.
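To make the sizing math concrete, here's a small illustrative Python sketch. The 8:1 figure is the supported ceiling from the paragraph above; the 4:1 figure is purely a hypothetical example of a more conservative planning ratio, not a number from the program's guidance.

# Rough virtual-processor capacity from socket/core counts and a planning ratio.
def logical_processors(sockets, cores_per_socket, hyperthreading=False):
    # Count logical processors (LPs); hyper-threading doubles the count.
    lps = sockets * cores_per_socket
    return lps * 2 if hyperthreading else lps

def max_virtual_processors(lps, vp_per_lp):
    # Upper bound on virtual processors for the chosen VP:LP planning ratio.
    return lps * vp_per_lp

if __name__ == "__main__":
    lps = logical_processors(sockets=2, cores_per_socket=4)  # program minimum: 8 LPs
    print(lps, "logical processors")
    for ratio in (8, 4):  # 8:1 supported ceiling; 4:1 hypothetical conservative ratio
        print(f"{ratio}:1 ratio -> up to {max_virtual_processors(lps, ratio)} virtual processors")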
Storage
Storage is where, for me anyway, things begin to get really interesting. There are just so many exciting storage options for virtualized environments these days. Of course, it’s also a design challenge: which features are the highest priority and worth the investment? We again took a holistic approach and then allowed the partner to inject their special sauce and deep domain-expertise. Here’s the list of SAN storage features we targeted for common architecture criteria:
o High Availability
o Performance Predictability
o Storage Networking
o Storage Protocols
o Data De-duplication
o Thin Provisioning
o Volume Cloning
o Volume Snapshots
o Storage Tiering
o Automation
One of the really cool advantages of this program is that it allows multiple best-of-breed private cloud solutions to emerge, each taking advantage of its vendor’s strengths. You can only find this in a multi-vendor, multi-participant program.
On the Hyper-V side we provided common best-practices for Cluster Shared Volume configuration, sizing, and management as well as considered such things as MPIO, Security, I/O segregation, and more.
Network
Networking presents several challenges for private cloud architectures. Here again we find a myriad of choices from the OEMs and are able to leverage the best qualities of each where it makes sense. However, this is also an area where we sometimes find IT happening for IT’s sake (i.e., complex, advanced networking implementations adopted because they are possible, not because they are necessary to support the architecture). We need to look at the available products and features and introduce complexity only when it’s justified, because, as we all know, increased complexity often brings increased risk. Some of the items we addressed include:
o Networking Infrastructure (Core, Distribution, and Access Switching)
o Performance Predictability and Hyper-V R2 Enhancements (VMQ, TCP Checksum Offload, etc.)
o Hyper-V Host Network Configuration
o 802.1q VLAN Trunks
o NIC Teaming
NIC Teaming in particular can be tricky to get right, since there are different vendor solutions, each with potentially different features and configuration options. It’s therefore an example of a design element that benefits greatly from the Hyper-V Cloud Fast Track program, which takes the guesswork out of NIC Teaming by providing a best-practice configuration tested and validated by both Microsoft and the OEM.
Private Cloud Management
Let’s face it, cloud computing places a huge dependency on management and operations. Even the most well designed infrastructure will not achieve the benefits promised by cloud computing without some radical systems management evolution.
Again leveraging the best-of-breed advantage, a key element of this architecture is that the management solution may be a mix of vendor software. Notice I said may. That’s because an OEM that is a big player in the systems management market may have chosen to use its own software for some layers of the management stack, while others may have chosen an exclusively Microsoft solution consisting of System Center, Forefront, Data Protection Manager, and so on. I will not attempt to cover each possible OEM-specific solution. Rather, I just want to point out that we recognize the need for, and benefit of, OEMs being able to provide their own elements of the management stack, such as backup and the self-service portal. Some components are, of course, essential to the Microsoft virtualization layer itself and are non-replaceable, such as System Center Virtual Machine Manager and Operations Manager. Here is a summary of the management stack included:
o Microsoft SQL Server
o Microsoft System Center Virtual Machine Manager and Operations Manager
o Maintenance and Patch Management
o Backup and Disaster Recovery
o Tenant / User Self-Service Portal
o Storage, Network and Server Management
o Server Out of Band Management Configuration
The management layer is critical; it is what transforms the datacenter into a dynamic, scalable, and agile resource, enabling massive capex and opex cost reduction, improved operational efficiency, and increased business agility. Any one of these components by itself is great, but it’s the combination of them all that qualifies the solution as a private cloud.
Summary
There are several other elements I would love to delve into, such as Security and Service Management, but this post could go on for quite a while. I’ll leave the remainder for the Reference Architecture Whitepaper, which we just published, as well as the OEM-specific Reference Architectures published by the OEMs themselves.
I hope you found this article useful and that it sheds some light on the deep and broad collaborative effort we have embarked upon with our partners. Personally, I am very happy that this program was created and am confident it will fill a great need emerging in datacenters everywhere.
Adam Fazio, Solution Architect, Microsoft
On behalf of the Windows Server and Cloud teams at Microsoft, I’m pleased to announce that today we released Service Pack 1 for Windows Server 2008 R2 and Windows 7 – adding two new virtualization capabilities: RemoteFX and Dynamic Memory. SP1 will be made generally available for download on February 22. To learn more about RemoteFX, take a look at Michael Kleef’s blog. I’ll cover Dynamic Memory and a few other updates you’ll want to understand.
Let’s start with Dynamic Memory. An enhancement to Hyper-V R2, Dynamic Memory pools all the memory available on a physical host and then dynamically distributes it, as it is needed, to the virtual machines running on that host. With dynamic memory balancing, virtual machines can receive new memory allocations, based on changes in workload, without a service interruption. In short, Dynamic Memory is exactly what its name implies (I wrote a six-part blog series on Dynamic Memory here: Part 1, 2, 3, 4, 5, and 6).
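To picture the idea (and only the idea), here's a deliberately simplified toy model in Python. This is not how the Hyper-V balancer is actually implemented; it just illustrates the concept of giving every VM its startup allocation and sharing the remaining host pool in proportion to reported demand, capped at each VM's maximum.

# Toy model of demand-based memory distribution across VMs on one host.
# Illustration of the concept only, not the actual Hyper-V Dynamic Memory logic.
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    startup_mb: int   # memory assigned at boot
    maximum_mb: int   # ceiling the VM may grow to
    demand_mb: int    # current memory pressure reported from the guest

def distribute(host_pool_mb, vms):
    # Give every VM its startup amount, then share the spare pool in
    # proportion to unmet demand, never exceeding each VM's maximum.
    alloc = {vm.name: vm.startup_mb for vm in vms}
    spare = host_pool_mb - sum(alloc.values())
    unmet = {vm.name: max(0, min(vm.demand_mb, vm.maximum_mb) - vm.startup_mb) for vm in vms}
    total_unmet = sum(unmet.values())
    if spare > 0 and total_unmet > 0:
        for vm in vms:
            share = spare * unmet[vm.name] // total_unmet
            alloc[vm.name] += min(share, unmet[vm.name])
    return alloc

if __name__ == "__main__":
    vms = [Vm("vdi-01", 512, 2048, 700),
           Vm("vdi-02", 512, 2048, 1500),
           Vm("sql-01", 2048, 8192, 6000)]
    print(distribute(8 * 1024, vms))  # 8GB pool shared across three VMs (values in MB)

With an 8GB pool the combined demand slightly exceeds what's available, so each VM ends up with a proportional share of its unmet demand rather than its full request.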
Why is Dynamic Memory so important?
High praise from the folks over at brianmadden.com:
I do think that, looking at memory management from a VDI perspective, Hyper-V fits the bill just as well as ESX does, if not better.
Is Hyper-V Dynamic Memory any good for VDI? Definitely! I love it.
Making the most of Dynamic Memory can really be worth your while. In fact Microsoft has seen improvements of up to 40% (!) in density for VDI workloads.
With VMware it's also easier to oversubscribe the physical memory of the host (note how I didn't use the word overcommit!) and I think that's a risk in most current VDI deployments. No matter how you slice it or dice it, when RAM is oversubscribed it introduces a higher probability of paging. This in return means a huge increase in IOPS. I guess it should go without saying that this is something you should avoid at all costs in VDI environments.
Dynamic Memory takes Hyper-V to a whole new level. Dynamic Memory lets you increase virtual machine density with the resources you already have—without sacrificing performance or scalability. Ultimately it helps customers get the most bang for their technology bucks, which is a critical part of Microsoft’s virtualization and infrastructure strategy. Without that, you’ll keep pouring money into complex solutions you might not need.
Dynamic Memory and Virtual Desktop Infrastructure
Along the lines of determining what’s critical, in our lab testing, with Windows 7 SP1 as the guest operating system in a Virtual Desktop Infrastructure (VDI) scenario, we saw a 40% increase in density from Windows Server 2008 R2 RTM to SP1. We achieved this increase simply by enabling Dynamic Memory. More importantly, this increase in density didn’t require the user to make changes to the guest operating system at the expense of security, as is the case with competitive offerings.
Full stop. I want to reemphasize that last sentence.
Let me explain. In our testing of Dynamic Memory, we’ve also been reviewing VDI deployments and best practice guidance offered by VMware and others. We’ve seen some interesting ideas, but unfortunately we’ve also seen some questionable (if not terrible) suggestions such as this one that we’ve heard from a number of VMware folks: Disable Address Space Layout Randomization (ASLR).
The Importance of ASLR
ASLR is a feature that loads system DLLs and executables at a different location every time the system boots, which makes it much more difficult for malware to find out where APIs are located. Early in the boot process, the Memory Manager picks a random DLL image-load bias from one of 256 64KB-aligned addresses in the 16MB region at the top of the user-mode address space. As DLLs that have the new dynamic-relocation flag in their image header load into a process, the Memory Manager packs them into memory starting at the image-load bias address and working its way down.
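Just to illustrate the arithmetic in that description: 256 slots at 64KB alignment cover exactly the 16MB region mentioned. The Python snippet below is a toy illustration of picking such a bias, with a made-up top-of-user-space constant; it is not the Memory Manager's actual code or constants.

# Toy illustration of the numbers above: 256 possible 64KB-aligned bias slots
# span exactly 16MB. USER_SPACE_TOP is an assumed, illustrative constant.
import random

SLOT_SIZE = 64 * 1024                 # 64KB alignment
SLOT_COUNT = 256                      # one of 256 possible slots
REGION_SIZE = SLOT_COUNT * SLOT_SIZE  # = 16MB, the region described above
USER_SPACE_TOP = 0x7FFF0000           # illustrative 32-bit user-space top, not the real value

def pick_image_load_bias(rng):
    # Pick one of the 256 64KB-aligned addresses in the top 16MB of user space.
    slot = rng.randrange(SLOT_COUNT)
    return USER_SPACE_TOP - REGION_SIZE + slot * SLOT_SIZE

if __name__ == "__main__":
    assert REGION_SIZE == 16 * 1024 * 1024
    print(hex(pick_image_load_bias(random.Random())))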
ASLR is an important security protection mechanism introduced in Windows Server 2008 and Windows Vista. ASLR has helped protect customers from malware and has been further improved in Windows Server 2008 R2 and Windows 7. Best of all, you don’t need to do anything to take advantage of ASLR: It’s enabled by default, it’s transparent to the end user and it just works. In fact, third parties agree that Windows 7 has taken another massive leap forward:
Sophos Senior Security Advisor Chet Wisniewski says "ASLR was massively improved in Windows 7. This means that libraries (DLL’s) are loaded into random memory addresses each time you boot. Malware often depends on specific files being in certain memory locations and this technology helps stop buffer overflows from working properly."
For the record, Microsoft does not recommend disabling ASLR. So, why would anyone recommend disabling ASLR? Read on.
Project VRC
Let’s take a look at a report published by an independent third party, Project Virtual Reality Check (VRC).
The folks at Project VRC have developed their own test methodology and have been working in the industry to better understand the complexities of virtual desktop and remote desktop session capacity planning and deployment. In their latest tests, “Project VRC Phase III (here),” the Project VRC team specifically tested enabling and disabling ASLR to see how it impacted VMware’s density. So what did they find?
Project VRC Phase III, Page 35
It must be noted that Project VRC does not blindly recommend disabling ASLR. This is an important security feature, and it is enabled by default since Windows Vista and Windows [Server] 2008 (Windows XP and Windows [S]erver 2003 do not support ASLR). However, with VDI workloads, the impact could be potentially larger. Every desktop session is running an individual desktop OS instance. In comparison to Terminal Services, a VDI workload runs a magnitude of OS’s more to serve desktops to end-users. Potentially the performance impact of ASLR could be larger.
Project VRC evaluated the impact of ASLR on a Windows 7 desktop workload (120 VM’s pre-booted, 1GB memory, 1vCPU per VM, 2GB Page file fixed, VRC optimizations, ESX 4.0 Update 2, HIMP=100):
Figure 1: VMware overcommit doesn’t work well with ASLR
By disabling ASLR, the VSImax score was 16% higher. In comparison to the 4% increase witnessed on Terminal Services, the increase in capacity with Windows 7 VDI workloads is significantly higher. This does not come as a total surprise: the amount of VM’s running is also significantly higher. Although it is difficult to generally recommend disabling ASLR, the impact on Windows 7 is considerable.
In short, VMware recommends disabling a fundamental security feature in Windows because their Memory Overcommit doesn’t work well with ASLR. Not a good idea. Let’s see how Hyper-V R2 SP1 Dynamic Memory fares.
Hyper-V R2 SP1 Dynamic Memory & ASLR
We decided to perform similar tests (not identical, so please don’t make a direct comparison with the VMware data; the hardware was different) using the same Project VRC Phase III test methodology. The point of this test was to run Windows 7 as a Hyper-V guest with and without ASLR enabled in the guest OS and to compare the delta. With VMware there was a considerable delta. What about with Hyper-V?
Here are the results:
Figure 2: Hyper-V works great with ASLR
You can see that with Hyper-V, the results are virtually identical whether ASLR is on or off. That’s because Dynamic Memory was designed from the ground up to work with ASLR and other advanced memory technologies. You won’t hear anyone from Microsoft suggest you turn off ASLR.
Personally, I am convinced Dynamic Memory is a big step forward. I say this because it literally changes the way I create and deploy virtual machines (VMs). I assign the VM its startup value and then I simply don’t worry any more. Dynamic Memory effectively solves the problem of “how much memory do I assign to my server?” as discussed here. The approach is both efficient and elegant.
I should also point out that Hyper-V Dynamic Memory will be available in Microsoft Hyper-V Server 2008 R2 SP1, the free download of the stand-alone hypervisor-based virtualization product.
In addition to SP1, we’ve been very busy with our virtualization technology updates and want to be sure you’re aware of the latest:
Higher Virtual Processor to Logical Processor Ratios: If you’re running Windows Server 2008 R2 SP1 and running Windows 7 as the guest, we’ve upped the supported ratio of virtual processors to logical processors from 8:1 to 12:1. This is simply more goodness for VDI deployments. This change is documented here.
Higher Cluster Density and Limits: Back in June 2010, the Microsoft Failover Cluster team upped the support limit to 384 virtual machines per node to match the Hyper-V maximum of up to 384 virtual machines per server. In addition, the overall number of running VMs per cluster has been bumped to 1000 VMs in a cluster. Read more here.
New Linux Integration Services: Back in July 2010, we released new Linux Integration Services, which added support for more Linux distributions and new capabilities, including:
And while this was happening, we’ve been powering our own tradeshows (examples: MMS 2010, TechEd 2010) with Hyper-V and System Center—with tremendous benefits.
=====================================================================
P.S. Here are the links with descriptions to the six part series titled Dynamic Memory Coming to Hyper-V, and an article detailing 40% greater virtual machine density with DM.
Part 1: Dynamic Memory announcement. This blog announces the new Hyper-V Dynamic Memory in Hyper-V R2 SP1. It also discusses the explicit requirements that we received from our customers. http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
Part 3: Page Sharing. A deep dive into the importance of the TLB, large memory pages, how page sharing works, SuperFetch and more. If you’re looking for the reasons why we haven’t invested in Page Sharing, this is the blog. http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx
My colleague David Greschler wrote a blog post for David Marshall's blog, VM Blog. DavidGr's blog post is part of a series this month on VM Blog where vendor representatives write about 2011 predictions and looking ahead. See DavidGr's post here. Following is an excerpt:
2010 was the year of the cloud. We saw some massive changes across the industry as IT decision makers and technology vendors wrestled with the shift to cloud computing. In particular, the industry had to grapple with many differing - and often conflicting - definitions of cloud computing. Certainly virtualization was often part of the discussion; however, 2010 brought a broader understanding that virtualization was no longer the end of the road, but instead a helpful stepping stone to the agile, responsive world of cloud computing.
With an understanding of the cloud possibilities established, I believe 2011 is the year that IT departments will really begin to develop their cloud plans for implementation. Gartner has estimated that worldwide cloud services revenue (including public and private services) will reach $148.8 billion in 2014.
As I see it, virtualization experts are poised to help their companies make that shift from virtualization to cloud computing and shape the cloud computing strategy that matches their needs.
Patrick
With the first day’s keynote out of the way, online chatter about TechEd Europe 2010 turned to session reporting, the announced naming of Configuration Manager 2012 (formerly “v.Next”), and continued absorption of Microsoft’s initiatives in cloud computing, Hyper-V Cloud in particular.
To give you a kind of wrap up of the goings-on of the past several days, and, as previously, to give a virtual shout-out to some of the more notable voices from the community, here are some blog posts we found interesting, along with a small selection of Twitter personalities. I’m also including an aggregation of the related videos that we’ve put together, including those that TechNet Edge’s David Tesar, Joey Snow, and Adam Carter (BOMB) shot on site in Berlin….
See the System Center Team Blog for the full post.
I recommend anyone who is considering virtualizing MS Exchange read Jim Lucey's blog post and the comments below it. VMware's HA guidance on virtualizing Exchange could be misinterpreted, resulting in increased storage costs and placing data at risk.
Here's an excerpt:
We love that our customers are excited to deploy Exchange Server within virtualized environments. While VMware leveraged Exchange performance and sizing tools to provide guidance, their recommendations casually tiptoe around Microsoft system requirements, unnecessarily increasing storage and maintenance costs and putting customers at risk. Exchange Server 2010 provides choice and flexibility in deployment options. We are committed to virtualization technology deeply, and will continuously review as the various technologies evolve. We hope to do even more in the future with our roadmap. As we work to test and update guidance pertaining to Exchange running under virtualized environments, our current system requirements are in place to give customers the most reliable email infrastructure possible.
We’ve been seeing quite a bit of chatter on the “internets” about the goings-on at TechEd Europe 2010 in Berlin, and it’s great to see the diverse impressions of attendees as the event happens.
The announcements and news about a range of things—including Hyper-V Cloud, the VMM SSP 2.0 release, and RC availability of Forefront Endpoint Protection 2010—garnered comment and tweets aplenty. Here are a few from the community that stood out for us among the hubbub of official announcements and journalist/analyst reports (in no particular order)….
Check out the full post on the System Center Nexus blog for the rest of the story.
- Server & Cloud Platform Team
Hi, I’m Michael Kleef, senior technical product manager within the Windows Server and Cloud division.
As Brad Anderson and I discussed at the TechEd Europe keynote today, Dynamic Memory, a new feature in Windows Server 2008 R2 SP1, can increase Virtual Desktop Infrastructure (VDI) densities by 40% compared to Hyper-V in Windows Server 2008 R2 and also well above a leading industry solution. It’s also not just a benefit to VDI. Our Technology Adoption Program (TAP) customer data also highlights that other server workloads benefit from Dynamic Memory with gains of between 25% and 50% depending on the specific workload and usage pattern.
This data point came from a series of tests, on different hardware vendor platforms, that we have run at scale in our test labs in Redmond. To provide some additional technical details, I want to take a moment to explain what we focused on in testing and provide a high level summary of the test methodology we used, and the results we found. In the near future, we expect to release a whitepaper that goes into more detail of the tests including specific opportunities to increase performance and response.
Scope
From previous capacity planning data, we already knew that disk I/O is the first bottleneck to be hit in VDI performance, followed by memory as a ceiling on density (not necessarily performance), and finally processor.
Our primary goal was simply to understand how Dynamic Memory influences the memory ceiling to density, and realistically, by how much. Secondary goals were to understand how XenDesktop 4 functioned with Hyper-V R2 SP1 and its different approach to storage using Citrix Provisioning Services.
Test Framework
As part of this test we wanted to avoid using internal Microsoft test tools and instead use an industry-standard test framework that the bulk of the industry is currently using. We chose Login Consultants’ LoginVSI test framework. More details can be found here; essentially, this test toolkit attempts to mirror user behavior through automation by starting various applications, entering data, pausing, printing, opening web content and then looping the test as many times as necessary to get an idea of maximum system performance. It provides different user profiles, from light to medium and heavy use. We chose the medium-use profile, as this is primarily what others in the industry tend to test against.
In our first pass of the tests, we set up a basic test infrastructure to get an early glimpse into how we performed.
We initially used an HP DL380 G5 server with dual quad-core hyper-threaded (Nehalem) processors, 110GB of RAM and an iSCSI target backed by a 42-disk shared storage array. The storage was configured as RAID 0 for maximum read and write throughput. While this server has 110GB, we wanted to limit the test to 96GB of RAM. The reason is that the price/performance curve right now is optimal at the 96GB level. Beyond 96GB the price for DIMMs increases exponentially – and additionally we knew we were going to test other servers that have 96GB – so we wanted a consistent memory comparison across multiple hardware platforms.
Once we had tested against the HP server, we planned to re-run the same tests against Dell’s newest blade servers.
We tested Dell’s M1000e blade chassis, configured with 16 M610 blades, each having dual hex-core (Westmere) hyper-threaded processors, 96GB RAM and a pair of 500GB SAS drives. This was connected to a pair of Dell EqualLogic 10GbE SANs – one SAN having 16 SAS drives, and the other having 8 SSD drives.
Figure 1: The density tests we ran against the Dell blades had several phases:
· The first phase was to replicate the same test we ran on the HP hardware, to confirm the initial density results.
· The second phase was to introduce Citrix XenDesktop onto the single blade to understand single blade scale, with different storage architecture.
· The third phase was to re-test the Dell Reference Architecture (RA) using Hyper-V R2 and Citrix XenDesktop and see what the resultant difference in density was. We tested their 1000 user reference architecture portion of that RA.
Summary Results
I don’t want this blog post to labor beyond the top-line points, so I’ll save the details for the whitepaper that will come later, including full performance monitor traces. I’ll keep the summary results to the two core areas of interest – how Dynamic Memory affected density and whether response was affected. If you are at TechEd Europe, I will be presenting on VDI and will share quite a bit of detail on the results below. The session code is VIR305, and it is scheduled for 11 November at 9am.
Memory Results
With all tests based on installed memory of 96GB, we wanted to ensure there was sufficient spare capacity to allow for bursts in memory usage. We chose to stay around the Dell RA spare capacity of roughly 10GB, to keep the baseline measurement consistent with Hyper-V R2 RTM. That meant we needed to stay around an 87GB maximum allocation.
Previously in that reference architecture, Dell achieved 85 VMs, with each Windows 7 VM configured with 1GB RAM, the recommended minimum for Windows 7.
However, because we are now using Dynamic Memory, we can set the VM start-up memory to 512MB and allow Dynamic Memory to allocate more as necessary. This change is already documented on TechNet and will be supported at release.
By allowing Dynamic Memory to take control of memory allocation, we took the total load in the single-server test up to 120 VMs, a 40% increase in density. Each VM averaged around 700MB of RAM running the LoginVSI workload, and these results were consistently confirmed in the Dell blade testing as well. When we scaled this out on the complete Dell RA, we took a reference architecture that previously ran on 12 blades down to 8, with a corresponding, easily calculable drop in cost per user/VM.
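For anyone who wants to see the arithmetic behind those numbers, here's a quick back-of-the-envelope check in Python (a sketch only; real sizing also has to account for host reserve and per-VM overhead, which is why the measured figures differ slightly):

# Back-of-the-envelope check of the density numbers above (arithmetic only).
installed_gb = 96
spare_gb = 10                          # headroom kept for bursts, per the Dell RA
budget_gb = installed_gb - spare_gb    # ~86GB usable, close to the 87GB figure above

static_vm_gb = 1.0                     # fixed 1GB per Windows 7 VM on R2 RTM
dynamic_vm_gb = 0.7                    # ~700MB average observed with Dynamic Memory

print("static:", int(budget_gb // static_vm_gb), "VMs")    # ~86, in line with the 85 achieved
print("dynamic:", int(budget_gb // dynamic_vm_gb), "VMs")  # ~122, in line with the 120 achieved
print("measured gain: {:.0%}".format(120 / 85 - 1))        # ~41%, i.e. the reported ~40%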
System Response
There’s no point in scaling up significantly if the hypervisor and hardware can’t sustain the resultant I/O pressure added by 40% more VMs. On the HP DL380 response chart below, produced by LoginVSI, we comfortably achieved 120 VMs without any processor issues. This result is heavily dependent on the SAN infrastructure sustaining the required IOPS, and in this instance we never saw an excessive disk queue length that would indicate an inability to keep up with the load. In this test we also never hit VSImax, which is an indicator of maximum server capacity.
Figure 2: LoginVSI test chart –HP DL 380 G5 reference test
This test result was reproduced with the Dell M610 blades and improved with Citrix XenDesktop as we scaled out the solution using Provisioning Services.
Closing
Dynamic Memory can add significant density to all workloads. While today we have shown you how VDI benefits significantly from Dynamic Memory, shortly you will see case studies from us showing that, across various workloads, customers can expect 25% to 50% increases in density in production deployments. For further information on Dynamic Memory, Jeff Woolsey has already made some great comments on this, and on just how much better its architecture is in comparison to the competition.
Watch out for the whitepaper, which we will also announce via this blog in the near future.
Michael Kleef
Hello from Berlin. Microsoft TechEd Europe started today with Brad Anderson's keynote. Aside from the keynote, you can read the news release here.
There are several items you'll want to know about if you're interested in private cloud computing, virtualization and systems management. The Hyper-V Cloud program is the main item. Here are a few of its components.
Hyper-V Cloud Fast Track program. Oliver wrote about it at the Windows Server Division blog. This reference architecture, available from any of the 6 major OEMs, reduces the risk and time involved in private cloud creation and deployment. These OEM partners represent more than 80% of the worldwide server market, so we likely have you covered. The Microsoft components of the reference architecture include Windows Server 2008 R2 Hyper-V, System Center Operations Manager and System Center Virtual Machine Manager, and options include SC Configuration Manager, SC Data Protection Manager and SC Virtual Machine Manager Self-Service Portal 2.0. Adam Fazio, an architect within Microsoft Consulting Services, will have a blog post tomorrow detailing the reference architecture. To whet your appetite, here's some scoop:
Hyper-V Cloud Deployment Guides. For people who want to build their own private clouds on Microsoft technology. These guides are based on more than 1,000 Microsoft Consulting Services engagements over the past couple of years, and they are for those of you who want the highest levels of flexibility, control and customization. See more here.
Hyper-V Cloud Service Provider program: This program is the next version of the Dynamic Datacenter Alliance, which we introduced to service providers a couple of years ago. More than 70 service providers covering more than 30 countries (and nearly 100 more are coming) offer infrastructure as a fully hosted service built on Windows Server Hyper-V, System Center, and the Dynamic Datacenter toolkit. The toolkit provides on-demand VM provisioning, a sample portal and prescriptive guidance. Some service providers already on board include Terremark, Hosting.com, Hostway, Outsourcery and Strato.
Some of you might be wondering ... how is this different from vBlock? I'd point to three reasons:
· There are proven private cloud solutions from 6 different system vendors, using an open framework.
· Customers can get to market quickly – we’ve heard of the laborious deployments with v1 of vBlock.
· Customers will use consistent and familiar technologies, and can manage IT services and applications with a common management suite.
Beyond these three reasons, we also deliver identity, database tools, application development tools, and management tools that span across private cloud and public cloud, to include Windows Azure. You have the control to build, run, migrate and extend apps to Windows Azure.
For you channel partners out there, if you don't know about the Microsoft and Citrix V-Alliance, speak with either your Microsoft or Citrix contacts. The program has rolled out in Europe, is rolling out in the U.S., and will kick off soon in Asia. The website is here.
Here's a brief video that I recorded with Citrix's Klaus Oestermann today from VMworld Europe 2010. If you're a channel partner in Copenhagen and want to learn more, stop by Microsoft booth #69, Citrix booth #80, or tonight at the Microsoft Tweetup here. [Note: O’Learys Sports Bar is in Copenhagen’s Central Station. On S Train route maps, look for København H, which is Central Station. Once you’ve arrived at Central Station, O’Learys is in the far corner on the right when you enter the main hall from the train platforms.]