Virtualization Blog - Alex's Blog

December 14, 2006
  First VMware ignores you, then VMware laughs at you, then VMware fights you, then you win

One of our customers forwarded me a FUD e-mail that VMware has been sending to its customers comparing Virtual Iron 3.1 with VMware. This e-mail came to my attention right after our version 3.1 announcement. It is a fascinating piece of e-mail.


I am not going to bother you with the entire e-mail. It is full of errors...an obvious rush job. However, I will share some of the most egregious parts.


VMware: "Virtual Iron only runs on the latest servers equipped with virtualization hardware assist features. That requirement forces users to invest in the newest, most expensive servers, even for test and development pilot deployments."


Virtual Iron: We are fully utilizing the latest advances made by the chip vendors (Moore's Law marches on). Hardware enhancements to Intel® and AMD® processors improve software-based virtualization solutions. These chips go into servers from Dell, HP and IBM. I have no idea where VMware got the idea that these are the "most expensive servers" -- I just checked prices at Dell.com and, for the price of just VMware's ESX Server ($5750 per box), you can buy two Dell servers with VT and Virtual Iron 3.1 Enterprise Edition virtualization software and still have money left over to go to In-n-Out Burger.


VMware: "Virtual Iron supports only a small set of guest operating systems - RHEL 4 Update 2 (32 or 64-bit), SLES 9 Service Pack 3 (32 or 64-bit), Windows Server 2003 (32-bit) and Windows XP (32-bit). In contrast, VMware Infrastructure supports over 60 different versions of Windows, Linux and NetWare operating systems."


Virtual Iron: That's true and, thankfully, you are probably using one of the operating systems we support. For emerging businesses, the 90/10 rule applies - we focus on the operating systems used by the majority of servers. That way, we don't have to burden our cost structure with support for obscure OSs.


VMware: "Virtual Iron is missing a distributed clustered file system like VMware's VMFS. This puts every virtual machine at risk of disk corruption when placed in shared storage."


Virtual Iron: Thanks to our system architecture, we do not need a clustered file system to migrate virtual machines. When we were designing the LiveMigration feature, we considered adding a clustered file system to support shared storage (a prerequisite for migration). We know a thing or two about clustering - some of our technical folks invented the early clustering systems. During development, we evaluated VMware's VMFS and came to the conclusion that a clustered file system is the last thing we want. Why? It burdens you, the user, with heavy-duty maintenance and administration. So, instead, we built a clever mechanism that takes away that burden. With Virtual Iron, LiveMigration works on regular file systems - without the need for clustering and without causing corruption!
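The post doesn't describe the mechanism, so as a purely generic illustration (not Virtual Iron's implementation - the function names and lock-file layout here are hypothetical): one classic way to keep two hosts from corrupting a disk image on shared storage, without a clustered file system, is advisory locking on the image itself.

```python
import fcntl


def acquire_disk_lock(image_path):
    """Take an exclusive advisory lock guarding a VM disk image.

    Only one holder (e.g. one host's hypervisor) can own the lock
    at a time, so two hosts can never write the same image on
    shared storage. Returns the open lock file on success, or
    None if another holder already has it.
    """
    lock = open(image_path + ".lock", "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return lock
    except BlockingIOError:
        lock.close()
        return None


def release_disk_lock(lock):
    """Drop the advisory lock so another host may take over."""
    fcntl.flock(lock, fcntl.LOCK_UN)
    lock.close()
```

In a migration, the source host would release the lock only after the destination confirms it has taken over the virtual machine's state.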


Of course, I could go on and on about the virtues of our product. I'm very proud of it. I could also go on and on about the errors in the VMware e-mail. I won't. You can compare us to VMware for yourself. Our Single Server Edition is absolutely free, so you have nothing to lose.


    Posted By: Alex V @ 12/14/2006 02:54 PM     Alex's Blog     Comments (0)  

December 11, 2006
  Virtual Iron 3.1 Released: Windows Support, Free Version, More

We announced today that Virtual Iron 3.1 has been released. Amid all of the excitement that surrounds any product release, three items stand out:


- Virtual Iron 3.1 provides full support for Windows XP and Windows Server 2003. We're not just Linux anymore.


- Virtual Iron 3.1 Enterprise Edition is just $499 per socket -- that's less than 20% of the cost of VMware. Virtualization software that costs less than a server? That's the way it should be!


- Two free versions of Virtual Iron 3.1 are available to make it as easy as possible for you to evaluate and compare to VMware. Click here to learn more about these free versions.


We're extremely proud of this release and hope that you'll take the time to try Virtual Iron 3.1.



Edited: 12/11/2006 at 05:13 PM by EvanK

    Posted By: Alex V @ 12/11/2006 05:11 PM     Alex's Blog     Comments (0)  

July 7, 2006
  Paravirtualization is a Dead-End Approach
Paravirtualization (as an approach to run virtual machines) has been getting a lot of attention lately. The primary hypothesis is that paravirtualization makes the job of virtualization easier and more efficient than pure hardware virtualization.

There was a time and place where paravirtualization made sense, but that time has now passed. Let me explain why it is no longer a viable approach.

Paravirtualization (an idea originally introduced by the Denali project) proposes modifying a guest OS so that virtualization-sensitive operations are redirected directly to the virtual machine monitor, instead of trapping to it as is done in pure hardware virtualization. In pure hardware virtualization, the hardware is completely virtualized and unmodified guest OSs are supported.
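The distinction can be shown with a toy model (purely conceptual - a real monitor is nothing like this): under pure hardware virtualization, the unmodified guest executes a privileged operation, the hardware faults, and the monitor services the trap; under paravirtualization, the modified guest calls the monitor directly via a "hypercall".

```python
class Trap(Exception):
    """Models the hardware fault raised when a guest executes a
    privileged instruction under pure hardware virtualization."""
    def __init__(self, op):
        self.op = op


class Monitor:
    """Toy virtual machine monitor that services privileged ops."""
    def service(self, op):
        return f"monitor handled {op}"


def unmodified_guest(monitor):
    # Pure hardware virtualization: the guest is unaware it is
    # virtualized; its privileged instruction faults, and the
    # monitor intercepts the resulting trap.
    try:
        raise Trap("write_cr3")   # privileged op faults
    except Trap as t:
        return monitor.service(t.op)


def paravirtualized_guest(monitor):
    # Paravirtualization: the guest source is modified to invoke
    # the monitor directly (a "hypercall"), skipping the trap.
    return monitor.service("write_cr3")
```

Both paths reach the same handler; paravirtualization merely trades a hardware trap for a direct call, at the cost of modifying and maintaining the guest OS.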

Paravirtualization requires substantial engineering effort to modify and maintain an operating system. However, these heroic efforts are inevitably losing the battle against Moore's Law and the hardware advances being made in the x86 space. By the time the first product with paravirtualization appears on the market, more than 80% of the shipping x86 server processors from Intel and AMD will have hardware-based virtualization acceleration integrated into the chips (Intel VT and AMD-V, or "Rev F"). This hardware-based acceleration is designed to optimize pure virtualization performance, primarily the virtualization of the CPU, and it renders OS paravirtualization efforts completely unnecessary and behind the technology curve. Perhaps more important, hardware-based virtualization doesn't require users to upgrade their operating systems, or their middleware and application stacks - a very complex and resource-intensive undertaking.
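On Linux, you can check whether a processor advertises these extensions by looking for the `vmx` (Intel VT) or `svm` (AMD-V) feature flags in /proc/cpuinfo. A minimal sketch (Linux-specific; note the feature can still be disabled in the BIOS even when the silicon supports it):

```python
def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT ('vmx') or
    AMD-V ('svm') in its feature flags, False otherwise."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
    except OSError:
        pass
    return False
```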

The issue with paravirtualization is that once you break the strict hardware interface between the hypervisor and the guest OS, one might as well call the "hypervisor" a microkernel, call the guest OS a process, and move on. And we all know how "well" an OS works with a microkernel. Linus Torvalds posted some comments on the recently reignited debate over the merits of microkernels, stating, "It's ludicrous how microkernel proponents claim that their system is 'simpler' than a traditional kernel. It's not. It's much, much more complicated, exactly because of the barriers that it has raised between data structures." I concur with Linus.

Virtual Iron has decided against paravirtualization in favor of "native virtualization." With the hardware advances coming out of Intel and AMD, we see native virtualization matching physical hardware performance without any of the complexity and engineering effort involved in paravirtualizing an OS. Our discussions with a broad range of users show that they simply do not want to roll out modified OSs unless the trade-off is heavily in their favor. That Faustian trade-off is no longer necessary.

The current batch of chips offering hardware-based virtualization acceleration from Intel and AMD primarily helps with virtualization of the CPU and does very little for the I/O subsystem. To improve the I/O performance of unmodified guest OSs, we are developing accelerated drivers. The early performance numbers are encouraging: some are many times faster than emulated I/O and close to native hardware performance.
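The usual shape of accelerated (paravirtualized-driver) I/O is a shared ring of request descriptors that the guest driver and the hypervisor backend batch work through - much like the later virtio design - so the guest takes one exit per batch instead of trapping on every emulated device-register access. A toy sketch of the idea (not Virtual Iron's actual driver interface):

```python
from collections import deque


class IORing:
    """Toy descriptor ring shared by a guest driver and a
    hypervisor backend. The guest posts whole requests; the
    backend drains them in one batch per notification."""

    def __init__(self):
        self.requests = deque()
        self.responses = deque()

    # --- guest side ------------------------------------------------
    def post(self, op, payload=None):
        """Guest driver queues a request without trapping."""
        self.requests.append((op, payload))

    def collect(self):
        """Guest driver picks up all completed responses."""
        out = []
        while self.responses:
            out.append(self.responses.popleft())
        return out

    # --- hypervisor/backend side -----------------------------------
    def drain(self, device):
        """One 'notification': service every posted request at once.
        Returns the number of requests handled in this batch."""
        handled = 0
        while self.requests:
            op, payload = self.requests.popleft()
            self.responses.append(device(op, payload))
            handled += 1
        return handled


def disk_backend(op, payload):
    # Stand-in for the real device model on the hypervisor side.
    return (op, "ok")
```

With emulated I/O, each of those four writes would cost a separate trap; here the guest posts them all and the backend handles the batch in a single exit.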

Just to give people a flavor of the performance numbers we are getting, below are some preliminary results on Intel Woodcrest (51xx series) with a Gigabit network, SAN storage, and each VM configured with 1 CPU. These numbers are very early: disk performance is already very good, and we are just beginning to tune.

Bonnie-SAN (bigger is better)       Native     VI-accel
Write, KB/sec                       52,106       49,500
Read, KB/sec                        59,392       57,186

netperf (bigger is better)
TCP req/resp (t/sec)                 6,831        5,648

SPECjbb2000 (bigger is better)
JRockit JVM throughput              43,061       40,364
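To put the table in perspective, the accelerated configuration's throughput relative to native works out to roughly 95-96% for disk, about 83% for the netperf request/response test, and about 94% for SPECjbb2000:

```python
# (native, accelerated) pairs taken from the benchmark table above.
results = {
    "bonnie_write_kb_s": (52106, 49500),
    "bonnie_read_kb_s":  (59392, 57186),
    "netperf_tps":       (6831, 5648),
    "specjbb_thruput":   (43061, 40364),
}

for name, (native, accel) in results.items():
    print(f"{name}: {accel / native:.1%} of native")
```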

Once we complete the tuning, we expect guest OSs to compare very favorably to running on native hardware, but without requiring users to upgrade their OSs. This should open users up to applying virtualization to a whole new set of applications and workloads.

I welcome your comments.

Edited: 07/07/2006 at 03:20 PM by Virtualization Blog Moderator

    Posted By: Tim Walsh @ 07/07/2006 03:17 PM     Alex's Blog     Comments (4)  

June 15, 2006
  Virtualization is not an OS feature
Integrating virtualization into the operating system is a bad idea. You're probably wondering why bundling yet another feature into an OS wouldn't be a good thing.

There are many reasons why bolting a hypervisor onto an OS is a bad idea.

First, it requires you to upgrade to the latest operating system version -- such as the upcoming Linux releases (Red Hat 5, SLES 10) -- to get virtualization capabilities. How long did your last operating system upgrade take? It also means you have yet another operating system to manage per server. Who is going to pay for the extra work, and for redirecting scarce system administrator attention away from strategic activities that advance the business and toward patch management and other administrative chores for these new operating system instances?

The operating system vendors are taking a very parochial view of their support matrix. As in: "We'll support our latest operating system version, and maybe one version prior." What if you want to run Windows and Linux side by side? Silly, you say? Not for disaster recovery scenarios... there you'll need every ounce of hardware, and then some, to keep your systems running.

Operating system vendors like to add features and force their customers to upgrade to the latest versions. This is exactly the wrong approach for virtualization. Virtualization solutions should put only the strictly necessary capabilities in the virtualization stack, and put anything not related to running operating systems efficiently in a management system external to the OS. This allows capabilities such as workload management and business continuity to work across operating systems and physical resources.

It's no surprise that the OS vendors are jumping into the virtualization space. Before jumping in with them, ask how much effort it's going to take, and whether it meets the reasons you're deploying virtualization. We commonly see companies start with server consolidation, then rapidly move to deploying virtual servers from "golden images", followed by business continuity and ultimately policy-based automation and dynamic workload management. Outside of trying to consolidate new OS installations (an oxymoron -- how can you possibly consolidate new OS installations?), it's not clear what value the OS vendors can provide to users' virtualization goals.

    Posted By: Alex Vasilevsky @ 06/15/2006 07:51 PM     Alex's Blog     Comments (0)  

April 28, 2006
  The New Economics of Virtualization
Alex Vasilevsky
Founder and CTO
Virtual Iron Software

Welcome to my first blog. Obviously, virtualization is a white-hot technology space, but there is a lot of confusion about it and still too much unfulfilled promise. My goal here is to paint a bigger picture of the possibilities of server virtualization and how it will change IT and the server industry as we know them today.

Because this is the first entry, I'm going to start at a high level. My hope, though, is that we can quickly dig deeper into topics such as the future of the server industry and the impact that consolidation and virtualization will have on it. I'd also like to discuss what's holding users back from getting more value out of their existing virtualization solutions, and how open source virtualization and innovation will usher in momentous changes within the data center.

I'd also like to hear your ideas about the same.

But first, what is virtualization?

In the context of this blog, when I talk about virtualization I am referring to server virtualization.

In general, server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing sharing and utilization.

Usually, server virtualization software runs between the operating system and the hardware; this software creates "virtual servers" that appear to an operating system as hardware but are in actuality a software environment. This way, multiple copies of an operating system each think they are running on their own server, when in reality they are all sharing one physical computer.

Server virtualization is not new; during the 1960s, IBM pioneered the use of virtualization to allow partitioning of its mainframes. However, the technology was relegated to the dusty heap of computing history as cheaper servers and PCs became widespread.

And then the resurgence started, and virtualization is now on the cusp of becoming a mainstream technology. The question is: why the sudden resurgence of this old technology?

The answer is quite simple - too many underutilized servers, taking up too much space, consuming too much power, and in the end costing too much money. In addition, this server sprawl has become a nightmare for over-worked, under-resourced system admins.

To save money, companies are consolidating their data centers and standardizing the applications they run their businesses on. The number one project in most IT shops is data center consolidation. Nicholas Carr asks in his blog whether the server industry is doomed (an interesting topic for another day) and provides some great data. For example, he states:

"The chemicals giant Bayer, for instance, has been consolidating its IT assets worldwide. In the U.S. alone, it slashed its number of data centers from 42 to 2 in 2002, in the process cutting its server count by more than half, from 1,335 to 615."
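As a quick sanity check on those figures: 1,335 servers down to 615 is indeed a cut of more than half - about 54%, or a consolidation factor of roughly 2.2:1 - and 42 data centers down to 2 is a 95% reduction:

```python
# Figures quoted from Nicholas Carr's Bayer example above.
servers_before, servers_after = 1335, 615
dcs_before, dcs_after = 42, 2

server_cut = 1 - servers_after / servers_before        # ~54%, "more than half"
consolidation_factor = servers_before / servers_after  # ~2.2 physical -> 1
dc_cut = 1 - dcs_after / dcs_before                    # ~95%

print(f"servers cut: {server_cut:.1%}")
print(f"consolidation factor: {consolidation_factor:.2f}:1")
print(f"data centers cut: {dc_cut:.1%}")
```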

The technology that helps companies consolidate their data centers is server partitioning - the ability to carve up one physical server into many virtual servers. This allows data center managers to place multiple virtual servers, each with its own unique operating system instance, on a single physical server. By doing this, they can consolidate their physical infrastructure, preserve their investment in existing operating systems and applications, and get more from their hardware investments. The potential benefits and ROI of virtualization are numerous. It can help control hardware and operational costs, and it promises to deliver new levels of agility, manageability, and utilization.

Most companies are beginning to buy into its potential and making significant investments, but are many getting their money's worth?  Is the pace of innovation keeping up with the opportunity to solve the critical business problems we're dealing with in the data center? 

I say no way. We're not even close to scratching the surface of what virtualization can truly do. Existing solutions are inflexible and expensive - sometimes costing even more than a new server. On top of it all, they don't perform well.

Despite all the hype, the IT executives we speak with continue to be frustrated with the lack of solution options and with the slow pace of innovation in a software market that is controlled by a small number of vendors.  The lack of choice gives these vendors significant pricing and account control.  Compounding matters, once users do select one of the few commercial alternatives, they are locked into that proprietary solution.  Some solutions also require modifications to the operating system - not something to ever be taken lightly.

Fortunately, emerging alternatives and increased vendor competition are offering users new choices while driving the cost of the software down.

For example, Xen, an oft-hyped open source project and an emerging low-level software layer that allows multiple OSs to share the same physical hardware, has the potential to give users better performance at much more attractive price points. Xen has a broad ecosystem (Virtual Iron is one of the key contributors) that includes all the major processor manufacturers, server companies, and operating system providers. These companies are working together to deliver virtualization functionality based on broadly adopted industry standards. The community has also formed an extended testing team, further driving quality improvements. The entire ecosystem benefits from having so much talent contribute to the development of Xen-based solutions and run the software through its paces. Open source technologies like Xen have a history of providing improved functionality, better performance, and lower total cost of ownership than proprietary technologies. Unlike proprietary technologies, Xen is free, and as a result it will rapidly make its way into commercial offerings and end-user solutions. Costs will come down, making it cost-effective to deploy virtualization on every server throughout an enterprise's IT infrastructure.

Indeed, history also shows that open source offerings, once generally accepted, tend to catch up fast to their proprietary counterparts. Although the current proprietary offerings have a few years' head start on Xen, we expect that gap to close quickly: hypervisor support for chip-assisted virtualization quickly negated several years' worth of VMware's development effort. In addition, the Xen project and ecosystem have clearly reached critical mass, and the Xen hypervisor is emerging as the de facto standard base for server virtualization. The tidal wave of innovation has begun, and it opens up a whole new set of alternatives and economic opportunities for users.

Edited: 05/01/2006 at 07:17 AM by alex

    Posted By: Alex Vasilevsky @ 04/28/2006 04:07 PM     Alex's Blog     Comments (1)  
