I have expanded the content below into 5 posts for ComputerWorld Canada’s Blogging Idol competition; if you’re interested in this content, you will find a more thorough analysis there: part 1 introduction and mainframes, part 2 servers, part 3 desktop (where I got some good comments), part 4 embedded, and last but not least part 5 supercomputing or HPC. I have spent some extra energy on this subject, as I have not seen anything like it in my searches of the internet.
Virtualization on commodity computers is reaching market maturity, even though it’s an old concept borrowed from the days when mainframes ruled the computing landscape. Today we have desktop virtualization, application virtualization, network virtualization (think VLANs), storage virtualization and so on. What most people are talking about is server virtualization: carving up one physical computer into a number of “virtual machines” (VMs), each providing what appears to be the resources of a complete computer to a user or an application, repeated as many times as the computer can handle. Four VMs per computational unit (CPU, or central processing unit) is a common ratio, but that ratio depends on a lot of things. Today, processor chips have multiple CPUs (called multi-core), and business-class computers often have multiple processor chips, so you could easily run 10-20 VMs without overloading a computer that costs less than a few thousand dollars today. This is what has made virtualization on COTS (Commercial Off The Shelf) computers such a strong value proposition in the last few years. Some of the key value propositions are:
- Server Consolidation: Most enterprises used to deploy one business application per compute server, because in many cases the cost of the computer was very small relative to the cost of the software and the business value it brought. This led to computers in corporate computer rooms multiplying like rabbits. Studies showed that most of these computers were used at less than 20% of their capacity, and the operating costs of managing, housing and powering these systems far outweighed the capital costs of purchasing the machines; so if you owned more than a handful, there was an opportunity for serious savings. The Green IT movement is all over this, as 8 machines running at 20% consume a lot more electricity than 2 machines running at 80% (see the sketch after this list for the rough arithmetic).
- Isolation: Software inevitably crashes, whether through internal bugs or external viruses, and an application running all by itself in a VM should not be able to crash the host computer or the other VMs running on it.
- Protection: Since a VM appears to the host computer as a simple file (encapsulating the application, operating system, data, configurations etc.), backup is easy. A study by VMware identified protection as a key value proposition for small and medium businesses. Modern file systems and storage technologies can also largely automate this process with little or no impact on operational performance. Thus when disaster strikes (someone spills their coffee on the hard drive, or your office gets hit by a tornado), that image just needs to be restarted somewhere else; the business process supported by that VM could be back up and running in as little as a few seconds, depending on how the business continuity plan was designed.
- Portability: An application is not tied to a specific computer. When the computer becomes obsolete, it can be replaced with one that is more powerful or energy efficient.
- Heterogeneity: Some applications run only on certain operating systems, or are only supported on certain versions, so on a single computer you can run applications on Windows XP, Windows Vista, Red Hat Linux, etc.
- Rapid Deployment: When the business needs to deploy another IT service, a VM can be copied from a library and deployed in minutes. Gone are the days when you had to wait days or even weeks for a server to be installed and put into service.
- Security: VMs can be standardized, and security updates can be more easily managed and deployed. Sensitive information can also be better managed.
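To put rough numbers on the 4-VMs-per-core rule of thumb and the consolidation example above, here is a minimal Python sketch. The host specs, wattage figures and linear power model are my own illustrative assumptions, not measured data, so treat the output as back-of-the-envelope only:

```python
# Back-of-the-envelope consolidation arithmetic. The 4 VMs-per-core ratio is the
# rule of thumb mentioned above; the host specs and wattages are illustrative
# assumptions, and real power curves are not linear.

VMS_PER_CORE = 4

def vm_capacity(sockets, cores_per_socket, vms_per_core=VMS_PER_CORE):
    """Rough estimate of how many VMs a host could carry."""
    return sockets * cores_per_socket * vms_per_core

def server_watts(utilization, idle_watts=150, peak_watts=300):
    """Very rough linear model of power draw versus utilization."""
    return idle_watts + (peak_watts - idle_watts) * utilization

# A two-socket, dual-core business-class server lands in the 10-20 VM range.
print(vm_capacity(sockets=2, cores_per_socket=2))        # -> 16

# Eight lightly loaded servers versus two consolidated, busy ones.
before = 8 * server_watts(0.20)
after = 2 * server_watts(0.80)
print(f"before: {before:.0f} W, after: {after:.0f} W")   # -> 1440 W vs 540 W
```

The interesting part is that an idle server still draws a large fraction of its peak power, which is why a few busy machines beat many lightly loaded ones under almost any reasonable assumptions.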
Virtualization technology has progressed along the technology adoption curve differently for different classes of computing. VMs on mainframes are well along the “late majority” side of the market and that’s ancient history. VMs on COTS are now well along the “early majority” side of the market driven there primarily by the first three value propositions noted above.
VMs in personal or desktop computing appear to be earlier on the “early majority” curve than COTS servers; I have heard that desktop virtualization is widely deployed to contractors and employees at the Government of Canada, and VMware Fusion is popular among Apple Mac zealots who need to run Windows applications. For the GoC, protection, rapid deployment and security (values 3, 6 and 7 above) make sense; and heterogeneity (value 5) is the value proposition for Mac users – having two machines in one.
There are two classes of computing for which virtualization is relatively new: embedded computing and high performance computing (HPC). For now, let’s classify mobile computing (such as smart phones) as a special case of embedded computing.
It seems to me that embedded computing would be in the “early adopter” phase. There is less market maturity here: embedded computing has only recently adopted multi-core processors, it employs a much wider variety of processor types, CPUs are increasingly embedded in custom chips, and applications are customized and tightly coupled to a specific target CPU architecture. With multi-core chips now available, consolidating embedded applications from multiple embedded processors onto a single multi-core chip running VMs is a natural first step, similar to the server consolidation value proposition that drove adoption of VMs on COTS.
Last but not least is my favorite computing class, HPC. It’s still in the “innovators” phase, but should see a quick migration to the early adopter phase for commercial applications. This is due to the value propositions to independent software vendors (ISVs) outlined in this blog post. Improving deployment processes and reducing QA costs will help ISVs deploy better quality HPC applications at lower cost, but there will be many challenges to face. If non-HPC applications have porting issues under virtualization, as discovered by the IT department at Sick Kids hospital in Toronto, HPC applications will have those issues to a greater degree. Most applications being deployed on VMs today are serial (having a single thread of execution), whereas HPC applications are almost entirely parallel, running many threads of execution simultaneously. Whether parallelism is achieved through shared memory constructs or message passing libraries, this places greater demands on the VMs and on the scheduling software that must optimally allocate the computational workload.
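To make the serial-versus-parallel distinction concrete, here is a minimal Python sketch of the same workload run first as a single thread of execution and then split across worker processes. The workload and worker count are illustrative assumptions, and a real HPC code would typically use MPI or OpenMP rather than this toy; the point is only that a parallel job presents the hypervisor with several simultaneously busy virtual CPUs to place on physical cores, rather than one:

```python
# Serial versus parallel execution of the same (purely illustrative) workload.
# Inside a VM, each parallel worker occupies a virtual CPU that the hypervisor
# scheduler must in turn map onto a physical core.
from multiprocessing import Pool

def heavy_step(n):
    """Stand-in for one unit of an HPC workload."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [200_000] * 8

    serial = [heavy_step(n) for n in work]     # a single thread of execution

    with Pool(processes=4) as pool:            # many workers running at once
        parallel = pool.map(heavy_step, work)

    assert serial == parallel
```

A serial job like the first loop can be paused or migrated fairly painlessly; the second version is only as fast as the slowest worker, which is exactly why noisy scheduling of virtual CPUs hurts parallel applications more.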
I suspect it will take longer to advance along the adoption curve for “grand challenge” applications at large research facilities, as the performance and scalability requirements will be very challenging for virtualization to meet, and this is a small market segment.
For the newer players in the virtualization technology space, these should be interesting times, as this proven technology is re-applied to new challenges in different computing classes with unique requirements.