
Every once in a while, I get asked why an organization should virtualize voice. On the surface it makes no sense: virtualization adds another layer of technology to an already complex and mission-critical solution, it may not result in any cost savings, and it requires a higher degree of technical know-how. So it's a reasonable question.

Now I won’t suggest that every organization should virtualize, but there are legitimate reasons to do so.

There are many situations in which a basic appliance on dedicated hardware makes perfect sense. Appliances are foolproof; they inherently protect an organization from over-allocation of resources. Every UC solution has numerous interrelated constraints, such as total users, agents, and trunks, and an appliance prevents over-allocation with built-in limitations. Hardware support is also simplified: when things break, just call the dealer and ask for a new thingamajiggy. There are no questions about operating systems, sizes or connector types.

The appliance is a viable and reasonable option. The question is when and why it may make sense to virtualize.

Throughout the history of the modern office, voice was a dedicated and proprietary application. The "servers" were dedicated, proprietary boxes connecting users and networks. On the user side, dedicated wiring ran to mostly proprietary terminals (phones). On the network side, specialized carriers using proprietary protocols and services connected to those servers with special connectors, also over dedicated wires.

Since the early 2000s that model has been (slowly) changing. Most voice traffic is now transmitted over shared IP networks. The proprietary servers are being replaced with industry standard boxes. This convergence of technologies enables features like softphones, unified messaging, IM to voice, and click-to-dial. This is not going to be undone; voice and data will co-exist on networks, servers and endpoints for the foreseeable future.

Years ago, the data center had separate servers for everything. The result was "server sprawl," and big efforts were made to consolidate them. Initially, this fell onto the vendors. Mitel, for example, consolidated several servers into what became MiCollab.

Server sprawl was reduced, but customers realized that they wanted to be in control of server consolidation. With most applications running on industry standard servers, customers wanted to determine which applications ran on which systems. Virtualization made this possible by turning physical servers into pools of server resources, giving customers unprecedented control over which applications consumed which resources.

The problem with dedicated servers is that they are not very efficient. When they get busy, they run short of capacity, and when they are underutilized, the spare capacity goes to waste. Virtualization allows an organization to dynamically reallocate resources as needed. Hardware resources can go to a real-time application during business hours and then be shifted to batch-mode operations at night.
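To make that concrete, here is a minimal sketch of what such a shift can look like using pyVmomi, the VMware vSphere Python SDK. The vCenter address, credentials, VM names, and reservation figures are all illustrative assumptions, not part of any Mitel or VMware product:

```python
# Sketch only: shift a CPU reservation from a real-time voice VM to a batch VM.
# Hostnames, credentials, VM names and MHz values are made-up placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Return the first virtual machine in the inventory with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(vm for vm in view.view if vm.name == name)

def set_cpu_reservation(vm, mhz):
    """Reserve a fixed amount of CPU (in MHz) for the VM."""
    spec = vim.vm.ConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(reservation=mhz))
    vm.ReconfigVM_Task(spec)  # returns a task; a real script would wait on it

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Business hours: guarantee CPU to the voice VM, release it from the batch VM.
set_cpu_reservation(find_vm(content, "voice-app"), 4000)
set_cpu_reservation(find_vm(content, "nightly-batch"), 0)

Disconnect(si)
```

In practice a scheduler or vSphere DRS policy would handle this automatically; the point is simply that capacity follows the workload rather than the workload being stuck on a box.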

Since hardware was just a resource, applications could be moved among configurations, systems and locations. Virtualization was less about saving money on fewer machines and more about operational efficiency. For example, when applications outgrew their resources, those resources could be upgraded or replaced instead of rebuilding the servers.

Initially, real-time applications weren't invited to the virtualization party. Real-time applications need predictable hardware resources, and virtualized performance was inconsistent at best. Good for email, bad for voice. Dedicated servers remained the only option.

IT organizations began going all-in on virtualization. Voice and UC were the big holdouts due to their real-time requirements. However, that wall was eventually shattered: Mitel and VMware jointly announced the first support for real-time applications on virtual servers in 2009. The breakthrough was made possible by improved chipsets, improved virtualization technology, and better resource management within the application itself.

The tables turned. For virtual-savvy organizations, it was the appliance that became more expensive to support. Appliances represented dedicated machines that did not leverage virtualized infrastructure. They required separate management, separate backups, separate fail-over plans and separate training.

A virtualized data center is not an overnight sensation. It takes time to develop the skills, resources and operational approach. For organizations that have made the transition, virtualization is strongly preferred. But not all virtualized applications play fair: some require dedicated virtual servers or very specific configurations that restrict how loads can be moved, and some don't cooperate with system management tools. In other words, not all virtual applications behave as well as Mitel's solutions in a virtual environment.

Organizations that aren’t virtual-savvy should stick with an appliance or potentially a dedicated industry standard server. As virtualization becomes core to operations, virtualizing real-time applications like voice and UC is inevitable.
 
