The evolution of cloud computing has enabled enterprises to reach new heights of success. With easy scalability and cost-efficient solutions, the cloud has proven to be a boon for IT startups. But as data center and computing architectures evolve, increasing capacity and functionality brings challenges with it: storage requirements, power consumption, and management costs are the primary ones. To address these challenges, virtualization is currently the leading choice for IT administrators building private or public cloud service platforms.
We will talk about the past, present, and future of input/output virtualization (IOV) in detail. Virtualization technology enables one physical connection on a server to appear as multiple virtual I/O interfaces.
A few years back, companies were struggling to meet the growing demand for resources. With the popularity of the internet and the introduction of cloud services, companies wanted increased productivity in their business activities. To meet this demand, they turned to virtualization, which increased the efficiency of their servers and networking equipment by creating virtual machines (VMs). A virtual machine has the capability to:
- Run multiple operating systems on a single hardware component
- Utilize the resources in an effective manner
- Provide a single point of maintenance
All these features offered by virtualization ultimately led to increased hardware efficiency at a lower cost.
But within an enterprise, each server requires access to a local network, a storage area network (SAN), and direct-attached storage (DAS). An internal system bus provides access to all of these resources. Virtualization technology enabled an enterprise to run multiple virtual server instances on a single piece of hardware, and the increased server efficiency put a heavy load on its input/output connections.
A high-speed Peripheral Component Interconnect Express (PCIe) bus connects most of the peripheral components together. In a multi-server environment, these I/O channels sometimes reach their peak bandwidth. With many virtualized servers running, the I/O pipes are busy, but they rarely all run at full bandwidth at the same time. There was a need for a better way to handle I/O connections: something that could increase the efficiency of the system bus so that it runs near full capacity most of the time.
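The statistical-multiplexing argument above can be made concrete with a toy calculation. The VM counts and bandwidth figures below are illustrative assumptions, not measurements:

```python
# Toy illustration of why a shared I/O link can replace many
# dedicated adapters: peaks rarely coincide, so provisioning for
# every peak wastes capacity. All numbers are illustrative.

num_vms = 10
peak_gbps_per_vm = 10.0      # what each VM's dedicated NIC needs at peak
average_gbps_per_vm = 2.0    # typical utilization is far lower

dedicated_capacity = num_vms * peak_gbps_per_vm   # provision for every peak
typical_demand = num_vms * average_gbps_per_vm    # what is actually used

shared_link_gbps = 40.0      # one consolidated, virtualized link

print(f"Dedicated adapters provisioned: {dedicated_capacity:.0f} Gb/s")
print(f"Typical aggregate demand:       {typical_demand:.0f} Gb/s")
print(f"Shared link capacity:           {shared_link_gbps:.0f} Gb/s")
# A 40 Gb/s shared pipe covers the typical 20 Gb/s demand with headroom,
# instead of 100 Gb/s of mostly idle dedicated I/O hardware.
```

The exact figures do not matter; the point is that aggregate average demand, not the sum of peaks, is what a consolidated link has to satisfy most of the time.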
What if we implemented virtualization technology on the input/output connections of a server? What if adapters on the PCIe bus could be virtualized and shared across multiple servers? What if we could replace a network interface card (NIC) or a host bus adapter (HBA) with a virtual one whose connection capacity we could set to 4 Gb/s, 8 Gb/s, or whatever we need?
This is the idea of I/O virtualization. It consolidates I/O into one single connection, making a physical adapter appear as multiple virtual NICs and virtual HBAs. Its goal is to minimize the performance bottlenecks caused by I/O connections.
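As a minimal sketch of that idea (a toy model under assumed names and numbers, not any vendor's actual API), a physical adapter's bandwidth can be carved into independently sized virtual interfaces:

```python
# Toy model of I/O virtualization: one physical adapter presented as
# several virtual NICs/HBAs. Class names and figures are illustrative.

class PhysicalAdapter:
    def __init__(self, capacity_gbps: float):
        self.capacity_gbps = capacity_gbps
        self.virtual_ifaces = []

    def allocated_gbps(self) -> float:
        return sum(v["gbps"] for v in self.virtual_ifaces)

    def create_virtual(self, name: str, gbps: float, kind: str = "vNIC"):
        # Refuse to promise more bandwidth than the physical link has.
        if self.allocated_gbps() + gbps > self.capacity_gbps:
            raise ValueError("physical link oversubscribed")
        iface = {"name": name, "gbps": gbps, "kind": kind}
        self.virtual_ifaces.append(iface)
        return iface

# One 40 Gb/s physical connection shared as several virtual adapters.
pa = PhysicalAdapter(capacity_gbps=40.0)
pa.create_virtual("vnic0", gbps=8.0)                 # virtual NIC for LAN traffic
pa.create_virtual("vhba0", gbps=16.0, kind="vHBA")   # virtual HBA for SAN traffic
print(f"{len(pa.virtual_ifaces)} virtual adapters, "
      f"{pa.allocated_gbps():.0f}/{pa.capacity_gbps:.0f} Gb/s allocated")
# → 2 virtual adapters, 24/40 Gb/s allocated
```

Real implementations (such as PCIe SR-IOV) do this in hardware rather than software bookkeeping, but the shape of the abstraction is the same: many logical adapters, one physical link.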
So, to answer the question of what the present of I/O virtualization looks like, let us explore how enterprises use it for their business productivity:
- I/O virtualization significantly reduces the need for hardware resources. You can simply add more VMs to your existing servers.
- It offers a significant increase in resource utilization. Traditional NICs and HBAs were not efficient enough to handle the current load on servers.
- It reduces cable management overhead. It replaces the multiple network and storage connections to each server with a single cable that carries all of the traffic.
- Virtual I/O technologies allow dynamic scaling. Traditional I/O channels were fixed and static; by virtualizing the interfaces, an enterprise can easily add and remove virtual connections according to its needs.
- Enterprises enjoy peak network I/O performance from their switches, routers, and servers at a very low price. Why? Because they need fewer ports, cables, and adapters for their I/O connections.
The rapid transformation of enterprise computing environments is demanding innovation in managing an ever-growing pool of resources.
- Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) are newer technologies being adopted by businesses, and they are somewhat more efficient than present I/O virtualization. FCoE and lossless DCB Ethernet consolidate network and storage I/O traffic onto a shared transport. Because FCoE and DCB adapters run on the PCI Express bus, they can be used in an I/O virtualization environment and shared across multiple servers easily.
- Vendors are also developing I/O virtualization solutions around InfiniBand, a low-latency cluster networking technology used for server-to-server communication. It can serve as the high-speed carrier for an I/O virtualization infrastructure.
- Intel has been developing Scalable I/O Virtualization (Intel® Scalable IOV), a new hardware-assisted approach to I/O virtualization. Its goal is efficient and scalable sharing of I/O devices across a large number of VMs. Intel Scalable IOV promises more scalability at a lower cost than today's technology without compromising performance.
Currently, I/O virtualization is seen as a viable asset for increasing server performance. With the rapid expansion in the production and consumption of data, new innovations in virtualization will be required in the future. For now, though, I/O virtualization remains a one-stop solution for increased I/O utilization, and it will continue to deliver high-performance connectivity to servers at reduced cost.