The continued evolution of data center design and computing architectures has put an immense load on the underlying IT infrastructure. Organizations are struggling to minimize storage requirements, power consumption, and management costs. Virtualization technology is currently the leading choice for companies to address these challenges and increase the efficiency of their IT assets. Virtualizing computer hardware platforms, storage devices, and network resources can radically transform traditional computing and make it far more scalable.
Traditional I/O Virtualization:
Enterprises traditionally run a local area network (LAN) for server traffic and a separate storage area network (SAN) to connect servers to shared storage (as opposed to direct-attached storage, DAS). In a multi-server cluster, increased server efficiency placed a heavy load on each server's input/output connections. Virtualized servers kept these I/O pipes busy, yet the individual links rarely ran at full bandwidth. I/O virtualization was the next big innovation in virtualization technology: it replaced multiple dedicated I/O lines with a single shared link, substituting physical NICs (network interface cards) and HBAs (host bus adapters) with their software (virtual) equivalents. This improved the efficiency of I/O connections so that the remaining physical links could run near full capacity most of the time.
But even with the connections inside a server consolidated onto a single physical link, problems remained. First, separate physical links (Ethernet and Fibre Channel lines) were still used to carry the storage and server cluster traffic. Second, conventional Ethernet drops frames under congestion; TCP applications recover by retransmitting, but storage protocols such as Fibre Channel assume a lossless transport and cannot tolerate dropped frames. Lastly, multicore CPUs and server virtualization kept driving the demand for even higher-bandwidth network connections.
Now imagine that you have installed a 10 gigabit/second connection to support the increased storage and server traffic. Conventional Ethernet, however, simply drops frames when it is congested, and every drop forces TCP to retransmit and back off. Even with a 10 Gbit/s connection, congestion losses can throttle effective throughput back down toward the earlier 1 Gbit/s speed: roughly 90% under-utilization of the physical link.
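The throughput collapse described above can be estimated with the well-known Mathis model for steady-state TCP throughput, rate ≈ (MSS / RTT) × (C / √p). This is a rough sketch (the function name and the RTT/loss figures are illustrative assumptions, not measurements), but it shows how even a 0.1% loss rate drags a 10 Gbit/s link down to roughly 1 Gbit/s:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput via the Mathis model:
    rate ~= (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2).
    """
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# Assumed figures: 10 Gbit/s link, 0.5 ms in-data-center RTT,
# 0.1% congestion loss, standard 1460-byte MSS.
link_bps = 10e9
rate = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.0005, loss_rate=0.001)
print(f"achievable: {rate / 1e9:.2f} Gbit/s "
      f"({100 * rate / link_bps:.0f}% of the 10G link)")
# → achievable: 0.90 Gbit/s (9% of the 10G link)
```

The exact numbers depend heavily on RTT and loss rate, but the square-root dependence on loss is the point: eliminating drops entirely, which is what DCB does, is worth far more than adding raw bandwidth.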
Data Center Bridging (DCB):
Data center bridging (DCB) is a collection of standards developed to enhance the Ethernet protocol for use in data center environments. It creates a converged data center network infrastructure using Ethernet as the unified fabric.
The goal of DCB is to enable lossless, high-speed transport over Ethernet so that all data center applications can run over the same physical infrastructure.
Conventional Ethernet drops frames under congestion, which makes it unsuitable for transporting storage traffic. To eliminate these losses, lossless Ethernet builds on the following standards:
- Priority-based Flow Control (PFC, IEEE 802.1Qbb): An enhanced per-priority flow control mechanism for Ethernet that can pause individual traffic classes, eliminating data frame loss due to congestion on a converged network.
- Congestion Notification (CN, IEEE 802.1Qau): Provides end-to-end congestion management by signaling congestion back to traffic sources so they can rate-limit before frames are dropped; it is aimed primarily at protocols that lack native congestion control mechanisms of their own.
- Enhanced Transmission Selection (ETS, IEEE 802.1Qaz): Provides a common management framework for bandwidth assignment, enabling allocation of a guaranteed share of link bandwidth to each type of traffic sharing a converged network.
- Data Center Bridging Capabilities Exchange Protocol (DCBX): Conveys the capabilities and configuration of the other DCB features between neighboring devices, ensuring that a consistent DCB configuration is used across the network.
I/O Consolidation With FCoE:
Fortunately, there is a solution that gives finer-grained control over bandwidth allocation to ensure it is used more effectively. The Fibre Channel over Ethernet (FCoE) protocol enables existing high-speed Ethernet infrastructure to carry Fibre Channel traffic.
The goal of FCoE is to consolidate I/O connections and use a single set of Ethernet physical devices or adapters for both SAN and LAN traffic. With FCoE, network and storage traffic can be consolidated onto a single network.
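The mechanism behind this consolidation is simple encapsulation: FCoE (T11 FC-BB-5) carries a complete, unmodified Fibre Channel frame inside an Ethernet frame with EtherType 0x8906, turning the FC link-level SOF/EOF delimiters into header and trailer bytes. The sketch below is a simplified illustration, not a wire-accurate implementation; the function name is my own, and the SOF/EOF code points shown (SOFi3 = 0x2E, EOFn = 0x41) are representative values:

```python
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate_fc_frame(dst_mac, src_mac, fc_frame, sof=0x2E, eof=0x41):
    """Conceptual sketch of FCoE encapsulation: the whole Fibre Channel
    frame (header + payload + CRC) rides as the Ethernet payload.
    Byte layout is simplified for illustration.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: version + reserved bits, then the SOF delimiter byte.
    fcoe_header = bytes(13) + bytes([sof])
    fcoe_trailer = bytes([eof]) + bytes(3)    # EOF delimiter + reserved
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

fc_frame = bytes(28)   # placeholder: 24-byte FC header + 4-byte CRC
wire = encapsulate_fc_frame(bytes(6), bytes(6), fc_frame)
print(len(wire))       # 14 + 14 + 28 + 4 = 60 bytes
```

Because the FC frame is carried intact, existing Fibre Channel drivers, management tools, and zoning continue to work; what changes is only the physical transport underneath, which is why FCoE depends on DCB to keep that transport lossless.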
Advantages of FCoE:
- Fewer network interface cards required to connect to separate storage and IP networks
- Reduced number of cables and switches
- Reduced power consumption
- Lower maintenance and cabling costs
Deployment of 10 Gbps FCoE networks in the converged I/O ecosystem has given rise to a new type of server adapter, the converged network adapter (CNA). A CNA combines the functions of a Fibre Channel HBA and an Ethernet NIC in a single device, handling both storage I/O and LAN networking traffic.
FCoE and lossless (DCB) Ethernet are the key technologies enabling storage and network I/O convergence onto a shared transport. Data center managers can realize the maximum benefits by moving lower-speed LAN and SAN traffic to the new lossless 10GbE transport. Together, FCoE and DCB are building next-generation data centers that can handle massive traffic volumes, utilizing every available resource in the underlying network while reducing costs through I/O virtualization.