Storage Basics: Deciphering SESAs (Strange, Esoteric Storage Acronyms), Part 2

Tuesday Dec 16th 2003 by Mike Harwood

The storage industry is rife with so many acronyms it's almost impossible to keep on top of them all. Our new Storage Basics series adds one of its own while uncovering the mystery behind many others. Part 2 of the SESAs series looks at IB (InfiniBand), FSPF, VI, and DAFS.

In the first article in our Storage Basics: Deciphering Strange, Esoteric Storage Acronyms (SESAs) series, we examined some of the more common but often little-understood terms bandied about in the storage industry, including FCIP, iFCP, SoIP, NDMP, and SMI-S. In this article, we'll continue with a look at four more acronyms: IB (InfiniBand), FSPF, VI, and DAFS.

First, A Bit on Bus Architecture

Developments in I/O architecture are not difficult to trace. In the early 1980s we saw the introduction of the 8-bit expansion slots on the 8088-based XTs. Since that time we have seen the 16-bit Industry Standard Architecture (ISA) bus, the 32-bit Micro Channel Architecture (MCA) bus on IBM PS/2 systems, the Extended Industry Standard Architecture (EISA) bus, and the Video Electronics Standards Association (VESA) local bus that followed in 1992. All of this brings us to where we are today: the 32-bit Peripheral Component Interconnect (PCI) bus and the more recent 64-bit PCI eXtended (PCI-X) bus.

What does the history of I/O architecture have to do with modern storage needs? Quite a bit, actually. Since 1992, the PCI bus has been the standard bus architecture used with servers. However, the history of I/O bus development makes it clear that progress has been somewhat sedate compared to other technologies such as CPU power and memory bandwidth. With CPU advancements having outpaced the I/O bus, the result is a performance mismatch and a system bottleneck.

In many organizations today, the PCI bus is overwhelmed by the use of InterProcess Communication (IPC) or clustering cards, Fibre Channel cards, and a host of other high-bandwidth cards located on a single server. Because PCI uses a shared bus architecture, all devices connected to the bus must share a fixed amount of bandwidth, which means that as more devices are added to the bus, the bandwidth available to each one decreases.
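To put rough numbers on that, here is a back-of-envelope sketch in Python. The 133 MB/s figure is the theoretical peak of a standard 32-bit/33 MHz PCI bus; the device mix is hypothetical and meant only to illustrate the contention.

```python
# Back-of-envelope look at shared-bus contention on a 32-bit/33 MHz PCI bus.
# The 133 MB/s peak is the theoretical bus maximum; the device list is a
# hypothetical server configuration, not a measurement.

PCI_PEAK_MBPS = 133  # 32 bits x 33 MHz is roughly 133 MB/s theoretical peak

devices = [
    "Fibre Channel HBA",
    "Gigabit Ethernet NIC",
    "clustering/IPC card",
    "RAID controller",
]

# On a shared bus only one device can transfer at a time, so the peak
# bandwidth is effectively divided among whichever devices are active.
per_device = PCI_PEAK_MBPS / len(devices)
print(f"{len(devices)} active devices -> roughly {per_device:.0f} MB/s each at best")
```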

It's not only speed that limits the usefulness of the PCI bus in modern servers; scalability and fault tolerance are also issues. Scaling PCI requires expensive and awkward bridge chips on the board. As for fault tolerance, under most circumstances a failed PCI expansion card means the server must be taken offline to replace it. This introduces a single point of failure and the potential for server downtime.

Enter InfiniBand (IB)

This brings us to InfiniBand, an I/O architecture designed to address the shortcomings of the PCI bus and meet the I/O needs of modern organizations. InfiniBand's switched fabric architecture takes a completely different approach from the limited shared bus.

InfiniBand uses a point-to-point switched I/O architecture. Each InfiniBand communication link extends between only two devices, which allows the devices at either end of the communication path to have exclusive access to the full data path. To expand beyond the point-to-point communication, switches are used.

Interconnecting these switches creates a communication fabric, and as more switches are added, the aggregated bandwidth of the InfiniBand fabric increases. In addition, adding switches creates a greater level of redundancy, as multiple data paths can be accessed between devices. The following table highlights the differences between the PCI shared bus and the InfiniBand fabric.

Shared vs. Fabric Bus

                      Shared       Fabric
Topology              Shared       Switched
Connection Points     Minimal      High
Signal Length         Inches       Kilometers
Fault Tolerant        No           Yes
Scalable              No           Yes
Reliability           Minimal      Excellent

InfiniBand Continued

So if InfiniBand is designed to address the shortcomings of PCI, is it also designed to replace PCI? While some initially felt InfiniBand could spell the demise of PCI, it is actually designed to solve needs different from those PCI addresses.

InfiniBand focuses on server I/O issues, not those of the personal computer. Nor is the InfiniBand fabric designed to support consumer installations of expansion cards, another point that keeps it in the server realm rather than the desktop.

Before wrapping up our discussion of InfiniBand, let's review the three primary components that make up the InfiniBand fabric (a brief sketch of how they fit together follows the list):

Host Channel Adapter (HCA): The HCA is the interface that resides directly inside the server and provides the communication between the processor, the InfiniBand fabric, and the server's memory. The HCA can be added to a server via a PCI slot, or it can be integrated onto the system board.

Target Channel Adapter (TCA): The TCA allows I/O devices such as tape storage to be part of the fabric independent of a host computer. The TCA uses an I/O controller specific to the protocol in use (Ethernet, Fibre Channel, or SCSI).

Switch: The switch is the connection point for the HCAs and TCAs. The switch regulates traffic by looking at the route header and forwarding the data to the correct location. A connected group of switches is referred to as the fabric.
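To make those relationships concrete, here is a minimal, purely illustrative Python sketch of a fabric built from these three components. The class names, device names, and topology are hypothetical; real InfiniBand management software is, of course, far more involved.

```python
# Minimal, illustrative model of an InfiniBand-style fabric: HCAs sit in
# servers, TCAs front I/O devices, and switches tie everything together.
# Names and topology are hypothetical.

class Switch:
    def __init__(self, name):
        self.name = name
        self.ports = []          # channel adapters or other switches

    def connect(self, endpoint):
        self.ports.append(endpoint)

class HCA:                       # Host Channel Adapter, lives in a server
    def __init__(self, server):
        self.server = server

class TCA:                       # Target Channel Adapter, fronts an I/O device
    def __init__(self, device):
        self.device = device

# Two interconnected switches form a (very small) fabric.
sw1, sw2 = Switch("sw1"), Switch("sw2")
sw1.connect(sw2)
sw2.connect(sw1)

sw1.connect(HCA("app-server-01"))      # server-side endpoint
sw2.connect(TCA("tape-library-01"))    # storage-side endpoint

for sw in (sw1, sw2):
    print(sw.name, "->", [type(p).__name__ for p in sw.ports])
```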

Fabric Shortest Path First (FSPF)

Moving away from InfiniBand but staying within the realm of fabrics, we have the Fabric Shortest Path First (FSPF) protocol. A storage network uses redundant, interconnected switches to create a fully meshed fabric, which is essential for ensuring high availability, high performance, and load balancing.

When data is transmitted over a fabric network there are redundant paths or routes it can travel. The FSPF protocol provides a common mechanism to allow for efficient route selection. In other words, FSPF identifies the best path between two switches in the fabric and then updates routing tables to use that path.

Those familiar with IP networking will no doubt notice a name similarity between Fabric Shortest Path First and the Open Shortest Path First (OSPF) routing protocol used on IP networks. FSPF is indeed a derivative of its IP cousin, and both are link-state protocols.

FSPF is part of the Fibre Channel switch fabric (FC-SW) standards. A properly designed fabric requires knowledge of FSPF in order to minimize bottlenecks, supply adequate bandwidth, and minimize connection costs.
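FSPF computes routes from link-state information in much the same spirit as OSPF. The sketch below shows the general idea with a Dijkstra-style shortest-path computation over a hypothetical four-switch fabric; the switch names and link costs are made up for illustration and do not reflect any particular product or the exact FSPF cost metric.

```python
import heapq

# Illustrative link-state route selection, in the spirit of FSPF/OSPF:
# each switch advertises its links and costs, and every switch computes
# the lowest-cost path to every other switch. Hypothetical topology.

fabric = {
    "sw1": {"sw2": 1, "sw3": 4},
    "sw2": {"sw1": 1, "sw3": 1, "sw4": 5},
    "sw3": {"sw1": 4, "sw2": 1, "sw4": 1},
    "sw4": {"sw2": 5, "sw3": 1},
}

def shortest_paths(graph, source):
    """Dijkstra's algorithm: lowest-cost route from source to every switch."""
    dist = {node: float("inf") for node in graph}
    prev = {}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist[node]:
            continue                      # stale queue entry
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist[neighbor]:
                dist[neighbor] = new_cost
                prev[neighbor] = node     # remember the best previous hop
                heapq.heappush(queue, (new_cost, neighbor))
    return dist, prev

dist, prev = shortest_paths(fabric, "sw1")
print(dist)   # sw1 -> sw4 costs 3 via sw2 and sw3, not 6 via the direct sw2-sw4 link
```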


Virtual Interface (VI)

The next acronym up for review is the Virtual Interface (VI). VI was originally designed by industry heavyweights such as Intel, Microsoft, and Compaq as a standard for interconnecting computer clusters. As such, VI was designed to provide a common interface for clustering software regardless of the underlying networking technology. In addition, VI is designed to help eliminate the overhead caused by network communication.

The VI standard specifies a combination of hardware, firmware, and operating system driver interaction to increase the overall efficiency of network communication. In application, VI provides two key functions: reducing CPU load and reducing latency. To do this, VI allows for direct memory-to-memory data transfer.

Memory-to-memory transfer enables data to move directly between buffers, bypassing normal protocol processing. VI also allows for direct application access, which enables application processes to queue data transfer operations directly to VI-compliant network interfaces without involving the operating system.
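The following Python sketch is a conceptual model of those two ideas: a buffer registered up front and a work queue onto which the application posts transfer descriptors directly. It is not a real VI Architecture API; the class and method names are invented for illustration.

```python
from collections import deque

# Conceptual model of the Virtual Interface idea (not a real VIA API):
# the application registers buffers ahead of time, then posts transfer
# descriptors straight onto a queue that the NIC hardware would drain,
# with no per-transfer trip through the operating system.

class VirtualInterface:
    def __init__(self):
        self.registered = {}       # buffer name -> bytearray (pinned memory in real hardware)
        self.send_queue = deque()  # work queue the NIC would service

    def register_buffer(self, name, size):
        self.registered[name] = bytearray(size)
        return self.registered[name]

    def post_send(self, buffer_name, length):
        # The application queues a descriptor directly; no system call here.
        self.send_queue.append({"buffer": buffer_name, "length": length})

vi = VirtualInterface()
buf = vi.register_buffer("app-buffer", 4096)
buf[:5] = b"hello"
vi.post_send("app-buffer", 5)
print(vi.send_queue)   # the pending memory-to-memory transfer descriptor
```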

Direct Access File System (DAFS)

Fitting right into the discussion of VI is the Direct Access File System (DAFS). DAFS is a protocol that uses VI capabilities to provide memory-to-memory data transfers for clustered application servers. Using the VI architecture and memory-to-memory transfers, DAFS avoids the processing overhead the operating system would otherwise impose.

As a little background, TCP/IP can be quite taxing on system resources, as it requires significant CPU processing while data packets move through the protocol stack. DAFS avoids this overhead and can move the same data without the heavy CPU cost.

It does this by bypassing the protocol stack and the operating system to place data directly on the network link. Data moves from the application's buffers straight to the VI-capable NIC. In the process, protocol overhead is reduced and application throughput increases, taking a significant load off the processor.
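As a purely conceptual illustration of where that savings comes from, the sketch below simply counts the buffer-copy steps on a conventional networked file access versus a DAFS-style direct placement. The step lists are simplified and hypothetical, not a measurement of any real stack.

```python
# Conceptual comparison (not a benchmark): how many times a block of data
# gets copied on a conventional TCP/IP file-access path versus a
# DAFS/VI-style direct placement path. Step lists are simplified.

CONVENTIONAL_PATH = [
    "NIC buffer -> kernel socket buffer",
    "kernel socket buffer -> file system cache",
    "file system cache -> application buffer",
]

DIRECT_PATH = [
    "VI-capable NIC -> registered application buffer",  # DMA, no kernel copies
]

def summarize(label, path):
    print(f"{label}: {len(path)} copy step(s)")
    for step in path:
        print(f"  - {step}")

summarize("Conventional TCP/IP path", CONVENTIONAL_PATH)
summarize("DAFS/VI direct path", DIRECT_PATH)
```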


In the past two Storage Basics articles we have taken a quick look at some of the acronyms prevalent in the storage industry today. Of course, we've only just scratched the surface, as there are plenty more out there, and new ones seem to pop up on a daily basis. As these first two articles have been a response to email queries, we look forward to more emails from you and the opportunity to unravel the mysteries of more acronyms in future SESA articles.

» See All Articles by Columnist Mike Harwood
