FCoE Struggles to Gain Traction


On the surface, Fibre Channel over Ethernet (FCoE) seems like a no-brainer: a lot less hardware, cabling, installation, and management, and it is less expensive to boot.

“When you take switches, cabling and adapters into account, FCoE is 33 percent cheaper to deploy than traditional networks and holds the promise of 50 percent savings on power and cooling,” said Bob Laliberte, an analyst with Enterprise Strategy Group (ESG).

He said FCoE is typically deployed in a “top-of-rack” configuration, meaning an FCoE-enabled switch sits at the top of the rack instead of a separate FC switch and Ethernet switch. This configuration assumes the use of converged network adapters (CNAs) or a universal LAN on motherboard (LOM) capable of supporting FCoE. If the applications warrant it, there will usually be redundant connections: at least two FCoE connections from each server to the top-of-rack FCoE switch, as opposed to two FC and two Ethernet connections per server. The top-of-rack switch then sends the Ethernet traffic to the LAN and the FC traffic to the SAN.
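To make the cabling math concrete, here is a minimal sketch in Python comparing adapter ports, cables and top-of-rack switches for a rack of servers under the two designs described above. The rack size and counts are illustrative assumptions, not figures from ESG.

```python
# Illustrative comparison of per-rack connectivity: traditional (separate
# FC + Ethernet fabrics) versus a converged top-of-rack FCoE design.
# SERVERS_PER_RACK is an assumed value for illustration only.

SERVERS_PER_RACK = 20

def traditional(servers: int) -> dict:
    """Each server gets 2 FC HBA ports and 2 Ethernet NIC ports,
    cabled to separate FC and Ethernet top-of-rack switches."""
    return {
        "adapter_ports": servers * (2 + 2),
        "cables": servers * (2 + 2),
        "top_of_rack_switches": 2,  # one FC switch, one Ethernet switch
    }

def converged_fcoe(servers: int) -> dict:
    """Each server gets 2 CNA ports carrying both LAN and SAN traffic,
    cabled to a single FCoE-enabled top-of-rack switch."""
    return {
        "adapter_ports": servers * 2,
        "cables": servers * 2,
        "top_of_rack_switches": 1,
    }

if __name__ == "__main__":
    old, new = traditional(SERVERS_PER_RACK), converged_fcoe(SERVERS_PER_RACK)
    for key in old:
        print(f"{key}: {old[key]} -> {new[key]} (saves {old[key] - new[key]})")
```

Halving the adapter ports and cables per server is where the hardware, power and cooling savings cited by ESG come from.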

“Long-term predictions are that most connectivity options are focused on Ethernet-based transports,” said Laliberte. “If you look at the CNA market, the universal approach is catching on — FCoE, iSCSI, Ethernet and even server-to-server connectivity in a single card or chip.”

Similar to a unified storage system, which can accept any protocol, the associated servers would have universal connectivity adapters able to handle almost any Ethernet-based protocol, whether via chips on the motherboard or via adapter cards.

Despite these advantages, users remain cool to FCoE. ESG placed adoption at only 9 percent of users at the end of 2010. So what’s going on?

What’s Not to Like About FCoE

Dennis Martin, an industry analyst with Demartek, reminds us that storage infrastructure typically changes slowly. He said that only early adopters were willing to use top-of-rack switches to connect to existing storage networks. But the next phase, just getting under way, is being assisted by core networking support and wider adoption of FCoE adapters, as well as some FCoE storage targets. As vendors support more ways of deploying the technology, take-up rates improve. He recommends enterprises consider FCoE as part of their strategic IT initiatives.

“FCoE should be considered in long-term planning, in new equipment acquisitions and in data center build-outs,” said Martin.

ESG user surveys suggest FCoE penetration should exceed 25 percent by the end of 2012. Looking ahead, the analyst community is painting a rosier picture for FCoE.

“2011 is shaping up nicely for broader FCoE adoption on many dimensions,” said Greg Schulz, an analyst with StorageIO Group. “The vendors continue to evolve their technologies as well as enhance product maturity, feature functionality and interoperability from server adapters along with operating system and hypervisor driver support to networking switches and routers to storage systems along with associated management tools.”

He doesn’t see FCoE hitting full stride this year, though. The reason: rather than being a quick performance fix, FCoE requires a heavy-duty infrastructure change-out and upgrade.

“FCoE is about a longer duration infrastructure item that will be around for the next decade that takes time to ramp up,” said Schulz.

Standing in the way are a number of factors. Some customers, he said, are worried about rushing in to purchase expensive CNAs, only to be left high and dry as the industry moves beyond the early generations of this technology.

Technology has also been a barrier. Some FCoE solutions have not supported multi-hop configurations. Without multi-hop, FCoE traffic can only travel from the server to a single switch and then to a storage array.

“There is a lot of work being done to address this issue via various standards to enable end-to-end multi-hop,” said Laliberte.

Vendors, too, are projecting a mixed message. Some, like Cisco (followed by EMC and NetApp), are out front pushing and driving the industry and ecosystem. Brocade, on the other hand, is supporting its legacy FC installed base while trying to assimilate its Foundry Ethernet-based acquisition. Thus, the level of push varies considerably from vendor to vendor.

Another barrier is cultural — how to manage a converged environment: Does the server, storage or networking team take responsibility, and, if so, for what?

“Organizations looking to fully take advantage of converged technologies need to look at how their technical teams can work across organizational boundaries in a virtual team or converged manner,” said Schulz. “But the promise is alluring: On a single CNA and network, you can run iSCSI on IP on your Ethernet stack, while Fibre Channel runs on your FCoE stack concurrently yet logically isolated.”

What this means is that servers and environments that prefer IP and need block storage can use iSCSI, while others in the same environment that need block storage can use FCoE. A storage system can be configured to respond to some servers over iSCSI, using lower-cost adapters or no special adapters at all, while other servers access the same system via FCoE. That flexibility benefits both the server and the storage side.
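To illustrate the “concurrently yet logically isolated” point, here is a minimal sketch of how frames on a converged link can be separated by Ethertype: iSCSI rides inside ordinary IP traffic (Ethertype 0x0800), while FCoE frames carry their own Ethertype (0x8906) and are typically mapped to a lossless priority class. The Ethertype values are standard; the rest of the model is a conceptual illustration, not how any particular CNA or switch implements classification.

```python
# Conceptual sketch: classifying frames on a converged Ethernet link.
# iSCSI travels inside ordinary IP/TCP frames, while FCoE frames use
# Ethertype 0x8906 and are usually given a lossless priority class.

from dataclasses import dataclass

ETHERTYPE_IPV4 = 0x0800   # carries iSCSI (and other LAN/IP traffic)
ETHERTYPE_FCOE = 0x8906   # encapsulated Fibre Channel frames

@dataclass
class Frame:
    ethertype: int
    payload: bytes

def classify(frame: Frame) -> str:
    """Return which logical 'stack' a frame belongs to on a converged link."""
    if frame.ethertype == ETHERTYPE_FCOE:
        return "SAN (FCoE, lossless priority class)"
    if frame.ethertype == ETHERTYPE_IPV4:
        return "LAN/IP (may include iSCSI block traffic)"
    return "other"

if __name__ == "__main__":
    for f in (Frame(ETHERTYPE_FCOE, b"FC frame"), Frame(ETHERTYPE_IPV4, b"iSCSI PDU")):
        print(hex(f.ethertype), "->", classify(f))
```

The two traffic types never mix at the logical layer even though they share the same CNA port and cable, which is what makes the converged design workable for block storage.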

Overall, though, Schulz is bullish — FCoE is coming, and coming soon.

“FCoE has a very bright future as a technology that will be around for the next couple of decades in some shape or form, co-existing with iSCSI and SAS as well as NAS, along with object-based access,” said Schulz. “For networked storage, at the physical layer all roads lead to Ethernet; at the logical and protocol layer, while IP may be the destination long term, near-term FC moving onto LAN is a giant step for SAN-kind.”

Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in geology and geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).
