By Steve McMillan, SBS Technologies
For real-time embedded systems that need high-bandwidth communications, Ethernet is the communications link most designers think of first. Yet protocol overhead and latency can undermine Ethernet's performance in distributed real-time applications such as storage area networks. Competing approaches, such as Fibre Channel and proprietary links, may be a better choice. To find out, look into actual performance and system issues.
Many real-time embedded systems follow a distributed model, using several nodes that each handle part of the system task. A data-gathering system, for instance, may have a group of data-collection nodes reporting to a single data-storage node that controls the system. A storage area network has a single control and interface node that feeds data to several storage nodes. What is common to such systems is their need for nodes to communicate with one another, passing control signals and data.
In selecting a communications channel, bandwidth is often the designer's major concern. That's particularly true in data-intensive systems, like storage networks. Several high-bandwidth channels, both standards-based and proprietary, are available. Ethernet and Fibre Channel head the standards-based list, with InfiniBand trying to join in. A number of proprietary channels are also available: SBS Technologies' DataBLIZZARD, a high-bandwidth channel; Sky Computers' SKYchannel, a packet-based architecture; and Creative Electronic Systems' BP-Net, an interprocess communications system.
When raw bandwidth is the prime factor in an application, comparisons among the alternatives seem easy to make. Ethernet offers 10-Mbit/s, 100-Mbit/s, and 1-Gbit/s data rates now, with 10 Gbits/s in the future. Fibre Channel has 1- and 2-Gbit/s bandwidths, with 4 Gbits/s coming. On raw numbers, then, 10-Gbit Ethernet looks like the highest-bandwidth option.
When 1 Gbit/s is fast enough for an application, though, all of the choices seem comparable. At this point in the analysis, most designers would then look to other considerations, such as cost and availability, where standards-based channels seem to have the edge. InfiniBand is an unproven technology, but both Ethernet and Fibre Channel have multiple vendors offering compatible products. That, in turn, means lower cost due to price competition, as well as freedom in selecting a vendor. In addition, tools and utility software are widely available and well tested.
Proprietary channels are typically sole-source alternatives, which designers traditionally view as more expensive and less well supported. Yet a proprietary channel may have some of the cost and support advantages of a standards-based channel if it employs standards-based components for part of its design. Many do.
Designers also tend to believe that sole-source alternatives eliminate freedom of choice in purchasing. In practice, however, companies often stick to a preferred vendor even for standards-based products. They realize that the benefits of working with a stable hardware vendor can often outweigh the benefits of price competition.
It's clear, then, that choosing among the alternatives requires looking beneath the surface comparisons. The problem with direct bandwidth comparisons, for example, is that they don't offer insight into how a channel will perform in actual operation. Communication between embedded nodes involves more than passing data. During initialization, nodes need to tell each other how they are set up. They will also frequently need to transfer short control messages to synchronize software processes. Handshaking and other communications protocols further contribute to the traffic on the channel, all reducing the achievable bandwidth for data.
To evaluate channel performance effectively, several factors must be considered. One is the channel's protocol, which has a strong effect on the sustainable data rate. The popular TCP/IP, often paired with Ethernet, can eat up as much as 70% of a channel's bandwidth with protocol overhead. Fibre Channel's remote DMA (RDMA) protocol, on the other hand, leaves 90% of a channel's capacity available for data.
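A quick calculation shows how much those percentages matter in practice. The sketch below simply applies the overhead figures cited above to a nominal 1-Gbit/s link; the numbers are illustrative assumptions, not measurements.

/* Payload bandwidth left after protocol overhead, using the
 * percentages cited in the text (assumed, not measured). */
#include <stdio.h>

int main(void)
{
    const double link_mbps      = 1000.0;  /* nominal 1-Gbit/s link     */
    const double tcpip_overhead = 0.70;    /* up to 70% lost to TCP/IP  */
    const double rdma_available = 0.90;    /* RDMA leaves 90% for data  */

    printf("TCP/IP payload: %.0f Mbit/s\n",
           link_mbps * (1.0 - tcpip_overhead));
    printf("RDMA payload:   %.0f Mbit/s\n",
           link_mbps * rdma_available);
    return 0;
}

On identical hardware, that works out to roughly 300 Mbits/s versus 900 Mbits/s of usable payload, a threefold difference before latency is even considered.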
Another factor that affects a channel's performance is the overhead that the protocol imposes on the rest of the embedded software: while the channel waits for the software on both sides of the link, it sits idle, further reducing the sustainable data rate. Compare, for instance, a block data transfer using a variable-buffer sockets-based protocol with the RDMA protocol. To send data using sockets, the sending node must first tell the receiving node the size of the data block that's coming. The receiving node must then allocate the buffer and send a signal indicating its readiness to receive data. When the transfer is complete, there are more acknowledgements to be sent. The RDMA protocol, however, has the sending node target a pre-defined memory block. Data transfer begins without preliminaries, requiring only a message telling the receiving node when the transfer is complete and how large it was.
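The difference is easy to see in code. Below is a sketch of the sender's side of such a variable-buffer, sockets-style transfer. The message formats are hypothetical and error handling is minimal, but every recv() here is a round trip that an RDMA write avoids.

/* Sender's side of a variable-buffer, sockets-style block transfer.
 * Message formats are illustrative only. */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

int send_block(int sock, const void *buf, uint32_t len)
{
    uint32_t ready = 0, ack = 0;

    /* 1. Tell the receiver how big the block is. */
    if (send(sock, &len, sizeof len, 0) != (ssize_t)sizeof len)
        return -1;

    /* 2. Wait for the receiver to allocate a buffer and say "go". */
    if (recv(sock, &ready, sizeof ready, 0) != (ssize_t)sizeof ready || !ready)
        return -1;

    /* 3. Send the data itself. */
    if (send(sock, buf, len, 0) != (ssize_t)len)
        return -1;

    /* 4. Wait for the final acknowledgement. */
    if (recv(sock, &ack, sizeof ack, 0) != (ssize_t)sizeof ack || !ack)
        return -1;

    return 0;  /* an RDMA write needs only the final completion message */
}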
Along with protocol, factors such as packet size can strongly affect a channel's data bandwidth. The traditional wisdom is that bigger is better, but that may not always be true. With 100-Mbit/s Ethernet, the gain from increasing packet size for TCP/IP transfers quickly plateaus. With Gigabit Ethernet, increasing packet size too far actually reduces the effective data bandwidth (see Fig. 1). The same holds true when using TCP/IP on 2-Gbit/s Fibre Channel. The RDMA and SCSI protocols don't exhibit the same problem (see Fig. 2): their effective data bandwidth continues to increase with packet size. Proprietary channels often use protocols optimized for the application, resulting in even better performance with large packets.
Designers tend to overestimate the value of large packet sizes for data, though. To fully understand a channel?s effective bandwidth, designers must know the sizes of both large-packet data transfers and small-packet control transfers, because differences in handling small packets can be significant (see Figs. 1 and 2 again). These differences can quickly add up when an application uses numerous control signals. In general, the more tightly the system?s software processes are coupled, the more often they will be sending control messages. To get an accurate picture of the data rate a channel will provide, designers need to combine the time spent in control transfers with the time spent sending data.
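One way to make that combination concrete is a first-order model that charges each control message its full latency while bulk data moves at the channel's large-packet rate. The function below is such a sketch; all of its parameters are values a designer would measure for the link in question, not published figures.

/* Effective data rate once small control transfers are mixed in
 * with bulk data transfers (a first-order model, not a benchmark). */
double effective_mbps(double data_bytes, double bulk_mbps,
                      unsigned ctrl_msgs, double ctrl_latency_us)
{
    double data_time_s = (data_bytes * 8.0) / (bulk_mbps * 1e6);
    double ctrl_time_s = ctrl_msgs * (ctrl_latency_us / 1e6);

    return (data_bytes * 8.0) / ((data_time_s + ctrl_time_s) * 1e6);
}

For example, a 1-Mbyte transfer at 900 Mbits/s takes about 9 ms; adding 100 control messages at 50 µs each adds 5 ms more, cutting the effective rate to under 600 Mbits/s.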
Communications latency is also a factor in how a real-time system performs. No real-time designer would think of picking an operating system without looking at interrupt latency. Yet when it comes to communications latency, most designers know only that smaller is better.
Communications latency is the length of time it takes for a control packet from one process to reach a remote process. It includes delays due to protocol formatting as well as delays resulting from traffic on the link. Here the alternatives differ widely.
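Latency and its variation are straightforward to measure with a ping-pong test: send a minimal control packet to the remote node, have it echoed back, and time the round trip. The sketch below assumes a connected socket and an echo service on the far node; half the round trip approximates the one-way latency, and the spread across many repetitions exposes the jitter.

/* Time one ping-pong exchange of a one-byte control packet. */
#include <time.h>
#include <sys/types.h>
#include <sys/socket.h>

double round_trip_us(int sock)
{
    char ping = 0, pong;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    send(sock, &ping, 1, 0);           /* minimal control packet */
    recv(sock, &pong, 1, 0);           /* wait for the echo      */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) * 1e6 +
           (t1.tv_nsec - t0.tv_nsec) / 1e3;
}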
Ethernet, for instance, is not deterministic. It can drop or reroute small packets such as command data without regard to the timing of the transmission, making latency vary widely. In addition, it doesn't handle small packets well. Ethernet controllers tend to bundle small packets with larger ones for the same destination, further compromising channel latency.
Fibre Channel doesn't handle small packets well either, but whether or not it's deterministic depends on the protocol used with it. Its latency is deterministic and a modest 52 µs when using the RDMA protocol, but it's not deterministic with the SCSI or TCP/IP protocols.
Proprietary channels, in contrast, are often designed both to be deterministic and to deliver superior latency performance.
Designers also need to account for the cost and complexity of system nodes and their software. Some software is always needed, regardless of the communications channel. Here again, though, the channels have radically different needs.
An Ethernet communications link requires a relatively smart node because its send-and-receive protocol demands that the two nodes be in sync. In other words, if one node is sending but the other isn't listening, data will be lost. Incorporating TCP/IP within the Ethernet data stream further adds to the node's intelligence requirements, because further interpretation is needed to decipher the data. A protocol such as RDMA, which runs on Fibre Channel but is not available on Ethernet, needs no interpretation: the initialization of the data link includes a destination in memory for RDMA transfers. Once the link is set up, the channel controller can transfer incoming data to memory without further interpretation or processor intervention. The receiving node is therefore less complex and less expensive to create.
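The contrast shows up in how little the receiving side has to do. The sketch below illustrates the RDMA-style arrangement just described: a buffer is registered once at link initialization, and from then on the channel controller deposits incoming data directly into it. The rdma_* calls shown in comments are hypothetical stand-ins for a vendor's API, not a real library.

#include <stdint.h>
#include <stdlib.h>

struct recv_region {
    void     *base;   /* pre-allocated destination in memory */
    uint32_t  len;
};

/* Done once, during link initialization. */
struct recv_region setup_receive_region(uint32_t len)
{
    struct recv_region r;
    r.base = malloc(len);   /* a real design would check for NULL */
    r.len  = len;
    /* rdma_register(r.base, r.len);        pin the buffer for DMA
       rdma_publish(peer, r.base, r.len);   tell the sender its target
       (hypothetical vendor calls)                                   */
    return r;
}

/* At run time, the channel controller writes incoming data straight
 * into r.base with no processor intervention; software sees only a
 * completion message carrying the transfer size. */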
In fact, the availability of a specific protocol on a communications channel can simplify the whole system design considerably. A network storage application, for example, can benefit from Fibre Channel's ability to handle the SCSI protocol in its data stream. The nodes in the network handle data flow tasks such as file addressing and error checking, and none of the protocols available offer any particular advantage for these tasks. Ultimately, however, the storage node's controller must transfer data to and from a disk drive.
Here is where protocol makes a difference. Today's disk drives use either an IDE or a SCSI interface. For most channel protocols, the node controller must reformat data as it passes to or from the node, and in most cases reformatting requires a fair amount of processing power. Using the SCSI protocol over Fibre Channel, however, streamlines the transfers: the data stream is already in the format a SCSI disk drive requires, allowing data to move directly to and from the drive without the overhead of reformatting.
Note, though, that proprietary channels can work even better in some applications. Free of the compromises a general-purpose standard imposes, a proprietary channel will often target a specific class of application. With a well-bounded problem to solve, the channel's designers can obtain top speed, lowest latency, and minimal complexity.
Figure 1: Fixed packet overhead usually means that link bandwidth will improve with increasing packet size. Yet 10- and 100-Mbit/s Ethernet bandwidths plateau with 64-kbit packets, and 1-Gbit/s Ethernet degrades for packet sizes above 128 kbytes.
Figure 2: Running the SCSI protocol over Fibre Channel improves link bandwidth more than threefold over TCP/IP. Using RDMA instead can add another 10% to 30% for all but the largest packets.