Mil/Aero Insider: June 2011
GE & Serial Switched Fabrics
Peter Cavill
Product Manager
GE Intelligent Platforms
With the creation of the VPX – and subsequently, OpenVPX – standards, serial switched fabrics suddenly found themselves center-stage in embedded computing systems for military and aerospace applications. Designed in response to the rapidly accelerating need to move ever more data at ever higher speeds, serial switched fabrics overcome the inherent performance limitations of bus-based architectures such as VME and CompactPCI.
Within OpenVPX, four ‘planes’ are defined to clarify how the fabrics are intended to be used – the Data Plane, the Expansion Plane, the Control Plane and the Management Plane. The Data Plane is where bulk data is transferred between peers and via switches. The Expansion Plane, typically implemented with PCI Express, provides connections between host and expansion boards – where peripheral or end-point devices are located. The Control Plane carries lower data rate, housekeeping-type communications, implemented over Gigabit Ethernet, and the Management Plane is implemented by a low-speed management bus running a protocol such as IPMI.
The Data Plane is where the biggest fabric decisions have to be made, and today there are four widely implemented serial switched fabrics: PCI Express, 10 Gigabit Ethernet, InfiniBand and Serial RapidIO. Superficially, these fabrics might appear to compete with one another – but while they share many similarities, each has individual strengths and weaknesses that make them complementary rather than competitive. Those strengths and weaknesses are not just technical: they are also, inevitably, about the market. Equally inevitably, they translate into suitability for different applications.
The implication of this is that embedded computing manufacturers like GE are supporting not just one or two serial fabrics, but all four of them – but with the emphasis on choosing the most appropriate fabric for the anticipated application.
So what are the factors that guide that choice?
In many embedded military and aerospace applications, size is important. The physical environment in which a subsystem will be installed will dictate many subsequent choices. For many deployments today, space and weight are constrained – pointing to a 3U solution.
The challenge for board manufacturers is how to optimize use of the 3U form factor’s limited board real estate – how to deliver the most performance, the most functionality, the most flexibility by making smart silicon choices. Beyond this: by definition, the available pin count on a 3U VPX single board computer is limited by comparison with the 6U alternative – so each pin becomes a precious commodity.
These considerations make PCI Express the natural fabric for 3U boards. The PCI Express protocol is supported natively by processors from both Intel and Freescale, eliminating the need for any form of conversion or bridging in additional silicon. PCI Express to the backplane must be provided anyway as a fundamental part of the board’s design (required for the Expansion Plane, to allow communication with peripherals) – so, in effect, it comes ‘for free’. And using PCI Express as the Data Plane fabric in 3U systems leaves the maximum number of pins available for the customer’s application.
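One practical consequence of that native support is that a host can discover every device the root complex enumerated at start-up with no fabric-specific software at all. A minimal sketch, assuming a Linux host, that lists PCI Express devices via sysfs:

```c
/* Sketch: enumerate PCI Express devices from user space via Linux
 * sysfs. Assumes a Linux host; every device the root complex found
 * at start-up appears under /sys/bus/pci/devices. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

static void read_id(const char *slot, const char *file, char *out, size_t len)
{
    char path[256];
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/%s", slot, file);
    FILE *f = fopen(path, "r");
    out[0] = '\0';
    if (f) {
        if (fgets(out, (int)len, f))
            out[strcspn(out, "\n")] = '\0';
        fclose(f);
    }
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        char vendor[16], device[16];
        read_id(e->d_name, "vendor", vendor, sizeof vendor);
        read_id(e->d_name, "device", device, sizeof device);
        printf("%s  vendor=%s device=%s\n", e->d_name, vendor, device);
    }
    closedir(d);
    return 0;
}
```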
Next to Serial RapidIO and InfiniBand, PCI Express certainly has less ‘sex appeal’: it was, after all, originally designed as a simple point-to-point mechanism for communicating with high-speed peripheral devices. It does, however, have some persuasive advantages.
First among these is that PCI Express is ubiquitous, with enormous support standing behind it. For many developers it is familiar, and the hardware and software infrastructure that surrounds it makes it a powerful and cost-effective choice. Second is the weight of research and development behind it – development that has seen the introduction of third-generation technology (PCI Express Gen 3) capable of four times the bandwidth of the first generation. It seems likely that, in bandwidth terms, PCI Express will stay ahead.
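The arithmetic behind that fourfold figure is straightforward: Gen 1 signals at 2.5 GT/s with 8b/10b encoding, while Gen 3 signals at 8 GT/s with the far more efficient 128b/130b encoding. A quick per-lane check:

```c
/* Per-lane bandwidth arithmetic behind the 'Gen 3 is ~4x Gen 1'
 * claim: Gen 1 signals at 2.5 GT/s with 8b/10b encoding, Gen 3 at
 * 8 GT/s with 128b/130b encoding. */
#include <stdio.h>

int main(void)
{
    double gen1 = 2.5 * (8.0 / 10.0);    /* = 2.00 Gbit/s per lane */
    double gen3 = 8.0 * (128.0 / 130.0); /* ~ 7.88 Gbit/s per lane */
    printf("Gen 1: %.2f Gbit/s/lane\n", gen1);
    printf("Gen 3: %.2f Gbit/s/lane (%.2fx Gen 1)\n", gen3, gen3 / gen1);
    return 0;
}
```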
If PCI Express has a disadvantage, it is that it was not designed for the peer-to-peer traffic patterns that other switched fabrics were built around: it assumes a single root complex that enumerates and owns the entire device tree, so two processors cannot simply face each other across a link. However, GE has been able to address this perceived shortcoming with the development of a unique capability that runs at initialization and allows PCI Express to facilitate peer-to-peer operations.
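The details of GE’s capability are beyond the scope of this article, but one common way – not necessarily GE’s – of joining two PCI Express hosts is a non-transparent bridge (NTB), which hides each host’s device tree from the other and translates addresses between windows configured at initialization. A purely illustrative sketch of that translation, with hypothetical addresses:

```c
/* Purely illustrative: the address translation a non-transparent
 * bridge (NTB) performs so that two PCI Express hosts can exchange
 * data peer-to-peer. A write landing in a window on one host is
 * redirected into a buffer on the other. All names, addresses and
 * sizes here are hypothetical. */
#include <stdio.h>
#include <stdint.h>

/* Map an address inside the local NTB window to the corresponding
 * address on the remote host; returns 0 for addresses outside it. */
static uint64_t ntb_translate(uint64_t addr, uint64_t window_base,
                              uint64_t window_size, uint64_t remote_base)
{
    if (addr < window_base || addr >= window_base + window_size)
        return 0;
    return remote_base + (addr - window_base);
}

int main(void)
{
    /* Set up at initialization: a 64 MiB window at 0x90000000 on
     * this host maps to 0x20000000 on the peer. */
    uint64_t t = ntb_translate(0x90001000ULL, 0x90000000ULL,
                               64ULL << 20, 0x20000000ULL);
    printf("0x90001000 -> 0x%llx\n", (unsigned long long)t);
    return 0;
}
```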
Where board real estate and pins are more plentiful – as in the case of the 6U form factor – different switched fabric choices can be made. Here, the guiding principle is to maximize performance by leveraging the architecture and functionality of the host processor. For example, Freescale’s PowerPC-based processors natively support both PCI Express and Serial RapidIO, so it makes sense that 6U boards featuring those processors should support Serial RapidIO on the Data Plane. There is then no need for any bridging between processor and switched fabric, with the cost, latency and additional power dissipation that bridging inevitably incurs.
The same thinking applies to 6U SBCs based on the Intel architecture. As well as support for PCI Express, GE's Intel-based 6U boards feature support for InfiniBand – a switched fabric technology in whose development Intel played a leading role. As with the PowerPC/Serial RapidIO combination, there is a natural synergy – which translates into better performance and lower cost – between Intel and InfiniBand.
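For readers new to InfiniBand programming, device discovery on such a board is handled by the standard verbs library. A minimal sketch, assuming a Linux host with libibverbs installed (link with -libverbs):

```c
/* Minimal sketch: enumerate the InfiniBand host channel adapters a
 * board exposes, using the standard libibverbs API. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) { perror("ibv_get_device_list"); return 1; }
    for (int i = 0; i < num; i++)
        printf("HCA %d: %s\n", i, ibv_get_device_name(devs[i]));
    ibv_free_device_list(devs);
    return 0;
}
```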
That synergy is not just a technology synergy, however. The Intel/InfiniBand combination is dominant in server farms – installations which deliver massive computing power by coupling together multiple servers or blades. With the growing requirement in the military/aerospace market to gather, process and disseminate huge volumes of data, those server farms are increasingly being replicated in defense applications. For customers looking to create similar profiles, there are sound commercial as well as technical reasons why the Intel/InfiniBand combination is a compelling one.
Such customers are also looking to harness the considerable processing power of GPGPU technology – a field in which GE is an acknowledged leader, with a broad range of NVIDIA CUDA-enabled solutions. Systems for processing the data volumes associated with, for example, radar increasingly include clusters of GPGPUs operating in parallel.
In this area there has been an interesting development: NVIDIA and Mellanox have worked together on an efficient mechanism that allows InfiniBand to deliver data directly to the GPGPU’s memory – rather than incurring the performance overhead of delivering the data first to the host processor’s memory and then transferring it to the GPGPU, as would be the case with other switched fabrics. For GPGPU-based applications, this development makes InfiniBand a compelling choice (see the GE white paper “Switched Fabrics Support High-Performance Embedded Computing for Military and Aerospace Platforms”).
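The mechanism the two companies developed is marketed by NVIDIA under the GPUDirect name. From user space, the pattern amounts to registering GPU memory directly with the InfiniBand verbs stack; the sketch below assumes a Linux host with CUDA and libibverbs installed and a driver stack that supports such peer DMA, and is illustrative rather than a definitive implementation:

```c
/* Illustrative sketch: register GPU memory with the InfiniBand
 * verbs stack, the user-space pattern behind direct HCA-to-GPU
 * transfers. If the platform does not support peer DMA to the GPU,
 * the registration simply fails and data must instead be staged
 * through host memory. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no HCA found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    void *gpu_buf = NULL;
    size_t len = 1 << 20;                /* 1 MiB buffer on the GPU */
    cudaMalloc(&gpu_buf, len);

    /* If peer DMA is supported, the HCA can now read and write this
     * device pointer directly, bypassing host memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("GPU buffer registration %s\n", mr ? "succeeded" : "failed");

    if (mr) ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```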
For Intel-based 6U single board computers, GE also provides optional support for 10 Gigabit Ethernet on the Data Plane. Perhaps the least highly regarded of the four main switched fabrics, 10 Gigabit Ethernet nonetheless has a significant advantage that, for many customers and applications, is a compelling one: the ease with which it can be implemented.
Because Ethernet was originally designed for connecting boxes across a local area network, and has significant longevity in the market, it comes with a rich set of communications protocols. Its protocol stack is almost universally available, making the development of application software simple in the extreme.
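That ubiquity means application code written to the standard sockets API runs unchanged whether the link beneath it is Gigabit or 10 Gigabit Ethernet. A minimal sketch, with placeholder address and port:

```c
/* Minimal TCP sender: the same Berkeley sockets code runs unchanged
 * over Gigabit or 10 Gigabit Ethernet. The peer address and port
 * below are placeholders. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                      /* placeholder */
    inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr); /* placeholder */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) == 0) {
        const char msg[] = "sensor frame";
        send(fd, msg, sizeof msg, 0);
    }
    close(fd);
    return 0;
}
```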
Given the point-to-point nature of serial switched fabrics, switching is, of course, fundamentally important. In the VPX/OpenVPX world, provision is made for distributed switching – typically implemented on multiple single board computers – and centralized switching, implemented by a dedicated switch card. GE offers PCI Express centralized switching solutions in both 6U and 3U form factors (the PEX440 and PEX430 respectively) and a 10GigE centralized switching solution in 6U (the GBX460). Many of GE’s individual payload cards also offer distributed switching capabilities, enabling the end user to define many different architectures that optimize the data flow in their application.
GE's serial switched fabric strategy is a coherent and compelling one, based as it is not on an arbitrary choice to offer primary support for only one, but on an understanding that there is no single ‘best’ solution – no 'one size fits all'. Physically constrained environments, synergies with processing architectures, cost-effectiveness, minimizing SWaP, the need to maximize customer choice, commercial support, the specific requirements of individual applications, product roadmaps – all of these and more have helped shape GE’s position. It is a strategy that has developed in parallel with the development of the VPX/OpenVPX market, and that has consistently prioritized price/performance, functionality and flexibility. It will continue to develop in line with changing customer requirements.