Embedded Insider: January 2010 - Volume 5, Issue 1
96 Processors on one 3U VPX Board
Thanks to our partnership with NVIDIA®, you can now take advantage of massively parallel General Purpose processing on Graphics Processing Units (GPGPU) in rugged applications such as radar, sonar, image processing and software defined radio. GPGPU offers potentially huge performance improvements for applications like sensor processing, graphics and image processing, which require substantial amounts of data to be processed (or smaller amounts of data to be processed repeatedly), and in which that data can be processed simultaneously in parallel rather than sequentially.

The real attraction of GPGPU technology is that sophisticated, very high performance applications can be deployed in a fraction of the platform size and weight, and with substantially less power consumption and heat dissipation, than a "traditional" solution with the same compute capability would require. It is not unreasonable to believe that this reduction in size, weight and power (SWaP) could be by a factor of ten – translating into, for example, unmanned vehicles with greater range, a larger payload and increased mission duration.

Best of all, the performance improvements of GPGPU are accessible to almost everyone because the NVIDIA CUDA™ architecture allows you to program in C. NVIDIA describes CUDA as a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA GPUs, and it includes the CUDA Instruction Set Architecture (ISA). Over 100 million CUDA-enabled GPUs have been sold to date, and thousands of software developers are already using the free CUDA software development tools. In addition, further GPU capabilities will be unlocked by the OpenCL framework created by the Khronos Group – the first open standard for writing programs that execute across CPUs, GPUs and other types of processors.
OpenCL includes a language for writing kernels (the functions that execute on OpenCL devices), defines APIs for setting up and then controlling the platforms on which they run, and supports parallel computing through both task-based and data-based parallelism.

The first product to bring the benefits of CUDA-based computing to rugged embedded applications is the GE Intelligent Platforms 3U VPX GRA111, featuring NVIDIA's GT240 GPU. For more information on our growing family of CUDA-capable GPGPU boards, …