
GE Intelligent Platforms: Embedded Insider eNewsletter
VOLUME 5, ISSUE 1 JANUARY 2010
 
IN THIS ISSUE
96 Processors on One
Order for AED Program
Reflective Memory Networks
Company News
White Paper
Click here to find out what's new with GE Intelligent Platforms.
 
BARRY DERRICK, PRODUCT MARKETING

Thanks to our partnership with NVIDIA®, you can now take advantage of massively parallel GPGPU processing in rugged applications such as radar, sonar, image processing and software defined radio. General Purpose processing on Graphics Processing Units (GPGPU) offers potentially huge performance improvements for these types of applications.

GPGPU is ideal for applications like sensor processing, graphics and image processing, which require substantial amounts of data to be processed (or smaller amounts of data to be repeatedly processed), and where that data can be processed simultaneously in parallel rather than sequentially.

The real attraction of GPGPU technology is that sophisticated, very high performance applications can be deployed in a fraction of the platform size and weight, and with substantially less power consumption and heat dissipation, than a "traditional" solution with the same compute capability would require. It is not unreasonable to believe that this reduction in SWaP (size, weight and power) could be by a factor of ten, and it can translate into, for example, unmanned vehicles with greater range, a larger payload and increased mission duration. Best of all, the performance improvements of GPGPU are accessible to almost everyone because of the NVIDIA CUDA™ architecture, which allows you to program in C. CUDA is described by NVIDIA as a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA GPUs, and it includes the CUDA Instruction Set Architecture (ISA). Over 100 million CUDA-enabled GPUs have been sold to date, and thousands of software developers are already using the free CUDA software development tools.

Read the Full Article
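To give a flavor of the programming model, here is a minimal CUDA C sketch (ours, not drawn from the article) that applies a gain to a buffer of sensor samples with one GPU thread per sample; the kernel name, buffer size and gain value are illustrative assumptions:

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Each GPU thread scales one sample - the data-parallel pattern GPGPU suits. */
    __global__ void scale_samples(float *samples, float gain, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            samples[i] *= gain;
    }

    int main(void)
    {
        const int n = 1 << 20;                 /* 1M samples (illustrative size) */
        size_t bytes = n * sizeof(float);
        float *h = (float *)malloc(bytes);     /* host (CPU) buffer */
        float *d = NULL;                       /* device (GPU) buffer */

        for (int i = 0; i < n; i++)
            h[i] = (float)i;

        cudaMalloc((void **)&d, bytes);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n samples. */
        scale_samples<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

        /* This copy waits for the kernel to finish before reading results back. */
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
        printf("sample[42] = %f\n", h[42]);

        cudaFree(d);
        free(h);
        return 0;
    }

Built with NVIDIA's nvcc compiler, the same C-style source spreads the per-sample work across thousands of GPU threads instead of a sequential CPU loop.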

     
VPX
General Dynamics Order for AED Program
GE Fanuc Intelligent Platforms has secured an order from General Dynamics Land Systems for 3U VPX single board computers, graphics processors, disk subsystems and switches in support of GDLS’s work on the Abrams Evolutionary Design (AED) program for the M1A2 tank. Importantly, all these GE Fanuc products comply with the REDI (Ruggedized Enhanced Design Implementation) VITA 48 standard.
REFLECTIVE MEMORY
Reflective Memory Networks
Determinism, Simplicity and Sheer Performance
Reflective Memory networks provide the highly deterministic, tightly timed performance necessary for a variety of distributed simulation and industrial control applications. These solutions cater to applications where determinism, implementation simplicity, integration of dissimilar hardware platforms running different operating systems, and a lack of software overhead are key factors.
COMPANY NEWS
Welcome to GE Intelligent Platforms
On December 11, 2009, the agreement to dissolve the GE Fanuc Automation joint venture was finalized, and we are now known as GE Intelligent Platforms. The joint venture was formed in another era to help two companies globalize and cooperate on PLC and CNC technology, which at the time were focused primarily on discrete automation. The JV was very successful in this mission.
Read the Full Story

CALENDAR

Upcoming Events, Seminars, Workshops

Where in the world is GE Intelligent Platforms? Just take a look; you might be surprised to find that we will be in your neck of the woods in the very near future. Come find out what we're up to at these upcoming events.
     
Date              Event          Location
January 26        RTECC          Santa Clara, CA
February 15-17    Defense Expo   New Delhi, India
April 6-8         SPIE           Orlando, FL
May 3-6           OTC            Houston, TX
 
WHITE PAPER

Maximizing Memory Resources on Xeon 5500-based ATCA blades

Space for DIMMs is limited on dual Xeon® AdvancedTCA® blades. As a result, at the upper limits of memory capacity, certain tradeoffs must be made between physical memory, active memory pages, and memory bus speeds. This paper offers a brief description of the relative importance of these variables as they affect memory subsystem performance and application software.
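As a rough, hypothetical illustration of that tradeoff (the figures are typical of Nehalem-class DDR3 designs and are our assumption, not taken from the paper): three DDR3 channels running at 1333 MT/s give a theoretical peak of roughly 3 x 1333 million transfers/s x 8 bytes, or about 32 GB/s per processor, while a configuration forced down to 800 MT/s to accommodate more DIMMs per channel peaks at closer to 19 GB/s.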

Download the White Paper