Tuesday, November 5, 2024

Deep Learning Platform For Smarter AI Inferencing At The Edge


Compact, high-performing, GPU-enabled deep learning acceleration platform for deploying AI at the edge across industrial applications

ADLINK Technology has launched one of the most compact GPU-enabled deep learning acceleration platforms, the DLAP x86 series, which targets the deployment of deep learning in volume, at the edge where data is generated and actions are taken. It is optimised to deliver AI performance in various industrial applications by accelerating compute-intensive, memory-hungry AI inferencing and learning tasks.

The DLAP x86 series features:

  • Heterogeneous architecture for high performance – Intel processors paired with the NVIDIA Turing GPU architecture deliver GPU-accelerated computation with optimised performance per watt and per dollar.
  • Compact size – starting at 3.2 litres, the series suits mobile devices and instruments where physical space is limited, such as mobile medical imaging equipment.
  • Rugged design for reliability – the series withstands operating temperatures up to 50 degrees Celsius with 240 watts of heat dissipation, vibration up to 2 Grms and shock up to 30 Grms, making it dependable in industrial, manufacturing and healthcare environments.

Delivering an optimal mix of SWaP (size, weight and power) and AI performance in edge AI applications, the DLAP x86 series helps transform operations in healthcare, manufacturing, transportation and other sectors. Examples of use include:

  • Mobile medical imaging equipment: C-arm, endoscopy systems, surgical navigation systems.
  • Manufacturing operations: object recognition, robotic pick and place, quality inspection.
  • Edge AI servers for knowledge transfer: combining pre-trained AI models with local data sets.

“Large multilayered networks? Complex datasets? The DLAP x86 series’ flexibility gives deep learning architects the freedom to choose the optimal combination of CPU and GPU processors based on the demands of an application’s neural networks and its required AI inferencing speed, yielding high performance per dollar,” said Zane Tsai, Director of ADLINK’s Embedded Platforms & Modules Product Center.

