The Raspberry Pi AI HAT+ is an advanced Hardware Attached on Top (HAT) add-on board designed to support real-time machine learning and intensive AI processing.
The AI HAT+, Raspberry Pi's latest AI solution, builds on the highly efficient Hailo AI accelerator technology and comes in two variants: a 13 TOPS (tera-operations per second) model based on the Hailo-8L accelerator, and a 26 TOPS model built around the Hailo-8. This flexibility lets developers and hobbyists choose the processing power that best suits their AI applications, from straightforward neural-network acceleration to multi-network workloads.
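As a rough illustration of that choice, the following sketch maps an estimated workload to one of the two variants. The TOPS ratings come from the product specifications; the workload figures and the helper function are purely illustrative, not Hailo benchmarks or APIs.

```python
# Hedged sketch: pick an AI HAT+ variant from an estimated workload.
# The rated TOPS values are from the product specs; the workload
# estimates below are illustrative placeholders only.

VARIANTS = {
    "Hailo-8L": 13,  # 13 TOPS model
    "Hailo-8": 26,   # 26 TOPS model
}

def pick_variant(required_tops: float) -> str:
    """Return the smallest variant whose rated TOPS covers the estimate."""
    for name, tops in sorted(VARIANTS.items(), key=lambda kv: kv[1]):
        if required_tops <= tops:
            return name
    raise ValueError(f"workload needs {required_tops} TOPS; no variant fits")

print(pick_variant(8))   # single-network workload -> Hailo-8L
print(pick_variant(20))  # multi-network workload  -> Hailo-8
```

In practice the decision also involves price and the specific networks involved, but the shape of the trade-off is the same: lighter single-network tasks fit the 13 TOPS part, heavier concurrent workloads favour the 26 TOPS part.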
Compliant with the HAT+ specification, the board automatically negotiates PCIe Gen 3.0 to unlock the full 26 TOPS of the Hailo-8, significantly boosting compute for complex inferencing. Unlike earlier setups that relied on M.2 connectors, the new model integrates the accelerator directly onto the main PCB, simplifying assembly and improving heat dissipation, which makes it well suited to high-demand applications.
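The link speed matters in practice, because the Raspberry Pi 5's external PCIe connector defaults to Gen 2.0. The AI HAT+ is designed to negotiate Gen 3.0 on its own, but if a particular setup falls back to the slower link, the speed can be forced with the standard Raspberry Pi 5 settings in `/boot/firmware/config.txt` (shown here as a sketch, assuming a stock Raspberry Pi OS install):

```ini
# /boot/firmware/config.txt -- force the external PCIe link to Gen 3.0
# (only needed if Gen 3.0 is not negotiated automatically)
dtparam=pciex1
dtparam=pciex1_gen=3
```

A reboot is required for the change to take effect.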
The additional headroom of the 26 TOPS model allows more intricate AI processing, enabling users to deploy sophisticated neural networks with real-time performance. Tasks such as object detection, pose estimation, and subject segmentation can run concurrently on a live camera feed, showcasing the model's potential in high-throughput applications. For more moderate tasks, the 13 TOPS model remains an effective option at a lower price point.
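Conceptually, running several networks against the same live feed means fanning each frame out to independent inference pipelines. The sketch below uses stand-in functions in place of compiled Hailo networks (all names here are hypothetical, not part of any Hailo API) to show the shape of that pattern:

```python
# Hedged sketch: fan one "frame" out to several concurrent networks.
# detect / estimate_pose / segment are hypothetical stand-ins for
# networks that would normally execute on the Hailo accelerator.
from concurrent.futures import ThreadPoolExecutor

def detect(frame):         # stand-in for an object-detection network
    return {"task": "detection", "frame": frame}

def estimate_pose(frame):  # stand-in for a pose-estimation network
    return {"task": "pose", "frame": frame}

def segment(frame):        # stand-in for a segmentation network
    return {"task": "segmentation", "frame": frame}

def process_frame(frame):
    """Run all three 'networks' on one frame concurrently."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(f, frame) for f in (detect, estimate_pose, segment)]
        return [fut.result() for fut in futures]

results = process_frame(frame=0)
print([r["task"] for r in results])  # ['detection', 'pose', 'segmentation']
```

On real hardware the accelerator, not Python threads, provides the parallelism; the sketch only illustrates the dataflow of one frame feeding multiple networks at once.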
The new model maintains backward compatibility with the AI Kit and integrates seamlessly with existing accelerator setups: networks compiled for the Hailo-8L also run on the Hailo-8, ensuring versatility across AI applications on Raspberry Pi. Models tailored specifically to the Hailo-8, however, may require slight adjustments to run on the Hailo-8L, though suitable alternatives are generally available for flexible workload management.
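The compatibility rule described above is asymmetric: Hailo-8L builds run on the Hailo-8, but not the other way round. A minimal sketch of that rule (the function and string names are illustrative, not part of any Hailo API):

```python
# Hedged sketch of the asymmetric compatibility rule: a network
# compiled for the Hailo-8L also runs on the larger Hailo-8, but a
# Hailo-8 build does not run on the Hailo-8L without adjustment.
def is_compatible(compiled_for: str, device: str) -> bool:
    if compiled_for == device:
        return True
    # The smaller chip's binaries are upward compatible.
    return compiled_for == "Hailo-8L" and device == "Hailo-8"

print(is_compatible("Hailo-8L", "Hailo-8"))  # True
print(is_compatible("Hailo-8", "Hailo-8L"))  # False
```

This is the check a deployment script might apply before loading a compiled network onto whichever variant is installed.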
With the AI HAT+, Raspberry Pi continues to establish itself as a capable platform for embedded AI, enabling users to explore a wide range of inferencing tasks across both casual and intensive projects.