Manufacturers and automakers are racing to design self-driving cars to reduce accidents and improve traffic flow. Achieving autonomous capability begins with how the vehicle's central processing system senses and interprets its surroundings.
LiDAR and optical cameras are both popular sensing choices, each with its own trade-offs: LiDAR is more expensive, while camera data can be difficult to interpret in varying environmental conditions. With multiple OEMs announcing LiDAR integration in mass-market vehicles, automotive adoption is set to accelerate.
A LiDAR sensor emits pulsed light waves into its surroundings. The pulses bounce off objects in the environment and return to the sensor, which calculates the distance travelled from the time each pulse takes to return. A processing platform collects this data millions of times per second to build a real-time, high-precision 3D map of the vehicle's surroundings.
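The time-of-flight principle above can be sketched in a few lines. The timing value below is illustrative only, not from any specific sensor datasheet:

```python
# Sketch: time-of-flight distance calculation, as described above.
# The pulse travels to the object and back, so the one-way distance
# is half the total path length covered at the speed of light.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target from the pulse's round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A return arriving after ~667 ns corresponds to a target about 100 m away.
print(f"{tof_distance_m(667e-9):.1f} m")
```

A real LiDAR repeats this measurement millions of times per second across many beam angles to populate the point cloud.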
The design of a LiDAR system begins with identifying the smallest object the system must detect and the reflectivity of that object; resolution is therefore a key characteristic of any LiDAR solution. Creating real-time point clouds from the returns is one of the major processing challenges.
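The link between resolution and the smallest detectable object can be made concrete with simple geometry: the spacing between adjacent measurement points at a given range follows from the sensor's angular resolution, and an object must span at least a couple of points to be detected reliably. This is a back-of-the-envelope sketch; the numbers are assumptions, not a design rule:

```python
import math

def min_object_size_m(range_m: float, angular_res_deg: float,
                      points_required: int = 2) -> float:
    """Approximate smallest object width that still returns
    `points_required` points at a given range, from the sensor's
    horizontal angular resolution."""
    # Lateral spacing between adjacent beams at this range.
    spacing = 2.0 * range_m * math.tan(math.radians(angular_res_deg) / 2.0)
    return spacing * points_required

# With 0.1 deg resolution, adjacent points are ~0.17 m apart at 100 m,
# so an object needs to be ~0.35 m wide to yield two returns.
print(f"{min_object_size_m(100.0, 0.1):.2f} m")
```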
LiDAR has a high computing requirement: processing the 3D point cloud at over one million data points per second is significantly more demanding than camera or RADAR processing. 3D point cloud processing includes pre-processing of the raw returns, segmentation of the point cloud, and object identification and classification.
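The stages named above can be illustrated with a deliberately minimal sketch in plain Python. In a real system these stages run in the programmable logic or on an accelerator, and the thresholds and the toy height-based classifier here are assumptions chosen only to show the data flow:

```python
# Hypothetical sketch of the point cloud pipeline: pre-processing,
# ground segmentation, and classification of an obstacle cluster.

Point = tuple[float, float, float]  # (x, y, z) in metres

def preprocess(points: list[Point], max_range: float = 100.0) -> list[Point]:
    """Pre-processing: drop returns beyond the usable range (noise rejection)."""
    return [p for p in points
            if (p[0]**2 + p[1]**2 + p[2]**2) ** 0.5 <= max_range]

def segment_ground(points: list[Point],
                   ground_z: float = 0.2) -> tuple[list[Point], list[Point]]:
    """Segmentation: split the cloud into ground and obstacle points by height."""
    ground = [p for p in points if p[2] <= ground_z]
    obstacles = [p for p in points if p[2] > ground_z]
    return ground, obstacles

def classify(cluster: list[Point]) -> str:
    """Toy classification by bounding-box height (stand-in for a real model)."""
    height = max(p[2] for p in cluster) - min(p[2] for p in cluster)
    return "vehicle" if height > 1.0 else "obstacle"

cloud = [(1.0, 0.0, 0.1), (5.0, 2.0, 0.5), (5.1, 2.0, 1.8), (500.0, 0.0, 3.0)]
kept = preprocess(cloud)                      # far outlier removed
ground, obstacles = segment_ground(kept)      # road surface separated
print(len(kept), len(ground), classify(obstacles))
```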
Key Challenges for a LiDAR Processing Platform
Addressing the Challenges
A modular embedded system architecture such as an FPGA- or MPSoC-based (Multi-Processor System-on-Chip) processing platform provides the scalability required for designing LiDAR systems. Low latency and high throughput are achieved through the high-speed transceiver lanes and programmable logic available on such devices. In practice, FPGAs are commonly used for data capture, while GPUs run the heavier algorithms.
iWave has built an extensive portfolio of System on Modules and Single Board Computers on FPGA and Adaptive SoC platforms that provide the necessary processing power and high-speed interfaces for such designs.
MPSoCs feature high-performance quad-core Arm Cortex-A53 application processors alongside programmable logic cells, allowing a high degree of scalability and flexibility in design. Such a system lets a designer implement applications and in-vehicle communication frameworks using standard interfaces like CAN and Ethernet. Applications can be built on embedded Linux or any other operating system, based on the requirements.
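On an embedded Linux target, CAN communication typically goes through the kernel's SocketCAN stack. The sketch below packs a classic CAN frame in the SocketCAN wire format (32-bit CAN ID, 8-bit length, 3 pad bytes, 8 data bytes); sending it would use a raw AF_CAN socket bound to an interface such as "can0" (interface name assumed), which is omitted here so the example runs anywhere:

```python
import struct

# Linux SocketCAN struct can_frame layout: canid_t (u32), dlc (u8),
# 3 padding bytes, then 8 data bytes.
CAN_FRAME_FMT = "=IB3x8s"

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame into the 16-byte SocketCAN wire format."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

frame = pack_can_frame(0x123, b"\x01\x02")
print(len(frame))  # a can_frame is 16 bytes on the wire
```

On the target, the same buffer would be passed to `send()` on a socket created with `socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)`.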
Real-time control for LiDAR systems is provided by the dual-core Arm Cortex-R5 real-time processor. These processors also implement safety functionality, monitoring the system for single-point and latent faults. Dedicated DSP elements, configurable logic blocks within the programmable logic, and the diverse storage arrangements on the SoM help achieve low latency.
Point cloud plots are displayed over the DisplayPort interface, which outputs video frames from both the processing system and the programmable logic. Programmable I/Os can implement HDMI/DVI or other bespoke video output standards.
The MPSoC's processing system (PS) and programmable logic (PL) support a variety of industry-standard interfaces such as CAN, SPI, I2C, UART, and GigE. The PL's I/O flexibility allows direct interfacing with MIPI, LVDS, and gigabit serial links, so the higher levels of a protocol can be implemented within the PL. With the correct PHY in the hardware design, the PL can implement virtually any interface, providing any-to-any interfacing.
Image processing is critical in LiDAR applications for navigation and monitoring. The algorithms used in these systems are typically developed and modelled in high-level frameworks such as OpenCV, and the platforms are equipped with H.264/H.265 video codec units to support video compression alongside image processing.
Machine learning is a critical technology for automated applications that classify objects on the highway or observe and monitor occupants. A developer can add AI stacks (such as Vitis AI) to make these platforms AI-capable.
As a result, you get a highly integrated platform with any-to-any sensor interfacing, a neural network accelerator, and a safety processor, enabling a reliable, lighter, and more power-efficient solution.
Advantages of MPSoC-based Processing Boards
For more information, please contact us at mktg@iwavesystems.com or visit our website www.iwavesystems.com.