
Sensor Simulation

Testing Camera-Based Systems

Testing camera-based advanced driver assistance systems (ADAS) usually requires a simulation box with a camera ECU that films the animated scenes rendered in MotionDesk. With MotionDesk's sensor simulation feature, however, the test setup is different: optical devices such as the lens and the sensor, as well as the optical path, are part of the simulation in MotionDesk. A cinema-like setup with a monitor and a lens system between the monitor and the camera ECU is therefore no longer necessary. Instead, the video data rendered in MotionDesk is inserted into the digital interface between the imager and the microcontroller of the camera ECU. To feed the camera ECU with MotionDesk data using optical path simulation, a physical interface is required.

To achieve a realistic simulation of the camera sensor and the optical path, the following visual effects that generally occur in camera-based systems can be simulated directly in MotionDesk:

  • Chromatic aberration (the lens fails to focus all colors to the same point, resulting in color fringes at object edges)
  • Vignetting (darker image corners compared to the image center)
  • Lens distortion (image deformation caused by the optical design of the lens)
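
As a rough illustration of what simulating such effects involves, the following Python sketch approximates two of them, vignetting and radial lens distortion, as a post-processing step on a rendered frame. The function names and coefficients are illustrative assumptions only and do not represent the MotionDesk interface.

```python
import numpy as np

def apply_vignetting(image, strength=0.5):
    """Darken pixels toward the corners relative to the image center."""
    h, w = image.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Normalized distance from the center: 0 at the center, ~1 in the corners.
    r = np.sqrt(((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2) / np.sqrt(2)
    falloff = 1.0 - strength * r ** 2
    return (image * falloff[..., None]).astype(image.dtype)

def apply_radial_distortion(image, k1=-0.2):
    """One-coefficient radial distortion (k1 < 0: barrel, k1 > 0: pincushion)."""
    h, w = image.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    xn = (x - w / 2) / (w / 2)   # normalized coordinates, roughly in [-1, 1]
    yn = (y - h / 2) / (h / 2)
    r2 = xn ** 2 + yn ** 2
    # Inverse mapping: sample the undistorted source at the distorted position
    # (nearest-neighbor for brevity).
    xs = np.clip((xn * (1 + k1 * r2) * (w / 2) + w / 2).astype(int), 0, w - 1)
    ys = np.clip((yn * (1 + k1 * r2) * (h / 2) + h / 2).astype(int), 0, h - 1)
    return image[ys, xs]

frame = np.full((480, 640, 3), 180, dtype=np.uint8)   # placeholder rendered frame
frame = apply_vignetting(apply_radial_distortion(frame), strength=0.4)
```

Chromatic aberration could be approximated in the same spirit by distorting the color channels with slightly different coefficients.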

Typical applications include:

  • Lane departure warning systems
  • Lane keeping assistant
  • Traffic sign recognition
  • Autonomous emergency braking

Developing Radar and Lidar Sensors

In the context of ADAS and autonomous driving, radar and lidar technology is becoming increasingly important: it is widely assumed that only the combination of these technologies with camera-based systems can give ADAS and autonomous vehicles the required reliability. Equipped with radar and lidar sensors, the vehicle receives information on distances, speeds, etc., which is crucial for perceiving the vehicle's environment reliably.

MotionDesk's ideal point-cloud sensor models form the basis for the development of radar and lidar sensors. With these models, reflection points of the traffic environment are shown in the vehicle's field of view while it drives through a scene in MotionDesk. Different values, such as distance, azimuth/elevation angle, and relative speed, can be assigned to each reflection point. The vehicle's field of view is freely configurable up to 180°, and the resolution can also be adapted as needed. In the future, radar and lidar sensors will be simulated with ray-tracing-based models to cover a wider range of development and test use cases.

A vehicle with an ideal point-cloud sensor model drives through a MotionDesk scene; values such as distance, azimuth/elevation angle, and relative speed can be assigned to each reflection point.
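
To make the idea of an ideal point-cloud sensor model more concrete, the following minimal Python sketch computes distance, azimuth/elevation angle, and relative speed for target points inside a configurable field of view. The data structures and function names are assumptions for illustration only, not the MotionDesk API, and the sensor frame is assumed to be aligned with the world frame for brevity.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ReflectionPoint:
    distance: float        # m
    azimuth: float         # rad, positive to the left of the sensor axis
    elevation: float       # rad, positive upward
    relative_speed: float  # m/s along the line of sight (negative = approaching)

def ideal_point_cloud(ego_pos, ego_vel, targets, fov_azimuth=np.radians(180.0)):
    """Return reflection points for target points inside the horizontal field of view."""
    points = []
    for pos, vel in targets:                                  # world-frame position/velocity
        rel_pos = np.asarray(pos, float) - np.asarray(ego_pos, float)
        rel_vel = np.asarray(vel, float) - np.asarray(ego_vel, float)
        distance = float(np.linalg.norm(rel_pos))
        azimuth = float(np.arctan2(rel_pos[1], rel_pos[0]))
        elevation = float(np.arcsin(rel_pos[2] / distance))
        if abs(azimuth) <= fov_azimuth / 2:                   # configurable field of view
            # Range rate: projection of the relative velocity onto the line of sight.
            relative_speed = float(np.dot(rel_vel, rel_pos) / distance)
            points.append(ReflectionPoint(distance, azimuth, elevation, relative_speed))
    return points

# One reflection point from a slower lead vehicle 40 m ahead and 2 m to the left.
cloud = ideal_point_cloud(
    ego_pos=(0.0, 0.0, 0.5), ego_vel=(20.0, 0.0, 0.0),
    targets=[((40.0, 2.0, 0.5), (15.0, 0.0, 0.0))],
)
```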