Sensor fusion in action: How cameras and LiDAR integrate with radar for safer driving 

By Yuichi Motohashi, Deputy Director / Global Segment Lead, Automotive Display, Camera, LiDAR & SerDes, GlobalFoundries

Sense – analyze – act. This is the principle that advanced driver assistance systems (ADAS) operate on. Modern vehicles rely on a network of sensors to build a more precise, reliable perception of their surroundings. Sensor fusion combines these inputs – from radar, camera, LiDAR, and ultrasound – with artificial intelligence and deep learning to deliver the environmental acuity required for vehicles to make split-second decisions. 

Since 1999, when Mercedes-Benz “taught the car to see,” radar has been a proven cornerstone of ADAS. However, camera and LiDAR technologies are rapidly advancing, adding new levels of detail and depth to a vehicle’s perception. LiDAR in particular has long been stuck in the space between functional solutions and scalable manufacturing. GF is closing that gap, using FinFET, advanced packaging and photonics to unlock the path to mass-market viability. 

Together, complementary sensors provide high-resolution imagery, 3D mapping and object classification capabilities – each essential for the safer driving of today, and the fully autonomous mobility of tomorrow. 

Cameras: Sharpening your car’s view of the world 

Cameras capture high-quality images around cars to detect lane markings, speed limits, turn signals, pedestrians and more. Sophisticated algorithms analyze images taken by cameras to determine the distance, size, and speed of objects, enabling the system to react appropriately. 

Automotive cameras do not use the ultra-high megapixel counts of mobile phones, because every additional pixel adds data for the vehicle’s computer to process. Extremely high-resolution images would significantly expand the volume of data transmitted to the central processor, potentially exceeding the capabilities of the systems-on-chip (SoCs) that must analyze this information in real time to ensure safety. Excessive data can slow processing or overwhelm the system, so detection distance must be carefully balanced against the processing power of the central SoC. 
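To put rough numbers on this trade-off, here is a back-of-the-envelope sketch in Python (the sensor resolution and bit depth are illustrative assumptions, not GF specifications):

```python
def raw_data_rate_gbps(megapixels: float, fps: float, bits_per_pixel: int) -> float:
    """Approximate uncompressed sensor data rate in Gbit/s."""
    return megapixels * 1e6 * fps * bits_per_pixel / 1e9

# An assumed 8MP sensor at 30 fps with 12-bit RAW output:
rate = raw_data_rate_gbps(8, 30, 12)
print(f"{rate:.2f} Gbit/s")  # 2.88 Gbit/s
```

Even before protocol overhead, doubling the assumed resolution to 16MP at the same settings doubles the raw rate to 5.76 Gbit/s, which is why pixel counts cannot grow unchecked.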

The primary image quality Key Performance Indicator (KPI) is dynamic range, which is vital for maintaining accuracy in difficult lighting and weather conditions—ranging from intense sunlight at dusk to darkness, heavy rainfall, or fog. Achieving such high dynamic range imaging necessitates increasingly sophisticated Read-out ICs (ROIC) within automotive stacked CMOS Image Sensors (CIS). There exists a direct relationship between system-level, circuit-level and transistor-level requirements for high-performance automotive CIS ROIC. 

System-level  

  • Enhanced resolution (from 8MP to 12–16MP), frame rate (≥30fps), and dynamic range (≥130dB) are necessary, collectively increasing the processing load on the ROIC. 
  • Transmission bandwidth of at least 6Gbps is essential, underscoring the need for SerDes integration. 
  • Long‑range detection depends on high pixel resolution, high-speed operation and minimal read noise (including 1/f and RTS noise). 
  • Improved low‑light performance requires minimizing both ADC and transistor noise. 

Circuit-level  

  • To accommodate high bandwidth, circuits must achieve elevated clock speeds, low jitter and reduced noise. 
  • Die size limitations call for high capacitor density, robust transconductance (gm) and efficient logic cell area usage. 
  • Reliable functionality at temperatures up to 125°C demands low leakage characteristics. 

Transistor-level  

  • High-speed operation mandates transistors with superior Ft/Fmax and low-noise characteristics. 
  • Consistent performance at elevated temperatures relies on effective leakage control and optimized transistor density. 
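As a sanity check on the ≥130dB dynamic-range target above, a short sketch converts dB to a linear ratio and an ideal ADC bit depth (assuming the usual 20·log10 convention for image-sensor dynamic range):

```python
import math

def dr_ratio(db: float) -> float:
    """Convert a dynamic-range figure in dB to a linear max/min ratio (20*log10 convention)."""
    return 10 ** (db / 20)

def min_adc_bits(db: float) -> float:
    """Ideal number of bits needed to span that ratio (ignoring noise margins)."""
    return math.log2(dr_ratio(db))

print(f"{dr_ratio(130):.2e}")      # 3.16e+06 (roughly 3 million : 1)
print(f"{min_adc_bits(130):.1f}")  # 21.6 bits
```

In practice, sensors typically reach such figures with HDR techniques such as multiple exposures or split pixels rather than a single ~22-bit converter, which is part of what makes the ROIC design demanding.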

Images captured by vehicle cameras underpin many ADAS features, such as lane departure warnings, collision avoidance and parking assistance, making them integral to contemporary automotive safety. GF’s advanced technology platforms continue to facilitate the development of state-of-the-art automotive CIS solutions. 

LiDAR: Mapping the roads in 3D 

If cameras are the car’s eyes, LiDAR adds depth perception. Instead of 2D images, LiDAR emits laser pulses and measures their return to generate a 3D point cloud of the surroundings. 

By doing this, LiDAR generates a detailed 3D map of the world around your vehicle. This is ultimately how the car tells a pedestrian from a bicyclist, an animal, another car or a garbage can. Take Aurora, the commercial self-driving truck service: its long-range LiDAR detects objects in the dark of night more than 450 meters away, identifying them up to 11 seconds sooner than a human driver would. 
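The distance measurement behind each point in that cloud is simple round-trip timing; a minimal sketch of the d = c·t/2 relationship (the 450 m figure echoes the Aurora example above):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a LiDAR pulse's round-trip time: d = c * t / 2."""
    return C * round_trip_s / 2

def round_trip_s(distance_m: float) -> float:
    """Round-trip time for a target at the given distance: t = 2 * d / c."""
    return 2 * distance_m / C

# A target 450 m away returns the pulse in about 3 microseconds:
print(f"{round_trip_s(450) * 1e6:.2f} us")  # 3.00 us
```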

This precise 3D vision powers today’s ADAS features, like lane keeping, pedestrian detection and adaptive cruise control, and is laying the foundation for full self-driving functionality in the future. 

Key figures of merit for automotive LiDAR systems 

  • Detection range and accuracy: long‑range LiDAR must exceed 300 m detection distance. 
  • Field of View (FoV): short‑range horizontal FoV target ~150°; vertical FoV 20–30°. 
  • Angular resolution: long‑range 0.1–0.15°; short‑range 0.6°. 
  • Distance resolution / ranging accuracy: target improvement to around 5 cm. 
  • Frame rate: increased target of 30 fps. 
  • Point rate: dToF increasing to ~10 M pts/sec; FMCW expected at ~2 M pts/sec. 
  • Power consumption: system-level target well below 20 W. 
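The ~5 cm ranging-accuracy target above translates directly into a timing requirement for a dToF system; a quick sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def timing_precision_s(range_accuracy_m: float) -> float:
    """Round-trip timing precision a dToF system needs for a given range accuracy:
    since d = c * t / 2, an error of dd in distance corresponds to dt = 2 * dd / c."""
    return 2 * range_accuracy_m / C

# Resolving 5 cm requires sub-nanosecond timing:
print(f"{timing_precision_s(0.05) * 1e12:.0f} ps")  # 334 ps
```

Sub-nanosecond timing across millions of points per second is one reason LiDAR readout electronics are so demanding.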

How GlobalFoundries powers smarter sensors 

GF is at the forefront of advancing both camera and LiDAR technologies, delivering solutions that improve performance, integration, and efficiency. 

For cameras, the image sensor is the core component that determines performance. GlobalFoundries delivers advanced Readout IC (ROIC) solutions for stacked CMOS Image Sensors (CIS), built on industry-leading 40nm and 22nm process nodes to meet the demanding requirements of next-generation automotive applications. Both platforms provide low-noise analog performance and low power consumption, even at extreme automotive temperatures. The 40nm platform delivers excellent image quality and high reliability, while the 22nm platform adds outstanding signal processing capability and low-power operation. Some of the benefits are: 

  • Higher resolution and improved dynamic range: GF’s solutions enable image sensors to capture higher-resolution images with wider dynamic range by providing faster, lower-noise A/D conversion at lower power consumption. 
  • System integration: Integrating essential components such as memory, the image signal processor (ISP), analog circuitry and the high-speed interface onto a single chip reduces ADAS system complexity. 

With cameras generating and processing high volumes of data, Serializer/Deserializer (SerDes) technology converts parallel data into a fast serial stream, transmits it over a single cable, and converts it back for processing. GF is playing an active role in the OpenGMSL alliance and supporting SerDes-integrated smart sensors. 
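The serialize/transmit/deserialize round trip can be illustrated with a toy length-prefix scheme (purely illustrative — this is not the GMSL protocol, just the idea of flattening frames into one stream and recovering them intact):

```python
def serialize(frames: list[bytes]) -> bytes:
    """Toy serializer: length-prefix each frame and concatenate into one byte stream."""
    out = bytearray()
    for frame in frames:
        out += len(frame).to_bytes(4, "big") + frame
    return bytes(out)

def deserialize(stream: bytes) -> list[bytes]:
    """Toy deserializer: split the stream back into the original frames."""
    frames, i = [], 0
    while i < len(stream):
        n = int.from_bytes(stream[i:i + 4], "big")
        frames.append(stream[i + 4:i + 4 + n])
        i += 4 + n
    return frames

# Round trip is lossless:
frames = [b"scanline-0", b"scanline-1-longer"]
assert deserialize(serialize(frames)) == frames
```

Real automotive SerDes links add clock embedding, error handling and control channels on top of this basic framing idea.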

For LiDAR, GF’s silicon photonics on the 45SPCLO platform can integrate the laser source, light emitter, receiver and signal processing on a single chip, shrinking LiDAR modules and making them easier to fit into vehicles. Working with both O-band and C-band wavelengths, the platform also uses a special silicon nitride (SiN) waveguide to achieve best-in-class propagation loss. 

In addition, GF’s high-performance (HP) silicon germanium (SiGe) is the gold standard for image quality in high-performance LiDAR, offering unparalleled response times for the transimpedance amplifiers that process signals and detect objects faster. 

Advantages include: 

  • Miniaturization: Integrating multiple optical components onto one chip results in more cost-efficient, compact LiDAR systems. Developing highly integrated, true solid-state FMCW LiDAR results in lower manufacturing costs, making LiDAR more accessible. 
  • Electronics integration: Combining SiPh with CMOS electronics enables enhanced signal processing for smarter, more capable sensors. 
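FMCW LiDAR, mentioned above, measures range by mixing a frequency-chirped transmit signal with its returning echo and reading out the beat frequency; a sketch of that relationship, with illustrative parameters not tied to any specific product:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_beat_hz(range_m: float, chirp_bw_hz: float, chirp_dur_s: float) -> float:
    """Beat frequency of a linear FMCW chirp reflected from a static target:
    f_beat = 2 * B * R / (c * T), where B is chirp bandwidth and T chirp duration."""
    return 2 * chirp_bw_hz * range_m / (C * chirp_dur_s)

# Assumed parameters: a 1 GHz chirp over 10 microseconds, target at 100 m
print(f"{fmcw_beat_hz(100, 1e9, 10e-6) / 1e6:.1f} MHz")  # 66.7 MHz
```

Because range maps to an easily digitized beat frequency rather than a picosecond-scale pulse arrival, FMCW pairs naturally with the on-chip photonic mixing and CMOS signal processing described above.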

The rise of cameras and LiDAR to steer the future of autonomous driving 

Radar, cameras and LiDAR each shine on their own, but they need to work in concert when it comes to making cars smarter and safer. GF’s technology sits at the heart of fusing these sensors, helping cars on the road to see farther, react quicker and make smarter decisions in the blink of an eye.  

While cameras and LiDAR are still emerging technologies in the automotive industry, there’s massive potential to advance their performance and integration. GF is empowering automakers to accelerate the deployment of safer, smarter and more autonomous vehicles. 

Author bio  

Yuichi Motohashi is the Deputy Director of End Markets at GlobalFoundries, responsible for leading the global segment in automotive cameras, LiDAR, SerDes and displays, which facilitate next-generation ADAS, autonomous driving and enhanced in-cabin experiences.