Autonomous vehicles rely on sensors to perceive the world around them. The three main sensor types—cameras, lidar, and radar—each have distinct strengths and weaknesses. Understanding these differences explains why most autonomous vehicles use multiple sensor types and why there's ongoing debate about which sensors are truly necessary.

Overview of Each Sensor

Cameras capture visual images similar to human vision. They provide rich color and texture information, enabling recognition of signs, signals, lane markings, and object types. Modern autonomous vehicles use multiple cameras to provide 360-degree coverage.

Lidar (Light Detection and Ranging) uses laser pulses to create 3D maps of the environment. It provides precise distance measurements and detailed geometry of objects and surroundings. Lidar "sees" in 3D, directly measuring the shape and position of everything around the vehicle.

Radar (Radio Detection and Ranging) uses radio waves to detect objects and measure their distance and velocity. It's been used in vehicles for decades for adaptive cruise control and collision warning systems.

Capability            | Camera                | Lidar              | Radar
----------------------|-----------------------|--------------------|--------------
Distance Measurement  | Indirect (computed)   | Excellent (direct) | Good (direct)
Object Classification | Excellent             | Good               | Limited
Weather Performance   | Poor in rain/fog      | Moderate           | Excellent
Night Performance     | Poor (without lights) | Excellent          | Excellent
Cost                  | Low                   | High (declining)   | Low
Resolution            | Very High             | High               | Low

Camera Strengths and Weaknesses

Cameras excel at semantic understanding—recognizing what things are. They can read text on signs, distinguish traffic light colors, identify lane markings, and classify objects as cars, trucks, pedestrians, or cyclists. This rich semantic information is difficult to obtain from other sensors.

Cameras also provide high resolution at low cost. A modern camera sensor captures millions of pixels of information, enabling detection of small or distant objects. Camera hardware is inexpensive and can be integrated into vehicle designs without bulky external equipment.

However, cameras struggle with distance measurement. Unlike lidar and radar, cameras do not measure distance directly; depth must be inferred from image analysis or from the disparity between stereo camera pairs. These inferred distances are less accurate than direct measurements, and the error grows with range.
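The stereo-pair approach can be sketched with the standard disparity-to-depth relation Z = f·B/d. This is a minimal illustration, not any particular system's implementation; the function name and example numbers are mine:

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth from stereo disparity: Z = f * B / d.

    disparity_px: horizontal pixel offset of a feature between the left
    and right images; focal_px: focal length in pixels; baseline_m:
    distance between the two cameras.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for an object in front of the cameras")
    return focal_px * baseline_m / disparity_px

# With a 1000 px focal length and 0.3 m baseline, a 10 px disparity
# implies 30 m depth. A single-pixel measurement error shifts that
# estimate to roughly 27.3 m or 33.3 m, which is why computed depth
# degrades at longer ranges.
```

Note how the same one-pixel error at close range (large disparity) barely matters, while at long range (small disparity) it dominates.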

Cameras are also sensitive to lighting conditions. They struggle in low light, can be blinded by direct sunlight, and perform poorly in rain, snow, and fog. These limitations mean cameras alone may not provide reliable perception in all conditions.

[Figure: Camera sensor]
Cameras provide rich visual information but struggle with distance measurement and adverse conditions.

Lidar Strengths and Weaknesses

Lidar's primary strength is precise 3D geometry. It directly measures the distance to every point it scans, creating detailed 3D maps of the environment. This precision enables accurate detection of object boundaries and positions, essential for safe navigation.
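Each lidar return is essentially a range plus the beam's pointing angles, which converts directly to a 3D point. A minimal sketch of that conversion, assuming a common vehicle frame (x forward, y left, z up); the function name and conventions are illustrative:

```python
import math

def lidar_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return (range + beam angles) to a 3D point.

    Spherical-to-Cartesian conversion in a vehicle frame:
    x forward, y left, z up. Millions of such points per second
    form the lidar point cloud.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return at 10 m, straight ahead, level beam -> (10, 0, 0).
# The same range at 90 degrees azimuth -> a point 10 m to the left.
```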

Lidar works regardless of lighting conditions. It's equally effective in daylight and complete darkness, since it provides its own illumination through laser pulses. This makes lidar valuable for nighttime operation.

However, lidar is expensive. While costs have dropped dramatically—from over $75,000 per unit a decade ago to under $1,000 for some units today—lidar remains more expensive than cameras or radar. This cost affects vehicle pricing and deployment economics.

Lidar performance degrades in adverse weather. Rain, snow, and fog scatter laser pulses, reducing range and accuracy. Heavy precipitation can significantly impair lidar perception, though the impact varies by lidar type and severity of conditions.

Lidar also doesn't capture color or texture information. It sees geometry but not appearance. A red stop sign and a blank red square look the same to lidar. This limitation means lidar typically needs to be combined with cameras for complete perception.

Radar Strengths and Weaknesses

Radar's standout strength is weather resilience. Radio waves pass through rain, snow, and fog with minimal degradation. Radar provides reliable detection in conditions that impair both cameras and lidar, making it essential for all-weather operation.

Radar directly measures velocity using the Doppler effect. This velocity measurement is valuable for tracking moving objects and predicting their future positions. Knowing not just where an object is but how fast it's moving improves prediction and planning.
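The Doppler relation itself is simple: the radial (closing) velocity is v = f_d · c / (2·f_c), where f_d is the measured frequency shift and f_c the radar's carrier frequency. A small sketch, assuming the common 77 GHz automotive band; the function name is mine:

```python
C = 299_792_458  # speed of light, m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity from the Doppler shift: v = f_d * c / (2 * f_c).

    The factor of 2 accounts for the round trip of the reflected wave.
    A positive shift means the target is closing on the radar.
    77 GHz is a common automotive radar band.
    """
    return doppler_shift_hz * C / (2 * carrier_hz)

# A shift of about 5.1 kHz at 77 GHz corresponds to roughly
# 10 m/s (36 km/h) of closing speed.
```

Because velocity is measured directly rather than differentiated from successive position estimates, it is available immediately and with low noise, which is what makes it so useful for tracking and prediction.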

Radar is also inexpensive and mature technology. Automotive radar has been produced at scale for decades, driving down costs and improving reliability. Radar units are small and can be integrated into vehicle designs without visible external hardware.

However, radar has limited resolution. Traditional radar can detect that something is present but provides less detail about what it is or its exact shape. This makes radar less useful for object classification and precise boundary detection.

Radar can also produce false positives from metal objects like manhole covers, guardrails, and bridges. These reflective surfaces can create radar returns that might be mistaken for obstacles. Filtering these false positives requires sophisticated processing.
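One common family of heuristics exploits the Doppler measurement: a ground-fixed object directly ahead appears to close at exactly the ego vehicle's speed, and overhead structures like bridges show up at high elevation angles. A deliberately simplified sketch of such a filter; the function name, thresholds, and logic are illustrative assumptions, not production values:

```python
def is_likely_clutter(radial_speed_mps: float, ego_speed_mps: float,
                      elevation_deg: float, tol_mps: float = 0.5) -> bool:
    """Heuristic stationary-clutter check (illustrative thresholds).

    Flags a return as probable clutter when it is stationary relative
    to the ground (its closing speed matches ego speed) AND it sits
    well above the drivable path (bridge, overhead sign) or well below
    it (manhole cover, road debris reflection).
    """
    stationary = abs(radial_speed_mps - ego_speed_mps) < tol_mps
    overhead = elevation_deg > 10.0
    underfoot = elevation_deg < -5.0
    return stationary and (overhead or underfoot)

# A stationary return at 12 degrees elevation while driving 15 m/s
# is flagged; a moving return at road level is not.
```

Real systems combine many such cues (elevation, reflectivity, track history, map data), which is what "sophisticated processing" amounts to in practice.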

Radar excels in adverse weather but has limited resolution compared to cameras and lidar.

The Sensor Fusion Approach

Most autonomous vehicle developers use all three sensor types together, combining their data through sensor fusion. This approach leverages each sensor's strengths while compensating for its weaknesses.

Cameras provide semantic understanding—what objects are. Lidar provides precise geometry—where objects are. Radar provides reliable detection in all conditions and velocity measurement. Together, they create a more complete and reliable picture than any single sensor.
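One simple form of fusion is combining each sensor's range estimate for the same object, weighted by how certain each sensor is. A minimal inverse-variance sketch (a standard textbook technique, not any vendor's pipeline; the function name and numbers are mine):

```python
def fuse_estimates(estimates):
    """Inverse-variance fusion of independent measurements.

    estimates: list of (value, variance) pairs, e.g. camera, lidar,
    and radar ranges to the same object. Lower variance (a more
    certain sensor) earns a larger weight, and the fused variance is
    smaller than any single sensor's.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    variance = 1.0 / total
    return value, variance

# camera: 30.0 m but noisy; lidar: 29.5 m, very tight; radar: 29.8 m
fused, var = fuse_estimates([(30.0, 4.0), (29.5, 0.01), (29.8, 0.25)])
# The fused range lands close to the lidar estimate, with lower
# variance than any individual sensor.
```

Production stacks use far richer machinery (Kalman filters, learned fusion networks), but the principle is the same: trust each sensor in proportion to its reliability in the current conditions.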

Sensor fusion also provides redundancy. If one sensor fails or is degraded by conditions, others can maintain perception. A camera blinded by sun glare might miss an obstacle that lidar detects. Radar might track a vehicle through heavy rain that impairs other sensors.

The Camera-Only Debate

Tesla has notably pursued a camera-only approach, arguing that if humans can drive with vision alone, so can AI. This approach has cost advantages—cameras are inexpensive and don't require bulky external hardware. It also forces development of more capable vision AI.

Critics argue that camera-only systems lack the redundancy and reliability of multi-sensor systems. Cameras' sensitivity to lighting and weather creates risks that lidar and radar could mitigate. The safety implications of relying on a single sensor modality remain debated.

The right answer may depend on the application and acceptable risk levels. Consumer vehicles with human backup may tolerate different sensor configurations than robotaxis operating without human oversight. As AI vision capabilities improve, the calculus may shift further toward camera-centric approaches.