Ouster's new color lidar doesn't eliminate sensor fusion. It moves it inside the box.
Ouster just announced color lidar that captures both visual and 3D depth data simultaneously from a single sensor. The CEO calls it “the holy grail of what a roboticist has always wanted.”
TechCrunch calls it a camera killer.
Cool technology. If it works at scale.
But first, some context. The autonomous vehicle world has been split into two camps for years. Tesla and Elon Musk believe cameras alone are sufficient. Vision only. No lidar needed. Waymo and much of the rest of the industry disagree, betting that depth data makes perception more reliable.
That debate just got more expensive to ignore. Mercedes-Benz appears to be limiting or excluding lidar from some upcoming mass-market vehicles, moving toward a camera-led architecture supplemented by radar and ultrasonic sensors.
Ouster is betting the other way. And their new sensor does something important: it collapses two devices into one, cutting cost and eliminating calibration drift between separate sensors.
But here is what one device does not solve: what do you do when the visual data and the depth data disagree?
You haven't eliminated the disagreement; you've removed the calibration drift between the two data streams. The arbitration still happens in software. The box just got smaller and less expensive.
The failure mode is unchanged. A camera sees a shadow and interprets it as an obstacle. A lidar sees through it. Or the reverse happens. Which one do you trust? How do you arbitrate? What happens in the edge case your system was never trained on?
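To make the arbitration problem concrete, here is a toy sketch of a rule-based policy for resolving camera/lidar conflicts. Every name and threshold here is illustrative, not drawn from Ouster or any real perception stack; production systems typically use learned fusion models, not hand-written rules like this.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    obstacle: bool       # does this modality report an obstacle ahead?
    confidence: float    # 0.0-1.0, the modality's own certainty

def arbitrate(camera: Detection, lidar: Detection) -> bool:
    """Toy policy: trust agreement; on conflict, prefer the more
    confident modality, and break near-ties conservatively by
    assuming an obstacle is present."""
    if camera.obstacle == lidar.obstacle:
        return camera.obstacle
    # Conflict case: e.g. camera flags a shadow, lidar reports free space.
    if abs(camera.confidence - lidar.confidence) < 0.1:
        return True  # too close to call: brake rather than drive through
    if camera.confidence > lidar.confidence:
        return camera.obstacle
    return lidar.obstacle

# Shadow on the road: camera weakly says "obstacle", lidar strongly says clear.
print(arbitrate(Detection(True, 0.55), Detection(False, 0.95)))  # False
```

Even this trivial version forces a design decision, the 0.1 tie-break margin, that no amount of sensor co-packaging answers for you. That is the point: the hard choices live in the arbitration logic, not in the hardware.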
The camera versus lidar debate has been going on for a decade. This sensor doesn’t end the debate. It makes one side cheaper. Meanwhile Mercedes just voted with its wallet for the other side.
Want longer reads on these topics?
Insights covers the same topics in depth: research-backed analysis on AI, value creation, and building companies.
Read Zaruko Insights