Do self-driving cars need to cost so much?

by John Naughton

The Observer

“The best is the enemy of the good,” said the 18th-century French writer Voltaire. It’s a maxim that has a particular resonance for tech designers, because it highlights the intrinsic tension between ambition and pragmatism that haunts them. Many perfectly viable products have never made it beyond the prototype stage because their designers felt they fell too far short of the ideals they had set for themselves. One of the reasons why Steve Jobs was so remarkable as a company boss is that he was the exception that proved Voltaire’s rule. He was a perfectionist for whom the good was the enemy of the best. Which is why working for him was such an exhausting business and also why Apple’s products became so distinctive.

As it happens, Voltaire’s maxim may also be useful in explaining what will happen in the field of autonomous vehicles, aka self-driving cars. The idea of such cars has been a hot topic in some circles since the 1980s, but was given a huge boost in October 2010 when Google announced that it had built an autonomous vehicle. It was a typical Google project: impressively ambitious and involving the application of oodles of money and formidable engineering talent to the task of creating a vehicle that could safely navigate crowded urban roads.

The Google car did indeed work as advertised. Google cars have now clocked up 700,000 miles of safe driving on roads in the United States. They do it by having massive amounts of information technology on board. Each car hoovers up — and processes — nearly 1 gigabyte of data every second. The Google folks wax lyrical about the bright future for autonomous vehicles: safer roads, more efficient traffic flows, fewer accidents and casualties, mobility for people who are currently too infirm to drive, and so on.

There are two flies in this ointment. The first is that Google’s cheery technological determinism may be obstructed by human cussedness, as manifested in a refusal to trust the technology. The second is that the technology needed to make a Google car driverless is fantastically expensive, in the region of $150,000 per vehicle, of which $70,000 goes just on the laser rangefinder.

Those costs will come down in due course, but the consensus in the automobile industry is that they won’t come down far enough to make the Google system a mass-market proposition. So all over the place there are outfits working on less exotic but much cheaper approaches.

At Oxford University, for example, researchers have developed a self-driving car that can cope with weather conditions undreamt of in California. It works by recognizing where it is, using a laser scanner on the front of the car to compare its surroundings with stored data. This is very different from Google’s system, which uses a combination of GPS, laser rangefinding and mapping to determine its location and route. It’s claimed that the Oxford system could be retrofitted to existing cars and could “one day cost just £100 (around ¥15,000).”

In Israel, a technology company is combining cheap video cameras with computer-vision algorithms to enable cars to become driverless in certain conditions, for example, on motorways. Last week, a slightly terrified John Markoff of the New York Times was persuaded to sit in the driver’s seat of an Audi A7 while software connected to a video camera on the windscreen drove the car at speeds up to 100 kph.

There’s lots more where that came from. For example, Volvo, Mercedes and Lexus have “driver-alert” systems that can detect when a driver is getting sleepy. And you can have automatic parking systems fitted as optional extras to even humdrum cars.

So, at one end of the spectrum, we have the “best” — Google’s autonomous vehicle — and at the other end the merely “good” — the humdrum technologies that are akin to autopilots in aircraft: things that make driving easier and perhaps safer, but which are always optional. Which will win out? My guess is that in this case Voltaire was wrong.