PROFESSOR WHITTAKER, WHAT’S THE STATE OF THE ART IN AUTONOMOUS VEHICLES?
The abstract concept of autonomous driving is well understood. We are now in the stage of fulfilment. The DARPA Challenges [the Grand Challenges held in 2004 and 2005 in the desert, and the Urban Challenge held in 2007 in a mock urban setting] were watershed events. They transformed the field.
WHAT CHANGED FUNDAMENTALLY BETWEEN 2004 AND 2007?
We went from believing in the potential of autonomous driving, but thinking it beyond reach, to seeing it as something that’s clearly attainable. Those two competitions inspired an entire community and enrolled it to work for the cause. That led to a huge, nonlinear leap of technology, whether you look at sensors or applications. It’s absolutely astonishing how far we have come in the past decade, in terms of processing power, sensor costs and infusing all these advances into commercial automobiles.
THE SEMINAL DARPA MOMENTS IN WHICH YOU PARTICIPATED AND WON WERE COMPETITIONS FOR WHICH SCIENTISTS PREPARED FOR A LONG TIME. NOW WE HAVE NEW ENTRANTS LIKE GOOGLE PROCLAIMING THAT THE BLIND WILL SOON DRIVE AUTONOMOUSLY. IS THAT WISHFUL THINKING OR A REALISTIC GOAL?
I see it as viable and inevitable, and here’s why. I actually held the first symposium on blind driving, which took place before the very first Challenge. There are several land speed records held by blind drivers that are hard to beat even for a driver with good eyesight, though we are talking about controlled settings. And you already have driverless people movers at airports. In due time, vehicles driving the disabled will be very analogous to such public transport. I think it is a question of dignity for the disabled and the elderly. Also, keep in mind that we don’t need cars to do all the driving under all circumstances for them to be incredibly valuable.
IS GETTING THERE THEN A QUESTION OF BRINGING COSTS AND FORM FACTORS DOWN FAST ENOUGH?
The fundamentals of sensing, modelling, planning a trip and driving are well understood. I know a thing or two about sensors, since I used to build them myself. They are now in such rapid development cycles that they keep getting cheaper and more powerful. Certain parts are still immature, such as driving in bad weather, complex situations involving intersections and oncoming traffic, or driving at the physical limits at high speed, on ice or with reduced traction. But that should not distract from the incredible capabilities that are already mature.
CAN YOU GIVE US AN EXAMPLE?
Take self-parking. When that feature was first introduced, I rolled my eyes at it. I recently used it in a low-end rental vehicle and was surprised how well it worked. Drive-by-wire technology for cars is a virtuous cycle. Each new feature that enters the market begets new features, whether it’s adaptive cruise control or lane keeping and lane change assistance. They build on each other.
WHEN WILL THIS TECHNOLOGY FOR AUTONOMOUS DRIVING BECOME MAINSTREAM?
I recommend an experiment to get an idea of how far we’ve come. Get a new car that will prevent you from blindly backing out of a driveway into oncoming traffic. Just step on the gas without looking and you’ll be amazed. Then, get into a 1950s pick-up truck. Experience the way the brakes and steering work – very slow and imprecise by modern standards – and see how quickly you can get yourself into trouble just by driving around. It will let you truly appreciate all the features in modern cars that we take for granted. But contrary to what many might think, the automotive industry is not the early adopter of these technologies. Progress happens in areas that most people overlook: mining, construction and agriculture – industries that use large and heavy-duty vehicles for tasks like earth moving, rock crushing and road building, working with utmost precision.
THOSE ARE SCENARIOS IN WHICH VEHICLES HAVE A VERY LIMITED RANGE AND KNOW THE TERRAIN VERY WELL. DON’T WE ALSO NEED NEW TYPES OF VERY DETAILED, ALMOST REAL-TIME DIGITAL MAPS OF OUR WORLD TO LET THE VEHICLES OF THE FUTURE GET AROUND IN FREE TRAFFIC?
It’s a chicken-and-egg problem. Even before vehicles are autonomous, all the cars already moving about can work as mappers and cartographers. Building those maps and models doesn’t require autonomous vehicles. The capabilities of vehicles as collaborative information-gathering systems already exist to detail, refine and update the information needed. Again, it’s a virtuous loop in which new data can be quickly integrated when, say, a construction site pops up somewhere on a road and influences local traffic.
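That virtuous loop can be sketched in a few lines of code. This is a hypothetical illustration, not any real mapping API: each passing, human-driven vehicle reports an observation for a road segment, and the shared map simply keeps the most recent report per segment, so a new construction site appears as soon as one car has driven past it. The names (`Observation`, `RoadMap`, the segment IDs) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    segment_id: str   # which stretch of road was observed
    timestamp: float  # when the vehicle drove through
    status: str       # e.g. "clear", "construction", "lane_closed"

class RoadMap:
    """Toy crowd-sourced map: newest observation per segment wins."""

    def __init__(self):
        self._segments = {}  # segment_id -> latest Observation

    def ingest(self, obs: Observation) -> None:
        # Ignore stale reports that arrive out of order.
        current = self._segments.get(obs.segment_id)
        if current is None or obs.timestamp > current.timestamp:
            self._segments[obs.segment_id] = obs

    def status(self, segment_id: str) -> str:
        obs = self._segments.get(segment_id)
        return obs.status if obs else "unknown"

# Two cars pass the same (hypothetical) segment; the later report wins.
m = RoadMap()
m.ingest(Observation("I-79:mile12", timestamp=100.0, status="clear"))
m.ingest(Observation("I-79:mile12", timestamp=250.0, status="construction"))
print(m.status("I-79:mile12"))  # construction
```

A real system would add verification and conflict resolution across many reporters, but the last-writer-wins core is the same loop described above.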
AND HOW WILL THAT WORK?
Here’s a real-world example. Many truck accidents on U.S. highways are due to the fact that overpasses have different clearances, and those change depending on repairs, the surface, curvature or sinkage of the roadway. That data needs to be collected, inspected and verified, and when you have several lanes, we’re not talking about a single measure of clearance. That used to be done by human surveyors every decade or so. Now, it can be gathered by human-driven machines or vehicles that pass underneath these overpasses. They don’t even need to slow down as they help build detailed three-dimensional models of our roadways. A similar thing is happening with 3D modeling for the utility industries, which includes detailed data sets about every light pole, every curb and so on. Many of these data sets have to do with tax and compliance issues such as roadway offsets or locations of buildings. The idea of using driven machines as information sources is a fantastic way of getting up-to-date information that is also very fine-grained. It’s happening right now, and it’s the tool of choice. We will see waves of innovation when it comes to the detail and realism of 3D models.
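The per-lane clearance computation behind that example is simple once a vehicle has gathered a 3D scan. Here is a minimal sketch under assumed inputs: each scanned point carries a lane label, a surface label (road or bridge underside) and a height, and clearance per lane is the lowest bridge point minus the highest road point. The point format and labels are illustrative, not a real survey data format.

```python
def lane_clearances(points):
    """points: iterable of (lane, surface, z) tuples, where surface is
    "road" or "bridge" and z is height in metres.
    Returns {lane: clearance_in_metres}."""
    road_max = {}    # highest road-surface point seen per lane
    bridge_min = {}  # lowest bridge-underside point seen per lane
    for lane, surface, z in points:
        if surface == "road":
            road_max[lane] = max(road_max.get(lane, z), z)
        else:
            bridge_min[lane] = min(bridge_min.get(lane, z), z)
    # Clearance is only defined where both surfaces were observed.
    return {
        lane: round(bridge_min[lane] - road_max[lane], 2)
        for lane in road_max
        if lane in bridge_min
    }

# Hypothetical scan of a two-lane underpass: note the lanes differ.
scan = [
    (1, "road", 0.00), (1, "road", 0.05), (1, "bridge", 4.90),
    (2, "road", 0.10), (2, "bridge", 4.75), (2, "bridge", 4.80),
]
print(lane_clearances(scan))  # {1: 4.85, 2: 4.65}
```

This is why a single posted clearance number is not enough: the roadway crown and surface wear give each lane its own figure, exactly as the interview notes.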
CAN YOU DESCRIBE THIS WORK ON ALTERNATE REALITIES FOR VEHICLES IN MORE DETAIL?
We’re reaching the point where we can make digital models of our environment that are of higher quality and possess more detail than the human eye could ever see. It takes a lot of computation because it goes far beyond the existing practice of fusing range models with digital camera imagery painted over them. But increases in processing power and more data to build those dense, high-quality models are a foregone conclusion. It’s absolutely attainable. Red Zone, one of the ventures that I founded, is about gathering that kind of data with the help of robots to provide it to the sewer and piping industry.
HOW DO WE HAVE TO IMAGINE THIS KIND OF IMAGING TO BUILD DETAILED 3D MODELS?
Think of driving down a street or having a sensor pack scan the freshly cut rock surface in a quarry. You see only a few features with your naked eyes: the gray rock surface with certain textures and accents. Machine vision can discern geometrical patterns in the rock that go beyond what we could ever see. The same is true if you think about driving down a road in the rain. Headlights help the human eye see a bit better, but now imagine you could successively compute away the raindrops and the limitations of illumination. The model would paint details of the curb, the buildings, the infrastructure around you. I’m not talking about superimposing details the way a head-up display does. The new 3D computer modeling will not paint a curb in your field of vision a bright colour, but will actually render the entire scene as you have never seen it before.
AND THIS WILL WORK WITH ON-BOARD SYSTEMS, OR DO WE NEED TO TAP INTO RESOURCES IN THE CLOUD AS FUTURE CARS WILL HAVE A PERMANENT BROADBAND INTERNET CONNECTION?
It can’t be done in real time on board yet. But we have seen demonstrations that use the cloud to perform this task. Also, keep in mind it’s not a question of either-or, but of combining the two. Computation accelerates generally and the technologies we need are getting much better and more powerful, so it’s only a matter of time. Take feature detection. The algorithms need to scale in order to register images up close, at a distance and when they are rotated at an angle. The technique is called the Scale-Invariant Feature Transform, or SIFT. During the DARPA Challenges, if we had reduced the resolution and made some other simplifications, we would have been able to perform this feature detection five times a second. With integrated circuits called field-programmable gate arrays, or FPGAs, we can now do feature detection 50 times a second. Instead of using small images at low resolution, we can process larger images at higher resolution and at high speed. And that’s one algorithm of many. So, super-realistic modeling is attainable and it’s coming.
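To make the 5 Hz versus 50 Hz comparison concrete, here is a back-of-the-envelope sketch of what those detection rates mean on the road. The rates are the figures from the interview; the highway speed of 108 km/h (30 m/s) is an assumption added for illustration, not a number from the text.

```python
def metres_per_update(speed_kmh: float, detections_per_sec: float) -> float:
    """Distance the vehicle travels between two feature-detection passes."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return round(speed_ms / detections_per_sec, 2)

highway_kmh = 108  # assumed highway speed: 108 km/h = 30 m/s

# DARPA-era rate: ~5 detections/s on reduced-resolution images.
print(metres_per_update(highway_kmh, 5))   # 6.0 m driven between updates

# FPGA-accelerated rate: ~50 detections/s on larger, higher-resolution images.
print(metres_per_update(highway_kmh, 50))  # 0.6 m driven between updates
```

Tenfold faster detection shrinks the blind gap between visual updates from car-lengths to well under a metre, which is the difference the FPGA figures imply.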
AUTONOMOUS VEHICLES NEED TO INTERACT WITH HUMAN DRIVERS IN OTHER CARS FOR A LONG TIME TO COME. HOW WELL WILL THIS MIX PLAY OUT?
It’s already happening. As automotive companies and others introduce all these safety features, they assist us piece by piece in certain situations or override human behaviour. In essence, these technologies make vehicles adhere to the rules of the road, no matter who is driving. Think about driving on a highway today. You’re going fairly fast, but you don’t have face-to-face contact with other drivers; you don’t communicate your intentions or explain what you want to do next. It’s all about behaviours observed and interpreted as we move along at high speed.
SO WE WILL REPLACE HUMANS GUESSING WHAT THE OTHER HUMANS MIGHT DO WITH MACHINES GUESSING WHAT OTHER MACHINES DO ON THE ROAD?
That observation-and-interpretation mode is compatible with all kinds of driving. It doesn’t need to distinguish between humans and machines driving. Honestly, there may never be a point at which the road belongs to humans one day and machines the next, or even a clear majority of one or the other. There will always be older vehicles or sports cars on the road without autonomous driving systems. But let’s be clear: autonomous driving is already a done deal today, and it will continue to advance steadily.