Google’s Self-Driving Car Is Smarter Than You Think

A white Lexus hybrid SUV inches to the left, creating a slightly wider buffer as it passes a bicyclist in the bike lane on a busy and unseasonably hot Tuesday afternoon not far from Google’s headquarters in Silicon Valley.

If the car were being driven by a human, that would be no big deal — except to share-the-road advocates. But this is Google’s self-driving car we’re talking about, and that seemingly unremarkable maneuver turns out to be one of the highlights of a day spent with members of Google [X]’s Self-Driving Car Project team.

“People hate driving,” Self-Driving Project director Chris Urmson said at a press event held at the Computer History Museum in Mountain View, Calif., near Google’s headquarters. Once you get to work in the morning, “it takes 30 minutes to decompress from that jackass who cut you off.”

Google’s self-driving car is an ambitious project that hopes to end human error behind the wheel with a very Googley solution: software. The tech titan’s robo-cars have logged more than 700,000 miles since the project began in 2009, and Google expects to have them ready for public use between 2017 and 2020.

And make no mistake, combining cars and human error has been a catastrophic recipe. Not only are more than 33,000 people killed annually in US car crashes, but such accidents are the leading cause of death for people under the age of 45.

The goal, as Urmson describes it, is to imagine a world where cars are safe.

“Google is uniquely positioned to solve this problem,” said Dmitri Dolgov, the software lead on the project.

“There’s a whole research field in taking a map and comparing it to your position,” he said, which is essentially what the project does.

But Google’s self-driving cars do that on a vastly more complex scale, because here “you” are a multi-ton vehicle hurtling through the real-world “map” at velocities fast enough to pulverize, if not kill, on impact.

The map that drives the Self-Driving Car Project

At the heart of the technology, what separates it from other sensor-driven autonomous vehicle projects, is a Google-made topographical map that gives the car a sense of what it should expect. The map, distinct from Google Maps, includes the height of the traffic signals above the street, the placement of stop signs and crosswalks, the depth of the sidewalk curb, and the width of the lanes, and it differentiates lane markings, from white dashed lines to double yellow.
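One way to picture the prior map described above is as a store of static road features the car can look up before it ever turns its sensors on. Here is a minimal sketch in Python; every field name is invented for illustration, since Google has not published its map format:

```python
# Hypothetical sketch of a prior-map tile like the one the article
# describes. All names and fields are invented for illustration;
# Google's actual map format is not public.
from dataclasses import dataclass, field

@dataclass
class LaneMarking:
    style: str  # e.g. "white-dashed" or "double-yellow"

@dataclass
class MapTile:
    # Static features the car can expect before sensing anything:
    traffic_signal_heights_m: list = field(default_factory=list)
    stop_sign_positions: list = field(default_factory=list)   # (x, y) pairs
    crosswalk_positions: list = field(default_factory=list)   # (x, y) pairs
    curb_depth_m: float = 0.0
    lane_width_m: float = 0.0
    lane_markings: list = field(default_factory=list)

# At runtime the car localizes against a tile like this, so it only
# has to detect what differs from the stored map: cars, people, cones.
tile = MapTile(lane_width_m=3.5,
               lane_markings=[LaneMarking("double-yellow")])
```

The payoff of this design, as Chatham explains below, is that the car never has to rebuild the static world from scratch.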

The cars depend on this prebuilt map, which is why their urban excursions are limited to Mountain View for now, but the project’s lead mapping engineer, Andrew Chatham, made it sound like the goal is to wean the car off such heavy reliance on the map in the future.

“We’re certainly relying less on the perfect accuracy of the map as time goes on,” he said. “We’re also improving our ability to build the maps.”

The software solution that Urmson, Dolgov, Chatham, and their team are betting on can be seen in action in Google’s recent wireframe video of what the car “sees” as it moves down a street. It combines the prebuilt map with real objects detected by its laser-powered Lidar and camera systems.

These objects can be in motion, such as vehicles, pedestrians, and bicyclists like the one my car eased over to give extra room. In California, where motorcycles can legally “lane-split” by riding along lane markers, the car will make room for the motorcyclist if there’s extra space in the lane to give. But the car takes note of construction, potholes, and other immobile obstructions, too.

“Instead of having to rebuild the world from scratch every time we turn it on, we tell it what to expect when it’s empty and then respond to when it’s filled,” Chatham said.

Chatham told CNET that the car’s mapping does not use technology related to Project Tango, Google’s 3D mapping tech for smartphones, although he has “heard of it.”

Although the Google team was reluctant to quantify the amount of data that the cars produce, they did explain that all the information from the roof-mounted Lidar, strategically placed cameras, and assorted sensors gets processed through Google’s machine-learning algorithms to spit out essentially two numbers: how much to throttle, and what angle to turn the steering wheel.
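The striking point above is that however much sensor data flows in, the driving stack ultimately emits just two numbers. A toy sketch makes the idea concrete; the bare proportional controller below is entirely invented for illustration and has nothing to do with Google’s actual machine-learning pipeline:

```python
# Toy illustration of the article's point: the output of the whole
# stack is just two numbers, throttle and steering angle. This
# proportional controller is invented for illustration only.
def control_outputs(target_speed_mps, current_speed_mps,
                    heading_error_rad,
                    k_throttle=0.1, k_steer=0.8):
    """Reduce the car's state to (throttle, steering_angle)."""
    # Throttle: proportional to the speed error, clamped to [0, 1].
    throttle = max(0.0, min(1.0,
                   k_throttle * (target_speed_mps - current_speed_mps)))
    # Steering: proportional to how far the heading is off, in radians.
    steering_angle = k_steer * heading_error_rad
    return throttle, steering_angle

throttle, angle = control_outputs(target_speed_mps=13.4,   # ~30 mph
                                  current_speed_mps=10.0,
                                  heading_error_rad=0.05)
```

In the real system those two outputs come from learned models fed by Lidar, cameras, and the prior map, but the interface to the car is just as narrow.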

Miles and miles to go

Watching the car execute a perfect left turn from inside the vehicle as our driver, Google’s Ryan Espinoza, kept his eyes on the road but his hands in his lap was the first major test I experienced in the car. It’s something that you know a self-driving car must do, but it requires awareness of so many variables — oncoming vehicles, lane width, the ability to smoothly accelerate through the turn’s arc — that it was impressive just to see it done.

The system is so advanced that Urmson said that the Google cars can even avoid placing themselves in other vehicles’ blind spots, a remarkable feat that most human drivers can’t manage, but he cautioned that the cars still need work.

That was evident from the moment I squished myself into the middle back seat of the Lexus SUV, flanked by two other reporters. The car’s front leather seats were occupied by a Google driver and “co-driver,” a team present in every Google self-driving car that hits the road.

The driving teams spend nearly eight hours a day, every day in Google’s two dozen self-driving cars. The Lexus SUV hybrid I rode in appeared to be a late-model RX 450h, rated for 30 mpg — best in class, according to Lexus’ website. At an average of $4.20 per gallon in Mountain View, estimating six hours of actual drive time to accommodate for stops, and assuming an average speed of 30 mph, Google is spending around $600 to fuel its self-driving fleet every day, five days a week.
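That back-of-the-envelope estimate checks out. Using only the figures quoted above, the arithmetic runs:

```python
# Back-of-the-envelope check of the fuel estimate in the article.
cars = 24                 # "two dozen self-driving cars"
hours_driving = 6         # estimated actual drive time per day
avg_speed_mph = 30        # assumed average speed
mpg = 30                  # Lexus RX 450h rated fuel economy
price_per_gallon = 4.20   # Mountain View average, per the article

miles_per_day = cars * hours_driving * avg_speed_mph  # 4,320 miles
gallons = miles_per_day / mpg                         # 144 gallons
daily_cost = gallons * price_per_gallon
print(f"${daily_cost:.2f}")  # → $604.80
```

Hence the article’s figure of roughly $600 a day, five days a week.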

The driving pair performs two tasks. As you’d expect, they’re there to take control of the car in case of an emergency. There’s even a large, custom red button about 2 inches across and mounted to the right of the gear shift that can be hit to disable autonomous control instantly, although our driver and co-driver, Espinoza and Nick Van Derpool, said that they couldn’t remember a time when they had to use it except to test that it worked.

The second task is to track the car’s progress. While the driver sits in the driver’s seat, hands and feet idle except when called upon to take over from the computer, the co-driver sits with a laptop displaying the real-time wireframe of the world around them, as constructed by the roof-mounted Lidar. The co-driver logs both well-executed maneuvers and situations where the car could do better. That information is then fed over a high-latency mobile data connection into a database that is used to improve how all the cars handle road situations.

It’s that reliance on human input that highlights how much further the car has to go, despite its successes. The car I was in knew to hang back nearly four car lengths from a Mini Cooper that swerved unexpectedly into our lane, until it could begin to predict the Mini’s path. Yet our trip began with Espinoza manually driving us out of the Computer History Museum parking lot and onto La Avenida Street. Only then was he able to activate autonomous control, but who wants a self-driving car that can’t get itself on the road?

Another problem that the Google car has yet to conquer is weather. Urmson said that the car can handle heavy rain and fog about as well as a human, but that high-velocity freeway driving and rain are problematic. The team has not yet tested the cars in snow. Given the lofty goal of making driving safer, a system that’s as good as a human driver isn’t going to cut it.

Where do we go from here?

Use cases are easy to imagine. The elderly and people with disabilities, even temporary ones, will be able to get around more independently than they can today. The challenge of getting people in suburban areas to and from public transportation hubs, often referred to as the “first and last mile,” could be solved by fleets of self-driving cars.

And just as the Tesla Model S has a “frunk,” a front trunk where other cars keep their internal combustion engines, widespread use of the self-driving car could eventually lead to a complete redesign of the driver-centered car, or change how we develop cities just as the world approaches the point where more than half of all people live in them.

But as with many of Google’s moonshot projects, there’s more to the story than just figuring out the technological solution. The success of the self-driving car raises questions that the Google team isn’t working on.

How are self-driving cars insured? Who pays if a robo-car is involved in a crash? What happens when there’s a 90 percent reduction in car-caused deaths? Are self-driving cars coming too late for the US, as annual miles driven has begun to dip below the peak of 3 trillion from a decade ago? How do you prevent self-driving cars from getting hacked? And what happens to all that data that Google and its self-driving car competitors will be collecting on their passengers?

At least for the last two, the Google team offered concrete answers.

“One of the things about Google is that we have an incredible resource in security,” said Urmson, citing Google’s computer security work in the Chrome browser and advancing encryption. “We’re bringing some of that experience” to the cars, he said.

“There is a big red button [in the car],” said Chatham, “[but] there is no silver bullet for security. We take a multilayered approach.” That approach, they said, means that even if a hacker can get access to the car, additional security measures prevent a command like “turn left now” from being successfully executed.

They declined to specify further what those measures were. And what about Google’s pervasive need to convert everything about you into data?

Chatham said, “Right now we’re not sharing the data with anyone.”

Urmson said, “Right now, the data is used exclusively to improve the vehicles,” and then later added, “We want to be careful with all of our customers’ data.”

The fear of our vehicles becoming the largest deployment of robots was depicted hilariously on last weekend’s episode of “Silicon Valley,” where a self-driving car that looked suspiciously like Google’s set a destination for a tiny desert island 5,000 miles away and pulled into a shipping container to get there.

But the reality is that these cars are coming, and quicker than you might think. Volvo just announced that it’s building a system to allow autonomous cars to run a 35-mile loop around Gothenburg, Sweden, by 2017. While Google’s plans call for a window between then and the end of the decade, Urmson has a different reason to beat that deadline.

Urmson noted that teenagers are terrible drivers and the statistics reflect that.

“I have a 10-year-old son, so I have six years to get this done,” he said.

 

Author: Seth Rosenblatt
Source: CNET