The Autonomous Car That Won’t Be — What Google Wants
Take a drive through Mountain View, CA; Austin, TX; Kirkland, WA; or Phoenix, AZ, and you will see them. Little Google cars driving themselves around town. Google's new concept car doesn’t even have a steering wheel. Just an emergency stop button. But Google (or, more accurately, Alphabet) is careful to call them “self-driving” or “driverless” vehicles, not “autonomous.” That’s because they won’t be autonomous. Not only will they be communicating with other vehicles to pool their observations and coordinate actions, they will also be connected to — and largely controlled by — a cloud-based Traffic Control System (TCS). As with today’s driverless car trials, there will be human traffic engineers sitting behind desks observing, but the real-time decisions to optimize traffic flow will be made by a deep-learning artificial intelligence.
It is not likely that Google/Alphabet will ever manufacture or sell vehicles beyond the prototype phase. They will license patents and designs, probably tied to the use of an “Android for Vehicles” operating system. These vehicles may have the capability for autonomy, but only as a fallback for times when they are temporarily unable to communicate with the TCS.
In most circumstances, at least in metropolitan areas, the vehicle will be an intelligent mobile node within a much more intelligent traffic control system, which itself will be part of an even bigger Smart City. The Smart City control system is the real prize that Alphabet, Microsoft, and others will be competing for. Alphabet has a huge advantage, particularly for the traffic control system. An effective TCS depends on mobile device operating systems, driverless cars, accurate maps, comprehensive street-view images, and deep-learning artificial intelligence (AI). Alphabet has all of those. Just as we have seen Google/Alphabet sponsoring fiber and wireless broadband deployment, I believe we will see them paying cities to install or upgrade traffic lights, cameras, road-embedded sensors, railroad systems, and mass transit systems that will integrate into their traffic control system.
Imagine how much more efficient traffic will be in such a smart city! The deep-learning AI will not be programmed by humans, other than for safety overrides. Otherwise, we will program it by our travels. Vehicles will register their destination, and the TCS will calculate the (near-)optimal route based on all the current traffic in the area. Prefer to take a specific route? No problem. Just don’t count on it being the fastest route. Traffic lights will not be timed, but will be commanded by the TCS for maximum city-wide traffic optimization. Driverless cars, following commands from the TCS, will speed up or slow down to join into clumps, which will arrive at an intersection one or two seconds after the light turns green. The only cars waiting at traffic lights will be those driven by humans who wrongly estimated that they could catch up to the clump ahead of them. Lanes will be cleared for emergency vehicles well in advance of their position. If an area-wide or city-wide evacuation order goes into effect, all lanes will be used for exiting traffic, except for lanes left open as necessary for emergency vehicles, and all traffic will be routed for the most efficient evacuation.
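At its core, the route calculation described above is a shortest-path problem over a road graph whose edge weights are live travel times rather than distances. A minimal sketch of that idea, using Dijkstra’s algorithm (the road graph, intersection names, and travel times below are all hypothetical; a real TCS would of course work at a vastly larger scale with constantly changing weights):

```python
import heapq

def fastest_route(graph, start, dest):
    """Dijkstra's shortest path where edge weights are current
    travel times in seconds, as a TCS might estimate them."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, seconds in graph[node]:
            nd = d + seconds
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk the predecessor chain back from the destination.
    path, node = [], dest
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[dest]

# Hypothetical road graph: intersection -> [(neighbor, travel time in s)].
roads = {
    "A": [("B", 30), ("C", 90)],
    "B": [("C", 30), ("D", 120)],
    "C": [("D", 30)],
    "D": [],
}
route, seconds = fastest_route(roads, "A", "D")
print(route, seconds)  # ['A', 'B', 'C', 'D'] 90.0
```

The point is that the “fastest route” is not a fixed property of the map: as the TCS updates the edge weights with real-time congestion, the same query can return a different route minute to minute.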
Relatively few individuals will ever own a fully driverless car. Vehicles will be owned and operated by transportation service companies like Lyft, Uber, and any number of new entrants. Click a button on your smartphone, and one will pull up to your curb a little before you are ready to leave. Depending on your needs, it might be an inexpensive commuting model, a luxury vehicle, or an SUV. Most will be electric vehicles that you will never need to worry about charging, but you might also call a gas/electric hybrid to drive you from San Francisco to Yosemite. Or one that also allows manual driving, so you can take it “off the grid.”
What happens when you step into a driverless car for a trip across town? If you choose to identify yourself, screens around the inside of the car will automatically respond to your preferences. The kind of jazz you like will start playing. Information about your investment portfolio will appear on one screen, news on another. Another will show you where your kids are. Or maybe the online game you like to play. Or the show you have been watching. On all of them, of course, there will be custom-tailored ads. Unless you pay to turn them off. Alternatively, you can turn off the screens, recline the seat, and take a nap to the soothing sounds of nature. A chime will announce your arrival, along with the sponsor message: “This nap brought to you by [insert corporate sponsor here]”.
So what is Google/Alphabet after? Same as always: information and ads. They want to know everything about you, to anticipate your needs, connect you to whatever information you want, facilitate what you do, and serve up advertising custom-tailored to your predilections and preferences. Even if you choose not to reveal your identity, the AI in the cloud will quickly determine what kind of person you are by what you look at (yep, the screens will be watching your eyes) and what music you choose.
The metadata will be valuable as well. Google/Alphabet (AlphaBorg?) will know what a typical daily traffic pattern looks like in Metropolis. If something atypical happens, it will get flagged for more AI or human attention. Another AI would be able to run scenarios: what if we widened this road? What if we built a new bridge here, or there? What impact will all those new apartments being constructed have on traffic? Other information has commercial value. Which restaurants and clubs are most popular, and when? Why is daily traffic from the financial center to XYZ corporation going up? Just saying, lots of interesting information in traffic patterns.
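The “flag something atypical” step above is, at its simplest, outlier detection against historical norms. A toy sketch of one common approach (a z-score test per hour of the day; the vehicle counts below are invented, and a real system would use far richer models than this):

```python
from statistics import mean, stdev

def flag_atypical(history, today, threshold=3.0):
    """Return the hours whose traffic count deviates from the
    historical mean by more than `threshold` standard deviations."""
    flagged = []
    for hour, count in enumerate(today):
        past = [day[hour] for day in history]
        mu, sigma = mean(past), stdev(past)
        if sigma and abs(count - mu) / sigma > threshold:
            flagged.append(hour)
    return flagged

# Hypothetical vehicle counts for three hours of the day,
# observed over four previous days.
history = [
    [100, 500, 300],
    [110, 520, 310],
    [ 90, 480, 290],
    [105, 510, 305],
]
print(flag_atypical(history, [102, 900, 298]))  # [1] — hour 1 is anomalous
```

Anything the detector flags (a surge toward one neighborhood, an unusual commute pattern) is exactly the kind of event the article suggests would get escalated for AI or human attention.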
How likely is the scenario above? I’d say 100%. Just a matter of timing. Isn’t it always?