The Susan Kuchinskas white paper on the iterative approach to autonomous driving has been released in anticipation of the June 2015 Telematics Update Conference in Detroit. It suggests that although driver assists will be added to cars in the near future, a car operating without any human intervention may never be in widespread use. Those planning, designing, and operating transportation infrastructure will still have to consider human factors and limitations.
Levelling Up to Driverless Cars
By Susan Kuchinskas
It’s very clear now that automakers are taking an iterative approach to autonomous driving: gradually improving the capabilities of their advanced driver assistance systems (ADAS) year after year while fusing them together both technically and via product names. BMW and Mercedes-Benz have semi-autonomous offerings on the market, and seven other OEMs are on track to have highly automated systems that include traffic jam assist and automated highway driving by 2020, according to Frost & Sullivan.

No one really wants to use the A-word, that is, “autonomous.” Ask the folks who are plotting their roadmaps when we’ll be able to fall asleep and let the car take us where we’re going, and they respond, “Never – except maybe for niche situations.” They’re much more comfortable with the current “levels” concept – even though they don’t always agree on which level is which.

“Automakers are still trying to figure out what autonomous means in the near future, at the five-year and ten-year marks,” says Kumar Krishnamurthy, a partner in the IT practice of Strategy&. Some automakers are moving to driverless cars driven by external pressures such as the hype around Google’s well-publicized work on self-driving cars. Others, he says, are taking more targeted approaches, focusing on one thing at a time, such as automatic parking. And at least one automaker he’s talked to is betting that drivers will always want to drive. “Where those strategies are going to land is unclear,” he says.

General Motors’ Super Cruise will be introduced in one 2017 Cadillac model, and GM is a case in point for the iterative introduction process. “We’re doing Super Cruise as a big step toward autonomous driving,” says John Capp, director of global safety strategy for product engineering at General Motors. The foundation was the driver safety package that’s now almost standard in all Cadillacs.
Capp says that Super Cruise builds on some of the sensors, radars and cameras by letting them communicate with each other. “It’s a realistic next step.” GM’s Super Cruise provides an illustrative example of how ADAS features are merging. According to Capp, much of the input from the sensors is going to a central computer powered by Nvidia processors. This central computer compares data from different sensors to improve reliability. On the other hand, Capp says, “There’s also a trend where some of these ADAS features have become simplified and commoditized, so some of the brains will be on the individual sensors. This will enable us to deploy them more broadly and put them on more vehicles.”

And then, there are Tesla and Google. Tesla has begun shipping cars with the hardware for what it calls “autopilot,” with the aim of delivering the functionality later over the air. Tesla autopilot will include active safety features to avoid collisions from the front or sides, as well as lane departure avoidance and safe lane changing with activation of the turn signal. Even the hyperbolic Tesla specifically says that this will not enable driverless cars. Google, as always, is going its own way; Frost & Sullivan analyst Prana T. Natarajan expects the search advertising company to release a fully autonomous aftermarket system in 2018.
Locating the brain
According to Natarajan, automakers that are actively developing highly automated driving systems – aside from Google – are using nearly identical arrays of sensors. The differentiation, he wrote in a recent report, is likely to be in reducing the cost of sensors and refining the automated driving experience. Eventually, more sophisticated algorithms will reduce the number of sensors needed. OEMs are taking different approaches as to where those algorithmic “brains” will be: on individual sensors, within sensor modules or in a central computer.

Delphi, which needs to be agnostic in its approach to sensor fusion in order to serve its variety of customers, offers a module that handles sensor fusion within the component. Its Radar and Camera Sensor Fusion System (RACam) integrates radar sensing, vision sensing and data fusion into the module, enabling a suite of active safety features that carmakers can choose from, including adaptive cruise control, lane departure warning, forward collision warning, low speed collision mitigation, and autonomous braking for pedestrians and vehicles. It recently announced that the Volvo XC90 will use RACam for its advanced driver safety package including automatic braking at intersections.

John Absmeier, director of Delphi’s Silicon Valley Lab, says that RACam is simply a packaging choice, not a technology strategy: RACam also can facilitate fusing sensor data in a more central computer. He adds that one strategy is not better than the other. “Each of our customers has their own vision and roadmap for their architecture. We have to be flexible in that regard,” Absmeier says – although he adds that providing this in a single package makes it more cost-effective and easier to integrate with the rest of the vehicle.

Bosch is another tier one that is putting some of the brains on sensors. In Bosch’s 2015 Chrysler crash prevention package, the long-range radar sensor is the centerpiece of the automatic distance and speed control systems.
The radar contains two levels of processing, one for the sensor data and another for the actual functions, including adaptive cruise control and forward collision warning. Still, “there’s no single answer to the sensor fusion question,” explains Jan Becker, director of engineering, automated driving, chassis systems control for Bosch. For example, some Audi models have Bosch systems with two radar sensors; the data from both is integrated via one of the radars. For upcoming systems, this data integration will typically remain a function of the radar. However, Becker says, “For higher automation, we do foresee currently that it will require significant change. For those systems, we are developing future electronic control units that will do the processing of all the sensor information to provide 360 degrees of visibility.”

Aggregating data from multiple sensors and then processing it within a central computer to provide an accurate picture of conditions is the third approach. Danny Shapiro, who heads Nvidia’s automotive business, says this approach increases accuracy and reliability by eliminating false positives, especially with the addition of vision recognition software. But this approach takes immense computing power. Nvidia’s idea is one “supercomputer” constantly crunching data from the car’s myriad sensors. “Having the ability to centralize that creates a more reliable system and a more efficient system from a total cost perspective,” he says.

He does not mean that there will be a single computer handling all of the car’s functions, however, and infotainment will probably not merge with advanced-safety or autonomous features. “There will be a blurring of the lines,” Shapiro says. For example, the ADAS system’s front-facing camera with computer vision might read speed limit signs and display that information on the instrument cluster or head-up display. In such a case, the infotainment system is part of the driver-assist system.
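Shapiro’s point about eliminating false positives by comparing sensor streams can be sketched simply: confirm an obstacle only when independent sensors roughly agree. The function, weights, and tolerance below are illustrative assumptions, not any vendor’s actual fusion logic.

```python
# Minimal sketch of cross-checking two sensor streams before acting.
# The fusion rule, weights, and tolerance are illustrative only.

def fuse_detections(radar_range_m, camera_range_m, tolerance_m=2.0):
    """Confirm an obstacle only when radar and camera roughly agree.

    Returns a fused range estimate in meters, or None when the sensors
    disagree (treated here as a likely false positive).
    """
    if radar_range_m is None or camera_range_m is None:
        return None  # one sensor saw nothing: don't brake on a single source
    if abs(radar_range_m - camera_range_m) > tolerance_m:
        return None  # sensors disagree beyond tolerance: suppress the alert
    # Weight radar more heavily: it measures range directly, while a
    # camera infers it from image geometry.
    return 0.7 * radar_range_m + 0.3 * camera_range_m

print(fuse_detections(40.0, 41.5))  # agreement: fused estimate
print(fuse_detections(40.0, 55.0))  # disagreement: suppressed
```

A production system would fuse full object tracks over time rather than single range readings, but the design trade-off is the same one Shapiro describes: acting on corroborated data rather than any one sensor.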
According to Shapiro, it will be a while before a more open infotainment system is directly linked to vehicle controls. That’s partly the legacy of the way automakers are structured, with different departments handling these functions. But there are also safety and security concerns, he says. The infotainment system may have weaker security and could be used as an entry point for hackers to gain remote access to vehicle controls.
The ADAS systems from Delphi and others are advanced, but sensor function and reliability still need to be improved. Absmeier says that while usable radars are now in production, visual perception systems such as cameras and LIDARs are nowhere near ready for production. Cameras still don’t work well in certain light conditions or where there are occlusions. LIDARs still cost thousands of dollars and don’t perform well in conditions like snow, fog or heavy rain. “We are doing a great job of building these building blocks,” Absmeier says, “but getting them to the point where they are mass-marketable still has challenges.”

Each type of sensor has strengths and weaknesses, notes Bosch’s Becker. Especially as we move toward more autonomy, letting the driver relax a bit or focus on other tasks, he adds, it’s necessary to develop the correct combination of sensors – one reliable and highly robust in all use cases, so that the drawbacks of one sensor are compensated for by the strengths of others. But Becker says, “Sensors are currently not robust enough to allow the driver to go out of the loop.”
Fail-safe times two
Becker foresees the need for a new systems architecture for highly autonomous vehicles. “Once we allow the driver to go out of the loop, the requirements on reliability specifications for actuation systems go up significantly,” he says. Vehicles on the market today are designed to be fail-safe: if a component fails, it fails into a safe state, Becker points out. “Future vehicles will need to be designed to be fail-operational.” That is, if one component fails, the vehicle’s automatic systems still need to operate. For example, if a component responsible for steering autonomously fails, something else needs to take over and keep the car on the road.

The simplest but most cumbersome and expensive solution, according to Becker, is simply installing two of everything. Instead, Bosch is analyzing every component to decide if it needs to have a backup or if another component can be used to create redundancy. One example is Bosch’s automatic braking systems. These systems currently use Bosch’s electronic stability program (ESP) to actuate braking. Bosch also makes iBooster, an electronic control that boosts braking power. Becker explains, “We can develop a system that combines ESP and iBooster in a way that we have two redundant braking systems. By having two different braking systems, we can make sure they don’t show the same failure cases.” In a similar fashion, Bosch could use a braking system to provide backup for the steering system, braking individual wheels to control the direction of the car.

Continental is also grappling with the fail-safe issue, according to Steffen Linkenbach, director, systems & technology for Continental Automotive Systems. Its approach is to have backup systems in case the autonomous driving system fails. “If we have a fault in one system, we’ll have a backup system. This is the systems architecture coming out of our functional safety analysis,” he says. “The key is to start thinking about what happens in case of failure.
If we allow fully autonomous driving, we have to take care of the safety of the system. If you have a fault in automated driving mode, we need safety features independent of this automated driving.” For this reason, Continental is starting with automated safety features, he says, and then adding functionality on top of them, aiming toward highly automated driving.
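The fail-operational principle Becker describes – backing up one function with a dissimilar component rather than a duplicate – can be sketched as a fallback chain. The component names and interfaces below are hypothetical; they only illustrate the pattern of steering via differential braking when the primary actuator faults.

```python
# Sketch of a fail-operational actuation chain: if the primary actuator
# faults, control falls through to a dissimilar backup system.
# All names and interfaces here are hypothetical.

class ActuatorFault(Exception):
    pass

def steer_with_rack(angle_deg, healthy=True):
    """Primary path: command the electric power steering rack."""
    if not healthy:
        raise ActuatorFault("electric power steering offline")
    return f"steering rack set to {angle_deg} deg"

def steer_with_brakes(angle_deg):
    """Backup path: differential braking. Slowing the wheels on one
    side yaws the car, giving degraded but usable steering authority."""
    side = "left" if angle_deg < 0 else "right"
    return f"braking {side} wheels to approximate {angle_deg} deg"

def fail_operational_steer(angle_deg, steering_healthy):
    try:
        return steer_with_rack(angle_deg, steering_healthy)
    except ActuatorFault:
        # Fail-operational, not merely fail-safe: the vehicle keeps
        # steering via a dissimilar system instead of just stopping.
        return steer_with_brakes(angle_deg)

print(fail_operational_steer(5.0, steering_healthy=False))
```

Using two dissimilar systems, as in Bosch’s ESP-plus-iBooster example, also guards against common-mode failures: the backup is unlikely to share the failure cases of the primary.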
One of the most important sources of up-to-date external information will come from dedicated short-range communications among cars, known as DSRC or V2V communications, according to Andreas Mai, director of smart connected vehicles at Cisco. “DSRC gives the vehicle a bit more time to react to dangers that may not be sensible by onboard sensors,” he says.

Updated maps are another important element of advanced autonomous driving that still must be worked out. Even Google’s very expensive and autonomous demonstration car can only drive in areas that have been pre-mapped, and it reportedly has difficulty navigating in rain and snow. In fact, truly autonomous cars will need to be able to draw on a wealth of external data and assimilate it, most likely, partly in the cloud and partly aboard the vehicle. Delphi’s Absmeier says even highly accurate GPS alone may not be enough to meet all driving challenges. “Even though the camera can’t see the line on the road – or there’s no line on the road – GPS has to [know whether I] am still on the road or not. You have to add other systems like mapping.”

There are plenty of other data points he foresees automated cars making use of, including not only the weather, road conditions and traffic, but also information about surrounding vehicles and drivers. Access to neighboring vehicles’ health reports would let the car’s computers assess, for example, whether worn brakes might impair the ability of a car in front to brake in time to avoid a hazard. Cisco’s idea is to pool all these data sources into a central repository in the cloud that could be accessed for making real-time, automated-driving decisions.
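Mai’s point that DSRC buys reaction time is easy to put in rough numbers: a line-of-sight radar only sees the vehicle directly ahead, while a V2V broadcast can report hard braking from farther away, including around obstructions. The ranges and speeds below are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope sketch of the extra warning time V2V can provide.
# All numbers are illustrative assumptions.

def warning_time_s(range_m, closing_speed_mps):
    """Seconds until reaching a hazard first reported at range_m."""
    return range_m / closing_speed_mps

onboard_radar_range_m = 150.0  # assumed long-range radar, line of sight only
dsrc_range_m = 300.0           # assumed DSRC reach, not limited to line of sight
closing_speed_mps = 30.0       # ~108 km/h closing speed

onboard = warning_time_s(onboard_radar_range_m, closing_speed_mps)
v2v = warning_time_s(dsrc_range_m, closing_speed_mps)
print(f"onboard warning: {onboard:.1f} s, with V2V: {v2v:.1f} s")
```

Even a few extra seconds matters at highway speeds, which is why Mai frames DSRC as a complement to onboard sensing rather than a replacement.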
Mai, the Cisco connected car executive, says that pieces of information would come from public databases, insurance companies and automakers that monitor vehicle health: “Those pools of knowledge need to be merged to be virtually one pool of data that you can analyze.” Cisco is in the process of creating a virtual database of relevant information for autonomous cars. In fact, according to Mai, such a database is technically possible today; it’s the infrastructure for ubiquitous connectivity that’s holding this back.

In light of the lively debate about who owns customers and data, Cisco’s view is that it’s necessary to create a marketplace or exchange for consumer data. “The person who owns the car could decide what data he wants to share with whom, for what value. This could be the source of a lot of innovation: Enterprises could actively bid for getting access to your data.” Cisco thinks its data virtualization technology is a critical enabler for such a marketplace. Meanwhile, Mai says there are plenty of entities jockeying to run the marketplace, including automakers, tier ones and wireless network operators.
When the vehicle is in autonomous mode, it’s far from clear just how to get the driver’s attention when they need to take over. Continental has been demonstrating its Driver Focus system, which uses a camera to track the driver’s line of sight in order to tell whether or not they are looking at the road. Another open question is exactly when the autonomous system should hand control back or alert the driver of upcoming conditions. Continental’s Linkenbach says, “Nobody knows right now what is the best way to hand over the driving task to the car and get it back again. That’s why the systems on the market do not let the driver be completely out of the loop.” Because it’s likely impossible to ensure the driver can take over quickly, as well as impossible to design autonomous systems that can handle any eventuality, Linkenbach believes we’ll never see fully autonomous passenger vehicles in wide use.

Nvidia’s Shapiro agrees. He says, “I think we will have cars that can drive themselves in a number of situations before 2020. I’m definitely less confident in having the steering wheel-less car. The technology will be there, but I don’t think the other infrastructure and legal systems will be ready.”
Regulation: the biggest barrier
GM’s Capp says that some models will have V2V technology in 2017, whether or not NHTSA has mandated it by then. “We’re being supportive of rule-making,” he says. “We wanted to start the ball rolling and do some learning on our own.” GM has chosen a vehicle that has some practical aspects that make sense for V2V, just as the Cadillac CTS made sense for Super Cruise. “Initially, these will come on two different products, but you can guess they’ll end up merging down the road. Sometimes you have to spread out the pain a little bit,” Capp says.

Delphi’s Absmeier says one thing that’s stalling regulation is the lack of simple, coherent definitions of the different levels of automation. NHTSA has defined four levels, the Society of Automotive Engineers has defined five, and Germany’s Bundesanstalt für Straßenwesen (BASt) defines five levels that are slightly different still. Absmeier says, “It’s hard to talk about a technology when everybody has a different idea of what it means. There is starting to be some convergence, which is a wonderful thing.” In fact, he sees most of the work on autonomous technology among these bodies as focused on those definitions, adding, “I don’t think there’s any discussion yet about regulations beyond that.”

The ISO 26262 functional safety standard covers advanced safety systems, and most automakers and tier ones seem comfortable that it will be extended to cover true autonomous functions. In that case, the industry’s existing standards and processes will work. Becker says firmly, “Nothing really changes in the future from what we have today. ISO 26262 prescribes certain standards for the safety of technical systems and will apply to vehicles in the future. We’ve managed to bring very safe vehicles to market in the past and I don’t think anything needs to change for higher automation.”
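The definitional tangle Absmeier describes is easiest to see side by side. The summary below paraphrases NHTSA’s 2013 policy (levels 0–4) and SAE J3016 (levels 0–5); both schemes include a “level 0” no-automation baseline, which is why the counts above, which exclude it, read as four and five. The short labels are abbreviations, not official wording.

```python
# Side-by-side paraphrase of the two main US level schemes circa 2015.
# Labels are shortened paraphrases, not the official definitions.
NHTSA_LEVELS = {
    0: "no automation",
    1: "function-specific automation (e.g. cruise control)",
    2: "combined-function automation (e.g. ACC plus lane centering)",
    3: "limited self-driving; driver must be available to take over",
    4: "full self-driving automation",
}

SAE_J3016_LEVELS = {
    0: "no automation",
    1: "driver assistance",
    2: "partial automation",
    3: "conditional automation",
    4: "high automation (no driver fallback within its design domain)",
    5: "full automation (all roads and conditions)",
}

# The mismatch Absmeier describes: NHTSA's single top level spans
# what SAE splits into levels 4 and 5.
print(len(NHTSA_LEVELS), "NHTSA levels vs", len(SAE_J3016_LEVELS), "SAE levels")
```

BASt’s five-level German scheme draws its boundaries slightly differently again, which is precisely why a system one body calls “highly automated” may sit at a different level under another.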
As we get closer to delivery of models with full autonomy, will auto sales lag as consumers wait for the full monty? Or will consumers who buy 2019 models feel shafted that they didn’t get the latest, greatest technology? It’s possible. In October, Tesla got pushback from consumers who bought its Model S right before the release of autopilot capability. The San Jose Mercury News reported that 600 Model S owners had signed a Change.org petition asking for a retrofit.

Tesla takes the same approach to upgrades as software companies do: it constantly adds new features instead of saving them for the next model year – an arguably old-fashioned cycle that makes it difficult for traditional OEMs to respond to changes in consumer tech. Many of Tesla’s updates can be provided to existing owners by way of over-the-air upgrades – but not autopilot. Situations like this are, of course, inevitable in the automotive business, where each model year brings new enhancements. The bar is set higher for Tesla because of its rabid fan base of wealthy technophiles, known in some circles as rich geeks. But it’s a fair question: If consumers know self-driving cars are two years down the road, will they hold off buying human-controlled models?

Another concern is whether consumers even want a “self-driving car.” GM’s Super Cruise is marketed as an enhanced version of a feature that consumers are very comfortable with: cruise control. In keeping with the Cadillac brand positioning, GM CEO Mary Barra positioned this as a luxury feature in her ITS World Congress keynote: letting the car do the work for you. GM’s Capp says the company may not bring out an evolution of existing technology every year, but rather will offer incremental enhancements to existing systems, such as Cadillac Super Cruise. He adds that the carping about the slowness of the automotive industry to embrace technology is unwarranted. “Chill out a little bit.
There’s a technology issue, and there’s customers rolling along with these things,” he says. “Adaptive cruise control is just starting to penetrate the marketplace. Even when they’re available, are people going to buy these right away?”