Driver Assistance & the Path of Most Resistance
November 2020
Drivers do not want to crash, nor should they have to. Features that provide Driver Assistance are in demand and on the rise. Consumers understand “emergency braking” and “crash prevention” as available features, and these are increasingly becoming a primary purchase-consideration factor for most new and many used vehicles. Many would say that they are the most important features in modern vehicles. They may also be the most concrete implementation of an intelligent system of sensors and software with a direct cause and effect on our daily lives. The Driver Assistance (DAT / ADAS) functional domain is the cognitive on-ramp to autonomous comfort and trust. Or as the folks at J.D. Power like to put it, “Today’s experiences with a technology drives tomorrow’s desire.”
Unfortunately, our understanding of how these features operate, and general consumer opinion of them, is not as positive as it should be. In an August 2019 survey, J.D. Power reported that 23% of owners of vehicles with lane-keeping or lane-centering systems found them irritating. Around the same time, the Center for Automotive Research (CAR) proclaimed that 60% of people with automated driver assistance systems are turning them off. In addition, a survey from Nationwide Insurance stated that, “In today’s high-tech automotive world, the challenge isn’t necessarily getting the features that you want; the challenge lies in understanding and mastering the features you have.”
The competitive hype and marketing spin around these systems continue to grow, as does consumer interest, but not consumer comprehension. For years now, we seem to be stuck in a holding pattern of limited understanding and appeal, despite the addition of three-dimensional visualizations, more realistic vehicle renderings, high-resolution displays, detailed pop-ups, motion graphics and more. In this case, the challenges of clarity, comprehension and affinity of use may not be solved by better graphics, image resolution, animation or the use of color and sound.
And soon, most new vehicles will be equipped with the sensors and software to support advanced driver assistance, but you are going to have to pay incrementally for the next great feature enhancement. If consumers do not like or understand what they are getting today, will they be willing to pay for the better version tomorrow? No doubt it will depend, but as we know, it is human nature to crave the next new tech, even if it makes our lives more complex. And will upgrades that keep the same functionality but simply make everything easier to use be free? In many of our over-engineered product categories, consumers are willing to pay more for simplicity and clarity of operation.
Let’s examine some root causes. What follows are some broader thoughts on how we got here and the approach that we might take to fix things. Step back and consider …
Should we be designing driver assistive systems that state when it is OK to drive hands free and when it is not OK? Or said another way, should we design systems that trade-off control of the vehicle between the machine and the human?
Should we be selling driver assistive systems that at times (when active) allow the driver to operate hands- and feet-free, but may at any moment demand that the driver take full control of the vehicle?
Should we be designing and delivering features that automatically drive the vehicle within limits, without maintaining an awareness of the state of the driver?
Should we be delivering features that have more situations in which they do not work than situations in which they do? (just as OEMs have for years with in-vehicle voice recognition / assistance)
Should we be enabling so much user input and variability in the way that an assistive system operates? (like following distance, speed control, amount over / under the posted speed, and the general ability to activate, suspend or deactivate these features)
Will these partially effective and partially understood features do more harm than good to a brand in the long run? How about to the auto industry as a whole?
Can the machine ever really trust the human? In a Pew Research Center study, 46% of adults admitted to reading or sending text messages while driving.
If the vehicle is “driving itself” and you are able to take your hands off of the steering wheel and your feet off of the pedals, then you should not have to be looking forward at the road all of the time. For that matter, you should not have to glance back at the road within intervals of 20 or, say, 37 seconds. These are the awkward and unnatural user requirements being introduced as the systems progress towards Level 4 and Level 5. The problem is that new car buyers (the most important and valuable buying group to global OEMs) are suffering this uncertainty and discomfort along the way. Even the actor in the latest Nissan commercial looks anxious as he goes into “Chill mode” and drives with his hands hovering slightly off the wheel and his foot off the pedal. I apologize for generalizing, but the engineers who drive our tech-informed culture have spoken. These same folks make statements like, “Users need to adopt and learn to use our systems. They need to better understand how the vehicle behaves when the systems are active.” Should they really?
On Comprehension
Almost half of the consumers who drive a vehicle equipped with driver assistance features do not clearly understand how they work and/or do not perceive them as valuable. Or as a 2019 TechCrunch headline put it, “Study finds drivers are clueless about what driver assistance systems can (and can’t) do.” Like many advanced features in vehicles today, drivers have notions and expectations of what a feature does, but those are rarely completely accurate. And this has proven to be the case even for many who receive real-time instruction from the system. It is important to stress “complete accuracy” because partial understanding can sometimes be worse than no understanding; it can lead to wrong assumptions and can have grave consequences.
What buttons do I push to initialize the system?
What is the sequence of buttons to push?
How do I adjust the speed, the ability to lock onto / avoid the car in front of me?
Doesn’t it automatically set to a safe distance from the car in front?
How do I set a max speed or set a max over or under the posted speed?
Is it working? Why did my car just do that? (You get the gist)
Frequency of use matters for learning and comprehension. These features have been designed mostly for highway driving because those conditions are easier to master, even though most of our time is spent driving on neighborhood streets or in cities and towns. For the most part, consumers need a mindset of “trial and error” and a willingness to experiment with these features to really understand what they do and the nuances of how they do it. To date, only Tesla owners seem willing to conceive of every potential “use case or edge case” of a feature and then simulate, record and explain the system’s behaviors and the thresholds of every sensor and maneuver. The quantity and detail of Tesla feature behavior videos online is exhaustive, easily found and highly educational. Coincidentally, Tesla Autopilot also outperforms all other competitive features in measures of “ease of use”.
I propose that these consumer misunderstandings, wrong assumptions and points of confusion with regard to driver assistance are fundamentally a design problem. However, it is a complex problem, one that involves fixing many things: their functional definition and operation, internal organizational structures, feature marketing, their user interfaces and contexts of use, and possibly more. Industry competition, engineering-driven cultures, population safety and human mental models seem to be at odds.
That’s Mr. Co-Pilot to You
From Adaptive Cruise to Auto Cruise to Lane Keep Assist to Active Navigation to Lane Centering to Highway Assist to Speed Limit Assist to Traffic Jam Assist to Drive Pilot to Driver Assistant Plus to ProPilot Assist, despite their similar-sounding names, these features do very different things. Some slow the vehicle down, some speed it up, some move it over in a lane, some lock it a set distance from the vehicle in front of it or to the center of the lane, while others drive the vehicle completely at low speeds. Most OEMs are chasing more or less the same features and vehicle behaviors; however, the same features in vehicles from different OEMs behave differently and are operated differently by drivers. Not to mention that the variations seem to be increasing as OEMs and suppliers strive to satisfy the various “levels of automation”, which consumers don’t understand or care about anyway.
We seem to be traveling the path of most resistance toward the point where it is safe to take a nap when any of these features are activated.
And if you don’t understand an individual feature, then you will most likely have even more trouble when your vehicle is acting in accordance with a second or third activated feature related to it. And just to pile on, some of these features have a relationship with or dependency on each other, and even another layer of adjustment when activated. It sounds confusing because it is. And if you can’t figure out how to adjust it to your liking, or maybe just your tolerance, then you turn it off. As a Consumer Reports analyst put it, “The needlessly complex interaction of multiple systems on the same vehicle adds to the confusion.” Ambiguity and unpredictability are the enemies of acceptance. Without acceptance, there will never be trust or desirability.
A lot is at stake because the progression and promise of “autopilot” technology and self-driving systems (and their clear consumer understanding and desire) are, as mentioned, probably the most important features for OEMs to get right. It is not that the OEMs do not know what consumers want or how they perceive or think about these “assistive” features; they do. Unfortunately, the progressive, competitive technological march that they have been on for decades has resulted in these overly complex systems. It is almost impossible to undo the complexity of sensors, code, modules, interfaces, signals and more that got them to this point of functional capability.
It may also be impossible to undo the market value, communication strategies and internal cultures that support these paths. The variables, factors and choices made by each OEM include the types of sensors, the level of awareness each vehicle has, the computational power allocated, their short-, mid- and long-range technological capabilities, and how these translate into features and are then positioned and released. Not to mention the high service costs and the lessening of our own abilities that will result. As Consumer Reports stated, “As driver tasks are removed, a driver is more likely to lose situational awareness and could be more easily distracted.”
On Driver State Monitoring / On Driver Awareness
The advantage of monitoring the driver with cameras and/or sensors is to recognize distraction, drowsiness, lack of attention or even loss of consciousness so that the vehicle can intercede if needed. If the driver is distracted by their mobile phone and a text message and a stop is approaching, then the vehicle should pre-condition or pre-load the brakes to make an evasive maneuver or stop abruptly. Perhaps a vehicle recognizes that the driver is distracted and “ratchets up” the level of notification, including distance, volume, visibility and other multi-sensory cues.
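To make the idea concrete, here is a minimal sketch of that kind of escalation logic. It is purely illustrative: the function name, alert levels and time-to-hazard thresholds are my own assumptions, not any OEM's production logic.

```python
# Hypothetical sketch of notification escalation based on driver state.
# Names, levels and thresholds are illustrative assumptions only.

def escalation_level(driver_distracted: bool,
                     hazard_distance_m: float,
                     closing_speed_mps: float) -> str:
    """Pick an alert level from estimated time-to-hazard and driver state."""
    if closing_speed_mps <= 0:
        return "none"  # not closing on the hazard at all
    time_to_hazard_s = hazard_distance_m / closing_speed_mps
    # A distracted driver gets earlier, stronger cues.
    scale = 2.0 if driver_distracted else 1.0
    if time_to_hazard_s < 1.5 * scale:
        return "precharge_brakes"   # pre-condition the brakes, prepare to stop
    if time_to_hazard_s < 3.0 * scale:
        return "audible_visual"     # louder, more visible multi-sensory warning
    if time_to_hazard_s < 6.0 * scale:
        return "visual"             # subtle visual cue only
    return "none"
```

The point of the sketch is the shape of the design choice: the same hazard produces a stronger, earlier response when the system believes the driver is not paying attention.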
The systems going into new vehicles today represent a significant investment in driver attention monitoring, and they are there mostly to check that the driver occasionally glances “eyes forward” or onto the road. You can drive hands-free, but you need to be awake and looking forward. And with a somewhat larger investment, these systems could tell if the driver is eating, drinking, reading a book, sleeping, grooming or doing one of many other already recognizable and identifiable behaviors.
Vehicle fleets, their managers and their suppliers are leading the way in making drivers more aware and accountable, while at the same time giving management vast amounts of data and visibility. They are employing tactics like …
Identification of risky driving behaviors like mobile phone use, eating, drinking, smoking, not wearing a seat belt, speeding and general inattentiveness
Identification of unfocused driver attention or fatigue or drowsiness
Tracking and quantifying both the duration and the percentage of drive time of risky behavior, thereby better understanding the persistent risk
Providing drivers access to their own driving records, videos (if opted in) and stats for reflection, self-improvement and self-correction
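The tracking-and-quantifying tactic above amounts to a simple calculation over logged behavior intervals. A minimal sketch, assuming a hypothetical event format of (start, end) timestamps rather than any real fleet API:

```python
# Hypothetical sketch: quantify risky-behavior exposure from logged intervals.
# The event format and field names are illustrative assumptions.

def risky_exposure(drive_time_s: float,
                   events: list[tuple[float, float]]) -> dict:
    """Given total drive time in seconds and (start, end) timestamps of
    risky-behavior episodes, return the total risky duration and its
    share of drive time."""
    risky_s = sum(end - start for start, end in events)
    return {
        "risky_seconds": risky_s,
        "risky_share": risky_s / drive_time_s if drive_time_s else 0.0,
    }
```

For example, a 10-minute (600 s) drive with phone use logged from 0–30 s and 100–160 s yields 90 risky seconds, or 15% of drive time: the "persistent risk" figure a fleet manager would track over weeks.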
Also look to the design of micro-mobility vehicles and management services for insights into alternative feature design and new paradigms of interaction. While fleet vehicles provide a greater degree of driver and vehicle monitoring and analytics, micro-mobility vehicles have a faster speed to market, flexibility of implementation, ease of upgradability, and a mindset of experimentation and improvement from generation to generation. Improvements come to these systems in weeks and months rather than years.
Considerations for the Future
I don’t have the right answer, but I do suggest that there is a better way. The sensors are installed, but the dependencies, the functional logic and the required human operations need to be improved. This always starts with a better understanding of humans: what they want and need, how they think about these features, how they expect them to work, how they work around problems and more. If a large number of drivers have learned to use their turn signals to “game the system” and override an active system (LKA) just so that they can appropriately avoid bicyclists on the side of the road, then there is a problem.
I anticipate that the commercial fleet world will lead the way in driver recognition, behavior modification of both humans and machines, and in the definition of intervention protocols. These commercial efforts (which include traditional automotive suppliers and software companies) will also lead the way in defining new paradigms of man-machine interaction. It is safe to assume that a Digital Assistant or Virtual Brand Agent may soon be doing more of the intervention as well as the actual driving. Some additional areas of exploration for the future might include …
The use of audio tones as the primary feedback for state of driver assistance features
A dedicated highway lane for vehicles with “pilot-like” assistance capabilities
One button or control interaction for On / Off, Disengagement / Engagement during a drive session. Do lane and speed controls really need to be separate?
Automatic engagement based on context and patterns (like the Segway eMoped C80 Auto Cruise feature)
An “always on” state and putting much more focus on “collaborative driving”
If my phone is “acting up” or my laptop starts to slow down or my vehicle’s infotainment system freezes, then I perform a “reboot”. It’s interesting that that is still the go-to fix for so many of us. And if you and I have ever worked together, then you know that I like to blame the $#!@ Fairy for causing (mostly digital) breakdowns. Despite knowing the system, and maybe the organizational and financial reasons why things do not work as expected, it is always easier to blame fairies. But not in this case. It may be time for that proverbial Reboot (or said another way, a Creative Leap) in the planning, architecting, enabling and functional presentation of Driver Assistance to consumers. The pace and friction of mass consumer adoption of EVs and AVs may depend on it.
These opinions are mine alone and do not represent any OEM, supplier or any other company in the automotive or mobility industries. Sources of valuable information on these topics that informed these ideas came from J.D. Power, IIHS, the University of Windsor, Human Systems Lab, Euro NCAP, Consumer Reports, NHTSA, Pew Research Center and recent surveys on Reddit.