Near-Miss Encounters Are A Troubling Dilemma For AI Autonomous Cars
By Lance Eliot, the AI Trends Insider
For those of you that are sports historians, you might recall these famous remarks by the legendary baseball player turned manager Frank Robinson: “Close doesn’t count in baseball. Close only counts in horseshoes and hand grenades.”
There’s another place that closeness counts, namely when two cars encounter a near-miss.
Fortunately, a near-miss is considered something less severe than an actual collision, though do not falsely assume that a near-miss is a freebie and without any adverse consequences. A near-miss can generate all sorts of calamity and lead to quite serious and harmful outcomes.
Allow me a moment to tell you about a friend of mine that was a near-miss expert, as it were.
Back in the day, a college buddy of mine was famously known for being an especially hasty driver. He drove much too fast on the streets within and surrounding the university campus. He also veered dangerously close to other cars in the lanes next to him. Plus, he relished taking corners at high speed, frequently forcing pedestrians to leap back from the curb to avoid getting clipped.
All in all, anytime you saw his car, you gave it a wide berth, even while it was parked, since he notably would back out of parking spaces without looking over his shoulder. I recall once, while riding in his car, pointing out that his rearview mirror was broken and unusable, to which he calmly stated that there wasn't any need for a working rearview mirror since he never looked behind his car anyway.
Grimace followed by a weary sigh.
Here is the weirdest part of the sordid tale. He never got into a car accident.
Nary a car collision, nor striking anyone or anything. He managed to drive in this crazed and seemingly maniacal way, and yet never struck another vehicle, never hit a lamppost, and never bumped into or scraped a human or any meandering animal. His thoughtless driving persisted throughout all four years and somehow, miraculously, despite this nutty, adverse, and undeniably despicable driving approach, he came out with a perfectly clean driving record.
Yes, that includes the fact that he never received a ticket for any of his daredevil driving endeavors.
We always assumed during those years that a police officer would eventually spot his driving transgressions and issue a citation of one kind or another. Clearly, the official DMV driving code states that you cannot drive menacingly, and thus he was driving on borrowed time in our view. He was inevitably going to get a ticket, perhaps a series of tickets. The hope was that after getting stopped by the cops, he would realize the error of his ways and discontinue his driving exploits.
No such luck.
Here’s what the California DMV vehicle code says about reckless driving: “A person who drives a vehicle upon a highway in willful or wanton disregard for the safety of persons or property is guilty of reckless driving.” Anyone nabbed for reckless driving is subject to prison time and financial penalties.
In terms of luck, my buddy was extraordinarily lucky that he didn’t get caught. Everyone was exceedingly lucky that nobody got hit or injured by his carelessness.
Borrowing again from sports, there are car drivers who ought to be measured by their so-called havoc rating.
A havoc rating is used in sporting competitions such as football and indicates how disruptive a player is toward another player (or one team toward another team). Generally, you want to achieve a high havoc rating in a sports game, since it suggests that you are confounding the competition and forcing them off their best game. In driving, the goal would be the opposite: achieve the lowest possible havoc rating. Drivers with especially high havoc ratings would be dinged for their inappropriate driving behaviors (such as having to pay heightened car insurance premiums or facing other similar penalties).
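To make the notion concrete, here is a minimal sketch of what a driving havoc rating might look like. The event types and scoring weights are entirely my own illustrative assumptions, not any industry or insurance standard:

```python
# Hypothetical havoc rating for a driver: disruptive events are tallied
# with weights and normalized per 1,000 miles driven. The event names
# and weights below are illustrative assumptions, not a real standard.

EVENT_WEIGHTS = {
    "near_miss": 5.0,       # forced another road user to take evasive action
    "hard_cutoff": 3.0,     # cut sharply in front of another vehicle
    "curb_scare": 4.0,      # forced a pedestrian back from the curb
    "blind_reverse": 2.0,   # backed out without checking behind
}

def havoc_rating(events, miles_driven):
    """Weighted disruptive-event score per 1,000 miles (lower is better)."""
    if miles_driven <= 0:
        raise ValueError("miles_driven must be positive")
    total = sum(EVENT_WEIGHTS.get(kind, 0.0) for kind in events)
    return total * 1000.0 / miles_driven

# My college buddy, roughly speaking:
events = ["near_miss"] * 12 + ["hard_cutoff"] * 7 + ["curb_scare"] * 4
print(round(havoc_rating(events, miles_driven=5000), 2))  # 19.4
```

Note that a crash never enters the tally; a driver with a spotless crash record can still rack up a sky-high havoc rating, which is exactly the point.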
One argument that my pal often used was that he ought not to be considered a bad or irresponsible driver, due to the admittedly indisputable fact that he had absolutely no car crashes. Go after those dolts that ram into other cars or that strike down pedestrians, he exhorted. Until the day that he hit someone or something, anyone carping about his driving was seen as obnoxious, overbearing, and wasting his time.
Does that seem like impeccable logic? As long as you don’t crash your car, can you drive in whatever manner you choose? Some valiantly tried to explain to him that his wild driving was a detriment to everyone else, whether he realized it or not.
I suppose you have been endangered by other drivers and felt the stresses that come with having someone else nearly hit your car. You might need to radically maneuver to avoid a seemingly imminent collision. Your heart rate climbs through the roof, as it were, and you sweat profusely until the pending incident magically passes and no one is impacted. The impact was avoided due to your strenuous efforts, and not due to the idiot that veered into your path.
This brings up a fascinating aspect of those drivers that are reckless. In an oddball way, the better driving of other drivers might be said to save those careless drivers from doom.
Imagine something akin to an Xbox or similar game that has one driver madly driving down the street while all other drivers manage to steer clear. Pedestrians, too, refrain from crossing the street or getting anywhere close to the wanton car. Maybe dogs and cats know enough to remain out of the street too. In this made-up simulated world, all others know to avoid the one that is the reckless and feckless driver.
Perhaps that explains how my college buddy survived.
All those times that he edged toward the precipice, other nearby drivers were able to navigate away from him. The honed skills of other drivers saved his hide. From his viewpoint, the machinations of those other drivers were either not apparent to him or he simply shrugged it off. He frequently complained about other drivers as being reactive and nervous Nellies. At no point did it occur to him that it was his own actions that sparked other drivers to act in that manner.
Sadly, some drivers such as my collegiate buddy live in their own world and do not seem to note nor care about others on the highways and byways. If anything, they hold great disdain for other drivers. A bit ironic given that they are the ones that should bear the brunt of the disdain.
In looking at the near-miss topic and taking a somewhat more holistic perspective, one consideration is the potential for a cascading effect and the second-order and third-order outcomes that can arise.
Here’s what that portends.
A driver causes a near-miss. The driver that was nearly hit is forced to make a rapid avoidance maneuver. This maneuver leads to crashing into some other car. On the surface of things, the second driver is presumably at fault, assuming that no one knew or could prove that the first driver started the chain of events. The chain could extend even further at arm's length: the first driver prods or stirs the second driver, the second driver disrupts a third driver, and the third driver strikes a fourth driver's car. The original impetus is remote and nearly invisible to anyone other than a dedicated observer.
Shifting gears, let’s ponder what the future holds for drivers and reckless driving.
As we enter into an era of self-driving cars, your first thought might be that thankfully there will no longer be any reckless driving, presumably being eradicated due to the advent of AI-based driving systems.
Maybe yes, but, surprisingly, maybe no.
Here is the question for today’s hearty discussion: Will AI-based true self-driving cars potentially drive recklessly or will the AI-powered driving be perfectly undertaken and avert any semblance of near misses or similar driving involving almost calamities?
Let’s unpack the matter and see.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/
For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/
To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/
The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/
Self-Driving Cars And Near-Miss Driving
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.
Generally, most of the self-driving cars being tried out on public roadways are subject to two categories of reporting about their driving record: reporting car crashes and reporting disengagements.
A disengagement occurs when a back-up or safety human driver is present and, acting as a second-in-command driver, decides to disengage the AI during a driving task. This can be done for a variety of reasons, including a possible near-miss situation in which the human back-up driver decided to take over the driving controls rather than relying upon or hoping that the AI could handle the predicament.
Okay, so in theory, we will almost certainly be informed whenever a self-driving car gets into a car crash.
That deals with discovering how often self-driving cars are striking others, though let's be careful and clear that it might or might not be the fault of the AI driving system. If a nearby car driven entirely by a human opts to ram into the rear end of a self-driving car, and assuming the AI was not at fault, the fact that a car crash occurred involving a self-driving car should not be improperly interpreted as the result of an (essentially) "irresponsible" AI driving system per se.
In terms of reporting about disengagements, we won't necessarily be able to ascribe all of those disengagements to the AI being a bad driver. The back-up human driver might have jumped the gun, deciding to relieve the AI of driving when perhaps the AI could have coped with the situation. Also, the disengagement might occur because something about the car itself has gone awry, such as a tire blowing out, or the vehicle making clanging noises that worry the back-up driver that perhaps the tailpipe has come loose.
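Since (as noted later in this piece) there is no standardized format for disengagement reporting, separating near-miss takeovers from mechanical or merely precautionary ones requires the fleet to categorize each event itself. Here is a minimal sketch of what such a record and tally could look like; the reason categories and field names are my own hypothetical assumptions:

```python
# A hypothetical disengagement record, illustrating how a fleet could
# separate near-miss-related takeovers from mechanical or precautionary
# ones. The reason categories are my own assumptions; there is no
# standardized industry schema for disengagement reporting.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Disengagement:
    timestamp: float        # seconds since trip start
    reason: str             # e.g., "near_miss", "mechanical", "precautionary"
    initiated_by: str       # "safety_driver" or "system"

def summarize(disengagements):
    """Tally disengagements by reason so near-miss takeovers stand out."""
    return Counter(d.reason for d in disengagements)

log = [
    Disengagement(12.5, "precautionary", "safety_driver"),
    Disengagement(47.0, "near_miss", "safety_driver"),
    Disengagement(90.2, "mechanical", "system"),
    Disengagement(130.8, "near_miss", "safety_driver"),
]
print(summarize(log))
```

Even with such a schema, the labeling is judgment-laden: the safety driver who "jumped the gun" will likely record a near-miss that the AI might have handled on its own.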
In short, there is no particular requirement or indication that we are likely to receive or be made apparent about the near-misses of the self-driving cars and the AI driving systems.
A car crash is obviously not a near-miss, so the counts of car crashes don't help to identify how many near-misses there have been (I suppose, tongue in cheek, you could call car crashes "didn't-misses" or "anti-misses").
Disengagements might contain some proportion of near-misses, though ferreting out those stats is bound to be somewhat problematic (partially since there is no standardized and universally accepted stipulation of how to report on disengagements, an industry-wide concern that I've expressed in my column postings).
Unnervingly, the AI driving systems could pretty much drive just like my college buddy. As long as the self-driving cars managed to avoid getting into a car crash and also once there are no longer back-up or human safety drivers present, an AI driving system could routinely be a menace to the rest of the driving world and get away with it, presumably.
No one would be the wiser.
Of course, this implies that if indeed the AI driving systems were driving poorly, the rest of us, those nearby human drivers, were helping to prop up the AI by maneuvering to avoid those wayward or reckless self-driving cars, just as perhaps other drivers were doing for my college pal.
Right now, since there are so few self-driving cars on the roadways, especially ones that have no back-up driver present, it would seem somewhat unlikely that a self-driving car could go scot-free as a bad driver. You would expect that other threatened drivers would readily post on social media the transgressions of a self-driving car that veered toward them or otherwise seemed menacing. We all are prone to holding our tongue about other bad human drivers, but that restraint would not necessarily extend to AI driving systems and self-driving car antics.
On the other hand, an argument could be made that many drivers are avoiding getting close to self-driving cars, precisely because they are unsure of what the AI might do as a driver. In that manner, you could claim that self-driving cars are being given undue latitude. A self-driving car might be able to drive in a weaving manner without any nearby human drivers being directly endangered, simply because those human drivers were already keeping their distance from the self-driving car to begin with.
Though that seems like a compelling viewpoint, some human drivers are actively going out of their way to challenge self-driving cars when they see those machines on the roadway. I've referred to these as roadway bullies, drivers who love going aggressively at self-driving cars to see what the AI will do. Furthermore, since most of the AI driving systems are programmed to be relatively civil and timid as drivers, some human drivers have realized this and gladly take advantage, cutting in front of self-driving cars and gleefully pulling driving tricks on the more docile AI.
Consider another angle on the topic.
It could be that self-driving cars are performing near-miss driving activities and yet the automaker or self-driving tech firm or fleet owner is not monitoring for such driving behaviors.
This seems unfathomable.
We all would naturally assume that those responsible for the self-driving cars being on the roadways would abundantly be checking for near-miss occurrences.
One perspective is that there's no need to do so. Just as my college buddy retorted that as long as there weren't any crashes there was nothing to answer for, a firm might see no particular basis for monitoring that kind of activity. There is plenty to do already, involving keeping self-driving cars in a drivable state, doing bug fixes, adding new features, and so on. The resources needed to examine the roadway trials and inspect for near-misses could readily be placed lower on the list of priorities. Given that resources for self-driving car efforts are already stretched thin, dealing with near-misses could be considered an edge or corner case issue that can be dealt with further down the line.
Some AI developers would also challenge what it means to incur a near-miss.
On any given day, when humans are driving on, say, the freeways, they come within inches of each other while moving along at 65 miles per hour. You could argue that all of that driving is somewhat reckless. All it takes is for a driver to veer over a little and end up scraping against another car. There is a case to be made that we routinely are reckless in that overarching way.
How do you draw the line of where reckless driving starts and ends?
You could put together algorithms that try to make such calculations. The tough thing is winnowing the wheat from the chaff. In theory, there are so many possible near-misses in any normal driving journey that you either would need to conclude they are all near-misses or somehow find the needle in the haystack that we would intuitively reckon is a valid near-miss.
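The winnowing problem can be illustrated with a minimal sketch. Suppose a car logs its minimum separation distance to the nearest vehicle once per second; the trace values and thresholds below are illustrative assumptions, not calibrated figures:

```python
# A sketch of the winnowing problem: flag "near-miss" moments from a
# trace of minimum separation distances (in feet) sampled once per
# second. At a loose threshold nearly everything on a busy freeway
# qualifies; at a tight threshold only the genuine scare remains.

def flag_near_misses(separations_ft, threshold_ft):
    """Return the sample indices where separation dipped below threshold."""
    return [i for i, d in enumerate(separations_ft) if d < threshold_ft]

# Typical freeway trace: lots of close-but-routine spacing, one true scare.
trace = [9.0, 7.5, 8.2, 6.9, 7.8, 1.2, 8.5, 7.1]

print(flag_near_misses(trace, threshold_ft=8.0))  # loose: [1, 3, 4, 5, 7]
print(flag_near_misses(trace, threshold_ft=3.0))  # tight: [5]
```

With the loose threshold, most of an ordinary freeway journey gets flagged; with the tight one, only the single moment we would intuitively call a near-miss survives. Picking the threshold is precisely the wheat-versus-chaff judgment the text describes.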
Another consideration is that inches alone are not a sufficient means to scope a near-miss. For example, a few years ago I was driving on a highway when suddenly, wholly unexpectedly, a car from the opposing lane came into my lane, and we were shockingly going head-to-head. I had only a few seconds to decide what action to take. Though we never ended up colliding (the other driver veered back into their lane), you could call it a near-miss based on the time-to-decide and time-to-act urgency involved.
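One way to capture that urgency is to combine distance with closing speed into a time-to-collision figure, a commonly used surrogate safety measure. The 1.5-second cutoff below is an illustrative assumption of mine, not a regulatory number:

```python
# Time-to-collision (TTC) as a near-miss gauge: distance alone misses
# the urgency, so divide the gap by the closing speed. The 1.5-second
# threshold is an illustrative assumption, not a regulatory figure.

def time_to_collision(gap_ft, closing_speed_fps):
    """Seconds until impact if neither party changes course; inf if opening."""
    if closing_speed_fps <= 0:
        return float("inf")   # vehicles are not converging
    return gap_ft / closing_speed_fps

def is_near_miss(gap_ft, closing_speed_fps, ttc_threshold_s=1.5):
    """Flag an encounter whose TTC leaves almost no time to decide and act."""
    return time_to_collision(gap_ft, closing_speed_fps) < ttc_threshold_s

# Head-on at roughly 65 mph each: combined closing speed ~190 ft/s.
ttc = time_to_collision(gap_ft=300.0, closing_speed_fps=190.0)
print(round(ttc, 2))  # roughly a second and a half to decide and act
```

Under this lens, a wide gap can still be a near-miss if the closing speed is high (as in my head-on episode), while inches of separation at matched freeway speeds may not be, since the closing speed is essentially zero and the TTC is effectively infinite.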
One approach voiced about determining near-miss circumstances is to consider the reactions of other drivers.
Remember that earlier I mentioned that perhaps other drivers were being protective of my buddy by not allowing him to hit them? Well, the sensor data that is collected by the self-driving car cameras, LIDAR, radar, ultrasonic devices, and so on, could be inspected mathematically and computationally to ascertain how often other vehicles are seemingly changing course while near the self-driving car. It could be that those other drivers are doing so because they are trying to avoid getting bashed by the AI and the self-driving car.
In that case, you don't need to gauge the self-driving car's driving in and of itself, and can instead focus on the reactions of other drivers and their driving actions.
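The reaction-counting idea could be sketched as follows: given per-second heading samples for vehicles the sensor suite has tracked, count how many swerved abruptly while close to the self-driving car. The radius, the heading-change threshold, and the track format are all hypothetical assumptions for illustration:

```python
# A sketch of reaction-based near-miss detection: instead of judging the
# self-driving car's own path, count how often nearby tracked vehicles
# abruptly change heading while within some radius of it. The radius and
# heading-jump threshold are illustrative assumptions.

def abrupt_reactions(tracks, radius_ft=50.0, heading_jump_deg=15.0):
    """Count tracked vehicles that swerved while close to the ego vehicle.

    tracks: list of (distance_ft, headings_deg) tuples, where headings_deg
    is a per-second heading sample for one nearby vehicle.
    """
    count = 0
    for distance_ft, headings_deg in tracks:
        if distance_ft > radius_ft:
            continue  # too far away to attribute the reaction to us
        jumps = [abs(b - a) for a, b in zip(headings_deg, headings_deg[1:])]
        if jumps and max(jumps) >= heading_jump_deg:
            count += 1
    return count

tracks = [
    (30.0, [90, 90, 72, 71]),   # nearby car swerved 18 degrees: a reaction
    (40.0, [180, 181, 180]),    # nearby car held steady
    (120.0, [45, 10, 12]),      # swerved, but too far away to attribute
]
print(abrupt_reactions(tracks))  # 1
```

As the next paragraph notes, such counts are confounded: a swerve near the self-driving car does not prove the AI caused it.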
This again though is not entirely satisfying since there are other reasons that those human drivers might be doing this kind of action, including (as already indicated) a general apprehension about being near a self-driving car. You might also find of interest that some drivers stay clear of a self-driving car out of a semblance of respect, just as you might if there was a car with a dignitary riding in it and you opted to steer away from the vehicle (not because of a fear of getting hit, but instead due to respecting the vehicle or its rider).
For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/
On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/
I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/
Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/
We take for granted that today’s human drivers can perform near-misses and there is little that can be done about it. Unless a police officer happens to witness the incident, the matter will be nothing more than a fleeting act, as though a pebble bouncing off the surface of a flowing stream and never more to be given a moment’s thought (or, worse still, erupt into a moment of road rage and extremely adverse results).
How many near-misses arise during a year of driving and are converted (one might say) into the approximate 6.3 million car crashes in the United States, along with the corresponding 40,000 related human deaths and the 2.3 million humans injured?
One important point about a willingness to focus on near-misses is that even Level 2 and Level 3 semi-autonomous cars could be augmented with detection that might better alert human drivers and reduce the number of near-misses, which in turn might reduce the number of actual crashes. Thus, near-misses can be important for not just true self-driving cars but also pertain significantly to lesser capable cars too.
Yet another fascinating aspect is that since self-driving cars are chock-full of sensors, and assuming they record what they see and detect (I’ve coined this the “roving eye” of self-driving cars, which has distinct advantages and also potentially sour consequences), they might be able to aid in the detective work underlying a cascading near-miss chain of events. Essentially, a traffic situation including numerous self-driving cars might have sufficient recorded data in the aggregate to better pinpoint who started the chain and how it unfolded.
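The detective work just described amounts to merging the timestamped observations of several cars and ordering them in time, so the earliest evasive event points at the vehicle that started the chain. A minimal sketch, with a log format that is entirely my own hypothetical devising:

```python
# A sketch of aggregating "roving eye" logs: merge timestamped evasive-
# maneuver events recorded by several self-driving cars and sort them by
# time, so the earliest event points at the vehicle that started the
# cascading chain. The log format is hypothetical.

def reconstruct_chain(fleet_logs):
    """Merge per-car event logs and return events ordered by timestamp."""
    merged = [event for log in fleet_logs for event in log]
    return sorted(merged, key=lambda e: e["t"])

car_a_log = [{"t": 10.4, "observed": "sedan veered across lanes"}]
car_b_log = [{"t": 11.1, "observed": "SUV swerved to avoid sedan"}]
car_c_log = [{"t": 11.9, "observed": "SUV clipped parked truck"}]

chain = reconstruct_chain([car_b_log, car_c_log, car_a_log])
print(chain[0]["observed"])  # sedan veered across lanes
```

In practice the clocks of different vehicles would need synchronizing and the observations would need to be matched to the same physical actors, but the aggregation principle is as simple as the sort above.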
Doing more about near-misses seems indubitably prudent for all concerned, and one supposes that if AI ever becomes sentient, such an AI would assuredly ask why we let those near-misses happen without repercussion or resolution. Let’s hope we have an answer by then and will no longer be in the game of near-miss abundance that today is only sub-optimally being caught and dealt with.
One last thought to ponder. Since different AI driving systems will be driving in differing styles and manners, we might end up with some AI-based self-driving cars that are prone toward doing near-misses, and meanwhile, other brands that contend with those driving antics and fortunately aid in averting mishaps.
You can just hear the complaints already, of one brand of AI talking shop and grousing about the other AI that is clueless about driving and ought to have its head examined.
That’s quite a future.
Copyright 2021 Dr. Lance Eliot. This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]