First driverless fatality

MrAl

Joined Jun 17, 2014
11,474
They are not less responsible drivers. The semi-autonomous Tesla pseudo-autopilot is making them less capable drivers. It is simply impossible to change the attention behavior of human beings as a group with warnings in a manual when they see the system work in their hands.

The equivalent driving skill of the Tesla car is really close to that of a kid with a learner's permit (I don't think the car could pass an on-the-road driver's test) with you in the passenger seat, ready to take over by slamming the brakes and/or yanking the wheel at the last moment. The responsibility is totally in the driver's hands, but it would IMO be foolish to hand the keys of a sports car to a kid who can barely handle basic traffic conditions and then let them drive at 80+ MPH in typical traffic while they are still learning to drive.

Hi,

You have a good idea there. They should have to pass a driver's test without assistance.
I just have to wonder if they have something like that, and if so, what it is exactly.
I can see that they did not think of everything, so there will be accidents that happen because of it.
The claim to fame is that "eventually" the driverless cars will be safer than human drivers, so we just have to wait for the casualties to come in. Maybe someday they will have it right.

It's going to take a wealthy someone's son or daughter getting killed by an autopilot before anyone makes the standard pre-road test much, much tougher.
 

jpanhalt

Joined Jan 18, 2008
11,087
Cite just one piece of evidence that the competence of any of the Tesla drivers was reduced compared to what it would have been without driver assist. Truth is, it was not a matter of driving competence, it was a matter of taking responsibility.

John
 

nsaspook

Joined Aug 27, 2009
13,265
Cite just one piece of evidence that the competence of any of the Tesla drivers was reduced compared to what it would have been without driver assist. Truth is, it was not a matter of driving competence, it was a matter of taking responsibility.

John
Can't you see that responsibility is not the issue here? I don't think that Tesla, as a company or a car, is responsible.

I would cite the case of the killed Tesla driver as possible (still under investigation) evidence of reduced driving competence. I've had hundreds of trucks pull the crossover move into a traffic gap while I was driving. My usual instinct is to slow down as soon as the cab turns across the lanes, but so far I've seen nothing here that matches a typical alert driver's response. He might have been just a bad driver, but his Tesla videos show a trust in the machine's driving ability that removed his instincts from the control loop.
http://jalopnik.com/does-teslas-semi-autonomous-driving-system-suffer-from-1782935594

This is a general man-machine human factors problem with automation when the human is a 'back-up'. If you look at aircraft automation and 'autopilots', manual flying skill and competence have been observed to decrease.

http://flightsafety.org/asw/jul10/asw_jul10_p30-34.pdf?dl=1
 
Last edited:

jpanhalt

Joined Jan 18, 2008
11,087
I think we are not using the same definitions of competence and responsibility in the context of those accidents.

My interpretation is that competence is the ability to do a particular task. In the case of driving a Tesla, each of the drivers had passed a competency test for driving a car. The fact that they had a car with driving assist for a relatively short time compared to how long they had been driving non-assisted cars makes it unlikely that their basic competence was impaired.

Responsibility in this context is the decision whether to apply that competency to the task at hand. That is, a competent driver who takes his hands off the steering wheel or ignores the traffic situation is acting irresponsibly.

WBahn made an analogy to pilots earlier. Pilots are responsible for the safety of the aircraft and its passengers, regardless of whether there is an autopilot. "Pilot error" is the cause of most accidents. Note that this is not competence. A decision by a pilot to attempt something for which he is not qualified is a question of responsibility, i.e., pilot error. Misplaced reliance on an electronic device is also a question of responsibility, not competence, in my view.

With regard to the Tesla, a driver who has allowed his basic driving competence to lapse because of his reliance on driver-assist (an unlikely event in the present cases) is both incompetent and irresponsible for driving while incompetent. The cause of the accident, however, was the driver's irresponsible decision to operate a vehicle while incompetent. The same theory applies to assigning responsibility to drivers who drive while impaired.

As for your link to aircraft, I agree that some pilots allow their skills to decrease because of their reliance on automation. The Asiana crash at SFO is one example. However, I blame the pilots for that, not the autopilots.

John
 

nsaspook

Joined Aug 27, 2009
13,265
I don't think the driver's basic driving competence was impaired, but while the driver bears total responsibility, there is a human factors problem with expecting the driver in the control loop to maintain a proper level of total driving awareness when the machine seems to handle events in the instinctual time frame correctly. People say the Tesla autopilot makes driving less fatiguing, but why is that? Maybe it's because driving is a complex task that's actually quite hard to be skilled at. For beginners and experts the entire driving task has the same level of difficulty, but as driving competence increases, the amount of processing that happens below the level of total awareness increases. If we start to eliminate the moment-to-moment background driving tasks with automation, the driver starts to lose the brain and motor functions needed for those tasks and lets his immediate attention wander. We feel less fatigued because we actually did less while driving, having (over)trusted the Tesla autopilot to handle those tasks.

The problem is that our true skill/competence level for handling instinctual time frame tasks has decreased, because the semi-autonomous vehicle "Handoff Problem" slows the return to total driving awareness and lengthens reaction times. This is a human factors problem for level 2 systems like the Tesla, not a direct hardware/software engineering issue.

https://www.researchgate.net/profil...ed_Driving/links/56b46a7208ae1f8aa4546201.pdf

Trust In L2 and L3 Automated Vehicles

Automation across any domain has the potential to present difficulties when users' reliance upon the automation is not properly calibrated to the performance. The relationship between trust and automation can lead to several different outcomes, as defined by Lee and See. Calibrated trust describes a system in which the user's trust matches the automation capabilities. Calibrated trust supports appropriate application of the automation. Overtrust describes a system in which the user's trust in the automation exceeds the actual capabilities. Overtrust can lead to misuse of the automated system, where the driver applies the automation to a roadway environment that is outside the automation operational scenarios. Distrust describes a scenario in which the user believes that the automation performance is less than it actually is. Distrust can lead to disuse of the automation, thus removing the possible benefits of the automation.

Although trust in automation has been studied across a number of domains, the applicability of these findings to the driving task is questionable due to fundamental differences between the operational characteristics of the different automated systems. Perhaps the best analogy may be drawn from aviation. Modern aircraft have multiple highly automated systems onboard. Modern autopilot systems have the ability to automate almost all phases of flight. This allows the pilot to transfer to a monitoring role. However, the analogy begins to fail when considering the fact that airspace is somewhat controlled, routes are mapped, and the potential for other airspace users to perform unexpected threatening maneuvers in very close ranges (i.e., less than 2 seconds) is not typically present. Therefore, while the lessons learned from other automation domains should be considered when approaching L2/L3 vehicle automation, they may not necessarily be directly applicable. More research about drivers' trust in L2/L3 automation can be beneficial to OVI system designers and stakeholders. Ultimately, the issue of drivers' trust in L2 and L3 automated vehicles is critical to how these technologies are both used and misused in the real world.
 


Last edited:

MrAl

Joined Jun 17, 2014
11,474
Hello again,

Very interesting points.

In short, monitor the driver as well as the road.

I have to second the point about human tendencies. That is, once the human gets even a little accustomed to the autopilot driving the car with success after success after success, they will tend to ease off a little, attention-wise.
At first I can see them being very, very cautious because of the newness of it. But after several weeks go by they will tend to forget what is written in the manual and forget that the system can and will make mistakes. They see it doing so well that they have a tendency to slack off, even if only for a brief moment or two. I think it would be very natural for this to happen.
My question then would be, even though I don't want to argue this point: who is responsible for an accident where the system clearly failed? Given the human nature of the driver as cited above, that should have been taken into account long before any version was released to the general public. The system, being as sophisticated as it already is, should have been able to keep track of the driver's attention patterns in order to take some action such as a warning or a slow-down, or at least something that would tell the driver they are slacking off.
People also fall asleep at the wheel even in a normal car. How many more will fall asleep on a long drive home at night and cause a problem? If the human is supposed to keep track of what is going on at all times, then that status MUST be checked in real time. Without that kind of mechanism in place, we might actually see more accidents rather than fewer. I also think that someone who could design a car this complex, one that can actually drive itself on busy roads most of the time without a problem, should easily be able to monitor the DRIVER too.
 

nsaspook

Joined Aug 27, 2009
13,265
The Tesla system does monitor the driver in a crude fashion. There seems to be a feedback loop, tied to the amount of automation processing power (confidence interval) needed to maintain a safe zone, that requests that the driver hold the steering wheel (detected with a very precise torque sensor) after a variable period of time. After several alarms the system will start to disengage Autopilot and slow the car, hopefully in a safe manner.
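As a rough illustration of that escalation pattern, here is a minimal sketch in Python. This is not Tesla's actual code; the function names, thresholds, and timings are all invented for the example, and the real system obviously runs against the car's own sensors and alert hardware.

```python
# Minimal sketch of an escalating "hold the wheel" check, as described above.
# NOT Tesla's code: names, thresholds, and timings are hypothetical.
import time

def hands_on_wheel(torque_nm, threshold_nm=0.1):
    """Treat a small steering torque above the threshold as 'hands detected'."""
    return abs(torque_nm) > threshold_nm

def nag_interval(confidence):
    """Lower automation confidence -> ask for hands more often (seconds)."""
    return 15.0 if confidence > 0.8 else 5.0

def monitor_driver(read_torque, read_confidence, warn, disengage, max_warnings=3):
    """Escalate warnings while no wheel torque is sensed; finally disengage."""
    warnings = 0
    while warnings < max_warnings:
        time.sleep(nag_interval(read_confidence()))
        if hands_on_wheel(read_torque()):
            warnings = 0              # driver responded, reset the counter
        else:
            warnings += 1
            warn(warnings)            # e.g. visual alert, then audible alarm
    disengage()                       # give up: disengage and slow the car
```

Note that the only driver signal in a loop like this is wheel torque, which is exactly why it is so easy to fool.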
With humans being human, we quickly discovered an easy way to defeat this.


A small hanging counterweight might also work, or a cup holder on the side of the wheel for your beer.
 
Last edited:

MrAl

Joined Jun 17, 2014
11,474
Hi,

Necessity is the mother of invention.
That's typical :)

Well, on the most positive side, maybe this will eventually eliminate drunken driving, because the driver won't be driving anymore. That's the most positive thing I can think of at the moment, because bars have lost business as drunk-driving laws have gotten tougher over the years, arguably with good intent.
If that becomes so, anyone can go to a bar and get slammed and then 'drive' home safely. That would be a big boon for the liquor industry and bars and inns and restaurants alike.

Hopefully this automatic system gets better before too many people die as a result of some dumb oversight of a critical operational mode.
 

nsaspook

Joined Aug 27, 2009
13,265
Hopefully this automatic system gets better before too many people die as a result of some dumb oversight of a critical operational mode.
I don't believe it was overlooked. If I could find the research, then surely the Tesla engineering group could. IMO a deliberate decision was made. Elon Musk is far too intelligent not to have calculated the effects of releasing a level 2 automation system. It's my personal belief that, in strictly scientific terms of gains in overall net safety, he was right, but I also think that he, like many engineering types, underestimates the ability of people to act like idiots when given the opportunity.

Once again, humans are the problem.
 

MrAl

Joined Jun 17, 2014
11,474
I don't believe it was overlooked. If I could find the research, then surely the Tesla engineering group could. IMO a deliberate decision was made. Elon Musk is far too intelligent not to have calculated the effects of releasing a level 2 automation system. It's my personal belief that, in strictly scientific terms of gains in overall net safety, he was right, but I also think that he, like many engineering types, underestimates the ability of people to act like idiots when given the opportunity.

Once again, humans are the problem.
Hi,

Well, I agree that the designer was intelligent, no doubt there. But then we could argue that human error was NOT foreseen, at least to the degree required to ensure safe operation. Experience, as well as intelligence, plays a big part in design, especially safe design.
OK, it may not strictly speaking be 'error', but it's still something the human did that was not foreseen.
Now, I'm not saying this is easy to predict either, but if these systems are to take over our roadways some day, they are simply going to have to get better at making decisions.

I am almost completely sure you will agree that some changes will come as a result of these early accidents. That means we can't call the system fail-safe yet, because a system that is already fail-safe doesn't need changes :)

If it were me, the first thing I would look at is why the car did not stop after it hit several items. The second would be how to get the image recognition system to detect objects in the foreground that match the color and shade of the background...something I've been careful about in my own driving for years now. If you've ever seen a gray car blend into a road surface that is also gray in the right light, then you know what I mean.
Unfortunately, I also don't think we can tolerate lasers pointing in all directions, but maybe radar or something like that. It's going to have to be very sensitive.

I have to say that at first I was totally against this kind of thing, but now I can see that if they can be made very safe then we stand to gain. Fewer accidents would be really nice to see, especially now that the speed limits are just plain crazy.
Think of how this would affect our car insurance, for example. The big insurance companies would not be able to say that there are more accidents now so they can raise the rates! Yippee :)
 
Last edited:

nsaspook

Joined Aug 27, 2009
13,265
If it were me, the first thing I would look at is why the car did not stop after it hit several items. The second would be how to get the image recognition system to detect objects in the foreground that match the color and shade of the background...something I've been careful about in my own driving for years now. If you've ever seen a gray car blend into a road surface that is also gray in the right light, then you know what I mean.
Unfortunately, I also don't think we can tolerate lasers pointing in all directions, but maybe radar or something like that. It's going to have to be very sensitive.
The system already has radar, and Musk doesn't like lidar systems because of the weather limitations with visible light. The radar system most likely 'saw' the tractor trailer across the road at a distance, but because of false-positive signals from overhead signs and overpasses triggering emergency braking, the software defines a fairly narrow section of space as the car. This radar 'car' space seems not to include much above the car's hoodline.
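To make the trade-off concrete, here is a toy illustration, not Tesla's real sensor-fusion logic: if radar returns are gated by height so that overhead signs and bridges never trigger braking, a high-riding trailer can fall outside the gate too. The class, field names, and numbers below are all hypothetical.

```python
# Toy example of height-gating radar returns, as discussed above.
# NOT Tesla's logic: all names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    label: str
    range_m: float            # distance to the reflector
    bottom_height_m: float    # estimated height of the reflector's lower edge

def is_braking_candidate(ret, gate_top_m=1.2):
    """Keep only returns whose lower edge intrudes into the car's own height band."""
    return ret.bottom_height_m < gate_top_m

returns = [
    RadarReturn("overhead sign", range_m=120.0, bottom_height_m=5.5),  # correctly ignored
    RadarReturn("trailer side",  range_m=80.0,  bottom_height_m=1.4),  # wrongly ignored too
    RadarReturn("car ahead",     range_m=60.0,  bottom_height_m=0.3),  # kept
]

for r in returns:
    print(f"{r.label}: brake candidate = {is_braking_candidate(r)}")
```

Raising the gate shrinks the blind band but brings back phantom braking under signs and overpasses, which is the tension the articles below describe.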
http://9to5google.com/2015/10/16/el...iving-car-doesnt-make-sense-in-a-car-context/

http://jalopnik.com/does-teslas-semi-autonomous-driving-system-suffer-from-1782935594
Tesla’s own literature seems to confirm this blind area, as their Autopark/Autopilot instructions include this:

Please note that the vehicle may not detect certain obstacles, including those that are very narrow (e.g., bikes), lower than the fascia, or hanging from the ceiling.

This one sentence manages to give a pretty good idea of the range that the Tesla system is capable of seeing: a horizontal plane of reality that’s about as thick as the car’s “face” and hovering about six feet above the ground.
...
The two wrecks where the cars impacted truck trailers happened because those trailers existed in the space above where the Tesla was able to detect; the front of the car went under the trailers because they simply didn’t ‘see’ the trailer, which occupied the space above the car’s field of sensory input, causing the cars to smack their own greenhouses into the trailers.
The vision system problem.
Mobileye, the company that makes camera-based computer-vision systems for autonomous driving, issued this statement about the wreck:

“We have read the account of what happened in this case. Today’s collision avoidance technology, or Automatic Emergency Braking (AEB) is defined as rear-end collision avoidance, and is designed specifically for that. This incident involved a laterally crossing vehicle, which current-generation AEB systems are not designed to actuate upon. Mobileye systems will include Lateral Turn Across Path (LTAP) detection capabilities beginning in 2018, and the Euro NCAP safety ratings will include this beginning in 2020.”
 
Last edited:

MrAl

Joined Jun 17, 2014
11,474
The system already has radar, and Musk doesn't like lidar systems because of the weather limitations with visible light. The radar system most likely 'saw' the tractor trailer across the road at a distance, but because of false-positive signals from overhead signs and overpasses triggering emergency braking, the software defines a fairly narrow section of space as the car. This radar 'car' space seems not to include much above the car's hoodline.
http://9to5google.com/2015/10/16/el...iving-car-doesnt-make-sense-in-a-car-context/

http://jalopnik.com/does-teslas-semi-autonomous-driving-system-suffer-from-1782935594


The vision system problem.
Hi,

So the problems are finally starting to come out of the woodwork :)
So it's not a self-driving car, then.
 

MrAl

Joined Jun 17, 2014
11,474
Hello again,

I see the problem is becoming clearer now. They are using terms and phrases that encompass a huge area of driving, while the actual technology only does a little bit of the driving under certain conditions. That might be what is confusing.
This might be caused by marketing or just plain old bad journalism. I am seeing more and more bad journalism these days, not only in this area but in all areas of the news.

1. They can't drive the car everywhere, just some places like on the highway.
2. They only help with certain functions, like staying in the lane and preventing rear-end collisions.
3. ?

So the bottom line is that they are not a free-for-all driving mechanism.

I am thinking now that the better solution would be to upgrade the highway infrastructure to include feedback points for the automobiles' on-board computers. Those could tell the computer more, and the more advanced the infrastructure, the safer that highway would be. It would also reduce the cost to the automaker. I'm not sure how much this would cost each state, but it would help for sure.
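Just to sketch what such a "feedback point" might look like, here is a hypothetical example of a roadside beacon message an on-board computer could consume. The fields, units, and JSON wire format are invented for illustration; no real vehicle-to-infrastructure standard is implied.

```python
# Hypothetical roadside "feedback point" broadcast for an on-board computer.
# Invented fields and format; not based on any real V2I standard.
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class RoadsideBeacon:
    beacon_id: str                    # e.g. a mile-marker identifier
    lane_count: int
    speed_limit_kph: int
    work_zone_ahead_m: Optional[int]  # distance to a work zone, if any
    crossing_traffic_warning: bool    # e.g. an uncontrolled crossover ahead

    def to_wire(self) -> bytes:
        """Serialize for broadcast; a real system would sign and authenticate this."""
        return json.dumps(asdict(self)).encode("utf-8")

msg = RoadsideBeacon("US27-MM312", lane_count=2, speed_limit_kph=105,
                     work_zone_ahead_m=None, crossing_traffic_warning=True)
print(msg.to_wire())
```

Of course, anything broadcast to cars would have to be authenticated, or the beacons themselves become an obvious sabotage target.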
Either that or we should just go back to horse and buggy :)
 
Last edited:

WBahn

Joined Mar 31, 2012
30,052
Sabotage is going to be an increasingly big problem with lots of systems, from cars (whether it be full autopilots or individual subsystems, such as brakes, on "normal" vehicles) to industrial SCADA systems, to residential utility metering to lots of others. These systems are seldom developed with the notion of adversarial players in the mix.
 

MrAl

Joined Jun 17, 2014
11,474
http://www.hackbusters.com/news/sto...to-sabotage-tesla-s-autopilot-system-roadshow

It's pretty easy to hack the human pilot system with a scantily clad attractive female* walking down the side of a busy road.
Hi,

My car is programmed to pull over immediately and take pictures :)

Yes, hacking is a problem. What cracks me up (pun intended) is that the people behind the anti-theft stuff don't seem to know anything about stealing and hacking, so how could they design something that will prevent it? Look at the recent change in credit cards, now with the "chip". The chip was supposed to supply an "extra level of security", but researchers demonstrated that it's actually easier to get free money using a device that can be made with a Raspberry Pi. They don't have to clone a card now; they just have to use the Pi to simulate someone else's "chip", and they can get all the information they need to get all the money they want, up to maybe $50k out of one machine in 15 minutes. Since it's so easy, they can do lots of cards and lots of machines. That's how nutty security is these days...they make it worse than it was before.
 

nsaspook

Joined Aug 27, 2009
13,265
Hi,

My car is programmed to pull over immediately and take pictures :)

Yes, hacking is a problem. What cracks me up (pun intended) is that the people behind the anti-theft stuff don't seem to know anything about stealing and hacking, so how could they design something that will prevent it? Look at the recent change in credit cards, now with the "chip". The chip was supposed to supply an "extra level of security", but researchers demonstrated that it's actually easier to get free money using a device that can be made with a Raspberry Pi. They don't have to clone a card now; they just have to use the Pi to simulate someone else's "chip", and they can get all the information they need to get all the money they want, up to maybe $50k out of one machine in 15 minutes. Since it's so easy, they can do lots of cards and lots of machines. That's how nutty security is these days...they make it worse than it was before.
I have a 360 HD zoom camera for that.:)

In the US it's mainly chip-and-signature (or nothing) instead of always chip-and-PIN for authentication. EMV is mainly vendor protection (it shifts the liability: without EMV, the ATM owner or merchant is liable by default for a fraudulent transaction), and it does little to stop fraudulent transactions themselves.
https://www.wired.com/2015/09/big-security-fix-credit-cards-wont-stop-fraud/
 

nsaspook

Joined Aug 27, 2009
13,265
Sabotage is going to be an increasingly big problem with lots of systems, from cars (whether it be full autopilots or individual subsystems, such as brakes, on "normal" vehicles) to industrial SCADA systems, to residential utility metering to lots of others. These systems are seldom developed with the notion of adversarial players in the mix.
These guys were real pros. With cars becoming driving computers, these types of hacks will only increase.
http://wwmt.com/news/nation-world/surveillance-video-showing-a-case-of-high-tech-grand-theft-auto



Even with all the negatives, I'm still pro auto-drive technology, as it will save lives.
http://www.cnbc.com/2016/08/05/man-...al.html?utm_source=dlvr.it&utm_medium=twitter
 
Last edited: