Originally Posted By: Shannow
https://www.controlglobal.com/articles/2017/why-automation-has-limitations-in-emergency-situations/
Pertinent to the autonomous vehicles threads, but I'll put it here as a standalone.
Quote:
Automation becomes plausible when we know the answer in advance,
just like simulations showed how flight 1549 could have safely landed at an airport.
If I recall, it only "could have" by heading for the airport immediately, with no delay for assessment or for understanding what had happened. With that time delay factored in, all of the simulations 'crashed' before touchdown. So did the human pilots in the simulator.
I also seem to recall it took 10-15 trial runs even to get that right...
Another problem with self-driving automation systems is that they aren't capable of conceptualizing or understanding objects. To them, it's only data and then a guess. As reported, the Uber system alarmed on a plastic bag yet failed to detect a human.
If a cardboard box was in the road and you're driving a truck, you may not care and just run over it. In a car, you may swerve if you can, or put it between your wheels and pass over it. Same with a 2x4, a rock, or a lifejacket.
But a human actually sees those objects and recognizes them for what they are; a vehicle automation system can't do that. It can't tell a dog from a cat, a raccoon from a squirrel, or a box from a bag.
That's a huge gap... and a huge problem.
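The "only data and then a guess" point can be made concrete with a toy sketch. This is purely illustrative and not any real AV stack: the detector sees a vector of class scores, not an object, and a thresholded argmax is all the "understanding" there is. The function name, labels, and scores below are all hypothetical.

```python
# Hypothetical sketch: a perception system reduces an object to a
# vector of class scores and picks the highest one. It has no concept
# of what a "bag" or a "pedestrian" *is* -- just numbers and a guess.

def classify(scores: dict, threshold: float = 0.5) -> str:
    """Return the highest-scoring label, or 'unknown' if no score
    clears the confidence threshold."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= threshold else "unknown"

# A plastic bag and a pedestrian can produce similar, ambiguous score
# vectors; when nothing clears the threshold, the system simply
# doesn't "see" anything actionable.
ambiguous = {"plastic_bag": 0.48, "pedestrian": 0.31, "debris": 0.21}
print(classify(ambiguous))  # no score >= 0.5, so: unknown

clear = {"plastic_bag": 0.05, "pedestrian": 0.91, "debris": 0.04}
print(classify(clear))  # prints: pedestrian
```

A human in the same situation doesn't threshold a score; they recognize the object and reason about what it means, which is exactly the gap being described.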
Even trains are not automated, and they ride on rails...