Why automation has limitations in emergencies

Status
Not open for further replies.
Originally Posted By: Shannow
https://www.controlglobal.com/articles/2017/why-automation-has-limitations-in-emergency-situations/

Pertinent to the autonomous vehicles threads, but I'll put it here as a standalone.


I work with automation (Watson) quite a bit. I think it can eventually supplement jobs and replace redundancy, but ultimately it requires an expert (operator) and a team to make it work well. Basically, an expert in the field remains, working with the technology to enhance the team's knowledge and the tech itself. The out-of-work plant operators can seek work at other plants, or work on enhancing the tech.

747s used to take a crew of three or four to fly. The flight engineer's position was eventually eliminated with the addition of computers, aka automation.

No replacement, just transition.
 
Imagine three lanes of 18 wheelers "Scramming" in a snowstorm and all attempting to pull into one breakdown lane.
 
Originally Posted By: eljefino
Imagine three lanes of 18 wheelers "Scramming" in a snowstorm and all attempting to pull into one breakdown lane.


Yeah, I felt SUPER safe on the interstate during a flash snow and ice storm at night that completely covered the lanes, leaving no easy indication of which lane you were in, while Cletus McPeterbilt and Jim-Bob Kenworth continued to do 70 in a 65.
 
Snow! Snow ain't nothin'. I was on I-77N down by the Wythe/Galax area. Picture a three-lane highway two-thirds up the side of a mountain. Rainy. Then fog. Fog so dense you couldn't see the lines. Sunday afternoon on Memorial Day weekend. I-77 is a major truck route. If you stopped, you risked being rear-ended. It was a long 15-minute, 5 MPH crawl.
 
Originally Posted By: Shannow
https://www.controlglobal.com/articles/2017/why-automation-has-limitations-in-emergency-situations/

Pertinent to the autonomous vehicles threads, but I'll put it here as a standalone.
Quote:
Automation becomes plausible when we know the answer in advance, just like simulations showed how flight 1549 could have safely landed at an airport.
If I recall, it only "could have" by heading for the airport immediately, with no delay for assessment: understanding what happened, sizing up the situation, etc. With that time delay factored in, all of the simulations 'crashed' before touchdown. So did the human pilots in the simulator.

I also seem to recall it took 10-15 trial runs even to get that right....

Another problem with 'self-driving-automation' systems is they aren't capable of conceptualizing or understanding objects. To them, it's only data and then a guess. As reported, the Uber system alarmed on a plastic bag and totally failed on detecting a human.

If a cardboard box was in the road and you're driving a truck, you may not care and run over it. In a car, you may swerve if you can, or put it between your wheels and pass over it. Same with a 2x4, a rock, or a lifejacket. But a human actually sees those objects and recognizes them for what they are; a vehicle automation system can't do that. They can't tell a dog from a cat, from a coon, from a squirrel, from a box, from a bag.

That's a huge gap... and a huge problem.

Even trains are not automated and they ride on rails....
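The "it's only data and then a guess" point above can be sketched in a few lines. This is a toy illustration, not any real perception stack: the labels and raw detector scores are invented, and the only point is that the system ends up picking whichever number is biggest, even when the margin between "plastic bag" and "pedestrian" is nearly a coin flip.

```python
import math

def softmax(scores):
    """Convert raw detector scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented raw scores for a detector looking at a wind-blown object.
labels = ["plastic bag", "pedestrian", "cardboard box"]
scores = [2.1, 1.9, 0.3]  # bag vs. pedestrian is nearly a tie

probs = softmax(scores)
best = labels[probs.index(max(probs))]
# The system never "understands" the object; it just takes the
# highest number, however thin the margin.
print(best, [round(p, 3) for p in probs])
```

Nothing in that output distinguishes a guess from recognition, which is exactly the gap being described.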
 
Originally Posted By: sleddriver
If a cardboard box was in the road and you're driving a truck, you may not care and run over it. In a car, you may swerve if you can, or put it between your wheels and pass over it. Same with a 2x4, a rock, or a lifejacket. But a human actually sees those objects and recognizes them for what they are; a vehicle automation system can't do that. They can't tell a dog from a cat, from a coon, from a squirrel, from a box, from a bag.


Too true...here's what Google's most advanced imaging AI thinks of the world.

Dumbbells have appendages (arms), because those are always part of them... it doesn't know where one starts and the other stops.

[image: what one of Google's neural nets thought dumbbells looked like, complete with arms]


http://www.dailymail.co.uk/sciencetech/a...e-world-it.html
 
Agree Shannow -

AI has its place, but it's no panacea (nor is a human, for that matter).

AI and visual recognition rely on certain critical inputs; often what the system sees cannot be reconciled with what it was programmed to expect. Example ...

Take a jar of 100 house keys from all various manner of sizes/shapes. Now dump them in a pile onto a table.

A human, if familiar with the key, can pick it out of the pile by recognizing even part of it. AI and electronic recognition cannot do that; they need to see certain criteria for shape, size, etc. A human can "adapt" his perception, whereas it's too difficult to program all manner of variation into the AI.

However, AI and electronic visual aids can process things FAR faster than a human. If you took those same 100 keys and ran them down an assembly line, the AI could detect a defect much sooner and with more consistency than the human. If the keys were passing a visual inspection point, the AI could process them at several keys per second, whereas the human would have to take a few seconds per key.



It's a trade-off.
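The jar-of-keys trade-off above can be sketched as a toy: a rigid matcher that needs the whole template fails the moment part of the key is hidden in the pile, while a similarity score over just the visible part still identifies it. The key "silhouettes" here are made-up bit strings, not real machine vision.

```python
# The key we are looking for, as an invented bit-pattern "silhouette".
TEMPLATE = "1101001110101101"

def exact_match(view):
    """Rigid matcher: all or nothing, like criteria-driven recognition."""
    return view == TEMPLATE

def partial_score(view):
    """Fraction of *visible* positions agreeing with the template,
    loosely standing in for a human recognizing part of a key."""
    pairs = [(v, t) for v, t in zip(view, TEMPLATE) if v != "?"]
    return sum(v == t for v, t in pairs) / len(pairs)

full_view = TEMPLATE
occluded = "??????" + TEMPLATE[6:]   # first six positions hidden in the pile

print(exact_match(occluded))    # the rigid matcher gives up
print(partial_score(occluded))  # the visible part still matches perfectly
```

The speed side of the trade-off is the flip side: the rigid check is one string comparison, which is why a machine can run it at several keys per second on an assembly line.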
 
Really good analogy there.

Back last century, one of our university assignments was to develop a programme using pixels and percentages to rank the printing performance of the labelling on a Mylanta (antacid) bottle, at various "angles of attack" and shadows.

Was relatively easy to give the bottle a perfection score.

The computer never ever read the label...nor understood it.
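That kind of label scoring can be sketched in a few lines; this is a made-up miniature of the idea (tiny 0/1 grids standing in for real scanned images, an invented reference pattern), not the original assignment's code. The point survives: the score is just a pixel-match percentage, and at no point does anything read the label.

```python
# Invented reference pattern for the printed label, as a 0/1 pixel grid.
REFERENCE = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
]

def perfection_score(scan):
    """Percentage of pixels in the scan that match the reference."""
    total = matched = 0
    for ref_row, scan_row in zip(REFERENCE, scan):
        for r, s in zip(ref_row, scan_row):
            total += 1
            matched += (r == s)
    return 100.0 * matched / total

smudged = [
    [1, 1, 0, 0],   # one pixel lost to a shadow
    [1, 0, 1, 0],
    [1, 1, 1, 1],
]
print(perfection_score(smudged))  # a high score, yet nothing was "read"
```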

Modern AI will replace GPs in the very near future, as the AI can pick out trends that a single GP can't even associate, and that a thousand of them can't share between themselves. It will be probabilistic diagnosis, and across the whole population probably hugely more accurate.

But it won't keep you alive after a car crash, that's what people do.
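The "probabilistic diagnosis" idea above is essentially Bayes' rule over observed symptoms. Here is a minimal sketch with a tiny invented table of conditions; the priors and likelihoods are made-up numbers for illustration, not medical data.

```python
# Invented base rates for three conditions.
priors = {"flu": 0.10, "cold": 0.30, "allergy": 0.60}

# Invented P(fever | condition) for the sketch.
likelihood_fever = {"flu": 0.90, "cold": 0.20, "allergy": 0.02}

def posterior(priors, likelihood):
    """Bayes' rule: P(condition | symptom) up to normalization."""
    unnorm = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

post = posterior(priors, likelihood_fever)
# For a patient with fever, the low-prior "flu" now dominates:
# the kind of population-level trend the post is talking about.
print({c: round(p, 3) for c, p in post.items()})
```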
 
Remember, a human takes something like 12 years of continuous training/learning to perform well on even the most basic stuff at work. We have AI with relatively raw computational complexity (compared to the human brain), and we only train it on, what, 100k photos to teach it what an object is or is not.

Test pilots died all the time back in WWII, when everyone was trying to build the best aircraft and push the limits. Yet when the arms race was over, we ended up with the great airplanes we have today. I think AI's capability will follow the same path. Like madRiver said, we will not get perfection, and unfortunately AI needs to fail (sometimes resulting in death, as in the Uber and Tesla tragedies) in order to grow.

If a blank computer with AI can be trained to recognize a hot dog (just for laughs, there's an app for that) after around 150k images, it can recognize your house key out of the 100, or read the F'ing manual, or a warning label, or drive a car, or spot cardboard, or a plastic bag, or a pedestrian crossing the street. It just won't do it right the first time, every time. Nor would every human in the world.
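That "won't do it right the first time" arc is visible even in the smallest learner. Below is a toy perceptron on invented data (two made-up features per object, label 1 = "key", 0 = "not a key"): with zero training it misclassifies everything one way, and only after seeing labelled examples does it separate the classes.

```python
def predict(w, b, x):
    """Linear threshold unit: fires 1 if the weighted sum is positive."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            err = label - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Invented training data: (feature1, feature2) -> label.
samples = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
           ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w, b = train(samples)
print(predict(w, b, (0.85, 0.85)))  # classifies a new, unseen example
```

Scale the same loop up by many orders of magnitude in data and model size and you have the 150k-image story: failure first, competence after enough examples.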
 