Does it matter what they were thinking? Hindsight is always 20/20, and that is why these systems should not be allowed on public roads. Most people hold the false belief that these autonomous programs are AI and therefore consider all possible inputs and choose the best course of action, a view carefully fostered by these companies. It was quite clear in the discussion when the incident first happened: many came to the defence of Uber, stating that the AI must have taken the best course of action and that no one should be liable.
A program, even a self-learning one, can only do what it was programmed to do, and at any point the software can be altered to change its response according to whatever criteria the software engineers deem important.
In this particular case, they deemed it more important to minimize false alarms and preserve the overall smoothness of the ride than to act on actual detections, because despite all the sensors and the supposed "AI", their autonomous software still has trouble differentiating a human from a trash bag, a dog, or some other random obstacle, and assigning a reaction priority accordingly.
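To make the trade-off concrete, here is a deliberately simplified sketch (not Uber's actual code; the function, labels, and confidence numbers are all hypothetical) of how raising a detection confidence threshold to cut false alarms also suppresses reactions to real obstacles:

```python
# Hypothetical illustration of a false-alarm vs. detection trade-off.
# Nothing here reflects any real autonomous-driving codebase.

def plan_reaction(detections, brake_threshold):
    """Brake only if some detection is a pedestrian whose classifier
    confidence clears the threshold; otherwise keep driving smoothly."""
    for label, confidence in detections:
        if label == "pedestrian" and confidence >= brake_threshold:
            return "BRAKE"
    return "IGNORE"

# One sensor frame: a real pedestrian the classifier is only 70% sure
# about, plus a low-confidence object that might be a trash bag.
frame = [("trash_bag", 0.55), ("pedestrian", 0.70)]

# Tuned for ride smoothness (fewer false alarms), the car does nothing:
print(plan_reaction(frame, brake_threshold=0.9))  # IGNORE

# Tuned for safety (more spurious braking), it reacts:
print(plan_reaction(frame, brake_threshold=0.6))  # BRAKE
```

The single threshold stands in for the tuning decision described above: the engineers pick where it sits, and a pedestrian the classifier is unsure about falls on whichever side of it they chose.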
Most drivers do this in milliseconds and instinctively. When we see a bag, we just run over it; it requires no extra thinking or reaction time from us. When we see a human, we don't need to run complicated algorithms to make sure it is actually a human and not a trash bag. It's seamless for us, but not for machines.