This is going to be quite an edge case.
I have an image with a trampoline and a half-transparent net.
First it identified it as a boat. OK, not really worried about that… But when that happens, the detection process does not seem to scan any further within the detected area of the image.
So this is an example from seconds later, where it didn't detect "boat" and instead detected persons behind the net.
I think this is a bug in DeepStack… or maybe a setting? If it finds something in a region, it seems to exclude that region from further identification.
It never seems to show me both "boat" and "person" when the region is clipped like that by a larger object. (I compared with Azure Vision: the same frames show consistent data for people, and it ignores the "boat" and does not identify the trampoline at all.)
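To check whether this is suppression of overlapping boxes (NMS-style behavior) rather than a detection failure, one thing I tried reasoning through is how much of each "person" box falls inside the large false-positive box. Below is a minimal sketch of that check. The sample predictions mimic the shape of DeepStack's detection JSON (`label`, `confidence`, `x_min`/`y_min`/`x_max`/`y_max`); the coordinates here are made up for illustration, and `contained_fraction` is my own hypothetical helper, not part of DeepStack.

```python
def contained_fraction(inner, outer):
    """Fraction of the inner box's area covered by the outer box.
    Boxes are (x_min, y_min, x_max, y_max) tuples."""
    ix1, iy1 = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix2, iy2 = min(inner[2], outer[2]), min(inner[3], outer[3])
    overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    inner_area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return overlap / inner_area if inner_area else 0.0

# Fake predictions in the same shape as a DeepStack detection response,
# with coordinates invented to resemble the trampoline scenario.
predictions = [
    {"label": "boat",   "confidence": 0.61, "x_min": 50,  "y_min": 100, "x_max": 600, "y_max": 450},
    {"label": "person", "confidence": 0.72, "x_min": 200, "y_min": 180, "x_max": 280, "y_max": 400},
]

def box(p):
    return (p["x_min"], p["y_min"], p["x_max"], p["y_max"])

boat = next(p for p in predictions if p["label"] == "boat")
for p in predictions:
    if p["label"] == "person":
        frac = contained_fraction(box(p), box(boat))
        print(f'person box is {frac:.0%} inside the "boat" box')
```

If the person boxes are almost entirely inside the false-positive box, that would be consistent with some class-agnostic overlap suppression; if they only partially overlap, the regions really are being skipped for another reason.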
If somebody knew how this works, they could set something up in the frame that causes the majority of the frame to be obscured by a false positive, preventing people in the frame from being detected.