DeepStack New GPU Version

Hello Everyone,
We are excited to share that the new GPU version of DeepStack is available.
This new version is significantly faster while consuming fewer GPU resources than any previous release.
It also comes free, with no need for activation.

Note that this version works on both AVX and non-AVX systems.

Your feedback is welcome, and we look forward to further developing DeepStack in partnership with you all to serve you better.


Thanks for the updated GPU version.

Unfortunately, x4 and x5 never detect any objects on my setup. I know cpu-x4-beta had a similar issue, and it appears to affect gpu-x4 and x5 as well. Is anybody else seeing the same thing?

My docker command is:

docker run --gpus all --restart=always -e MODE=High -e VISION-DETECTION=True -d -v /mnt/deepstack:/datastore -p 5000:5000 --name deepstack deepquestai/deepstack:gpu-x5-beta


Hello, thanks for reporting this. What type of response do you receive: no detections with a 200 status code, a 400 response, or does the server hang without responding?

I did a little more investigating, and now I feel stupid. gpu-x5-beta works great, it’s just that the confidence tends to be lower than x3 (which in my eyes is great).

I saw “no detections” when testing end-to-end with Home Assistant and this plugin

My problem is that since x3 was more confident, I was filtering out anything with less than a 97% confidence score, so I wasn’t detecting anything. After testing with curl, I noticed that x5 was less confident, so I lowered my threshold and everything works great.
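For anyone hitting the same symptom, the fix is simply to lower the confidence cutoff applied to DeepStack’s response. A minimal sketch of that filtering step (the response shape mirrors DeepStack’s object-detection JSON; the sample values and the 0.80 threshold are illustrative, not recommendations):

```python
# Filter DeepStack object-detection predictions by a confidence threshold.
# The "predictions" shape below mirrors the /v1/vision/detection response.

def filter_detections(response, min_confidence=0.80):
    """Keep only predictions at or above min_confidence."""
    return [
        p for p in response.get("predictions", [])
        if p["confidence"] >= min_confidence
    ]

# Illustrative response, as DeepStack might return it:
sample = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.86,
         "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
        {"label": "dog", "confidence": 0.62,
         "x_min": 150, "y_min": 40, "x_max": 260, "y_max": 180},
    ],
}

kept = filter_detections(sample, min_confidence=0.80)
print([p["label"] for p in kept])  # only the high-confidence "person" survives
```

A threshold that worked for x3 may be too strict for x5, so it is worth re-tuning it after upgrading rather than carrying the old value over.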

Keep up the great work!


That’s really great to know. This feedback is very important, and we will be sure to add a guide on migrating from the old DeepStack to the new DeepStack.

For some reason, with all the GPU versions, the detection endpoint does not start (“Detection endpoint not activated”). The correct environment variables are set on the container. It works fine on the CPU version.

Any idea why this is happening?


I love this and I am very appreciative of the work that has gone into DeepStack. I am, however, having some issues.

I am getting a lot of “invalid image” notifications on deepstack:gpu-x4-beta. I can verify it’s not a timing issue: I can stop the process that sends images to DeepStack, copy an image into my watch folder, and then start the process. The image is sent and DeepStack returns an invalid image when trying to detect a face (about 50% of the time). I also get an error when I use these images to try to register a face. I’m not sure what’s going on with the images; some work, some don’t, and I can’t tell any difference between the ones that process and the ones that fail. All are 1920×1080 @ 96 dpi, coming from my doorbell camera.

I am also getting a lot more false positives on face recognition (when the images are not tagged as invalid). I am running the container with the High mode setting and have trained at least 10 images each for the 2 people I want to recognize. The system, however, picks up random strangers with the same confidence levels it picks up the people I want to detect (75–80%). Any suggestions are greatly appreciated, as I am using this to unlock my front door when my children get home from school in the afternoons.

I am not really concerned about how much of the CPU/GPU the process uses, as it’s the only thing running on a dual Xeon server with a GTX 1080, so I am willing to try pretty much anything to improve the confidence while keeping the read time low enough to be usable for my purposes (.2ms or less is my current read time per image).
I need to keep the read time low, as I am also sending images when motion is detected by my patio door, to tell if my dog is waiting to go out or come in (and then I trigger the sliding door to open).
Thank you


Hello! I want to update, but I don’t understand where I should run that command.
My server is using port 80, so I tried entering http://localhost/v1/vision/deepquestai/deepstack:latest and similar URLs, but that didn’t work.


Hello @rogerquake, what command did you use to run DeepStack?

Hello @JohnH, thanks a lot for these details. Kindly run deepstack:gpu-x5-beta, as it fixes some issues with invalid image responses. Also, deepstack:gpu-x5-beta introduces a number of enhancements to the face APIs. Try this out, and we would be happy to diagnose any problems that occur.

Hello @mrpie, open a terminal and run the command

sudo docker run -e VISION-DETECTION=True -p 80:5000 -v localstorage:/datastore deepquestai/deepstack:cpu-x6-beta

or, if your server has a GPU,

sudo docker run --gpus all -e VISION-DETECTION=True -p 80:5000 -v localstorage:/datastore deepquestai/deepstack:gpu-x5-beta
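For anyone else unsure of the URL: the endpoint path is /v1/vision/detection — the Docker Hub image name is not part of the URL. A small sketch of building that URL, assuming the -p 80:5000 mapping from the commands above (the helper name is hypothetical):

```python
# With -p 80:5000, the container's port 5000 is exposed on the host's port 80,
# so requests go to http://localhost:80/v1/vision/detection.

def detection_url(host="localhost", port=80):
    """Build the object-detection endpoint URL for a DeepStack server."""
    return f"http://{host}:{port}/v1/vision/detection"

print(detection_url())  # http://localhost:80/v1/vision/detection

# Actually sending an image would then look roughly like this (needs the
# third-party `requests` package and a running server, so it is commented out):
#
#   import requests
#   with open("test.jpg", "rb") as f:
#       r = requests.post(detection_url(), files={"image": f})
#   print(r.json())
```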

@john, perfect. Thank you, I will apply that update now.

Ended up getting it working, all good!


@john I am not getting the invalid image errors, but the face recognition is way off, or maybe I misunderstand how it works.
I have only 2 faces registered in my database: my son (5 years old) and my daughter (9).
A 40+ year old woman just walked up to my door and DeepStack decided she was an 81% match for my son (that’s about the result I get when it actually is him: 75–85%).
I have 12 or 13 images of him trained. I am running gpu-x5-beta with mode High. How can I correct this?
Do I need to train more images of the 2 people I want to recognize? If so, how many would be suggested as best practice?
Is the problem that I am using this wrong? Will it always match to someone in the database now?

Thank you again for all the work that goes into this; I look forward to your suggestions.
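One common client-side mitigation (not an official fix, just a sketch; the response shape mirrors DeepStack’s face-recognition JSON, and the 0.90 cutoff is a guess to tune) is to treat low-confidence matches as “unknown” instead of trusting the best match:

```python
# Map DeepStack face-recognition predictions to names, replacing any match
# below a minimum confidence with "unknown" rather than accepting it.
# The prediction shape mirrors the /v1/vision/face/recognize response.

def identify(response, min_confidence=0.90):
    """Return recognised names, replacing weak matches with 'unknown'."""
    names = []
    for p in response.get("predictions", []):
        if p["confidence"] >= min_confidence:
            names.append(p["userid"])
        else:
            names.append("unknown")
    return names

# Illustrative response: one strong match and one weak one.
sample = {
    "success": True,
    "predictions": [
        {"userid": "son",      "confidence": 0.93},
        {"userid": "daughter", "confidence": 0.81},
    ],
}

print(identify(sample))  # ['son', 'unknown']
```

For a door-unlock use case, a strict cutoff like this trades a few missed unlocks for far fewer strangers being matched to a registered face.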


@JohnH Please refer to this comment.

So I seem to have it running, but I’m getting a 403 error. Any ideas?

You’ve started DeepStack in VISION-SCENE mode, but you are posting to the object-detection endpoint. Try starting DeepStack with VISION-DETECTION=True instead.
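As a rough sketch of why the 403 happens: each feature is activated by its own environment variable and served on its own endpoint path, and posting to an endpoint whose feature was never activated is rejected. The mapping below covers only the variables mentioned in this thread and is illustrative, not exhaustive:

```python
# Each DeepStack feature is switched on by an environment variable at
# container start and served at its own endpoint path. Posting to a path
# whose feature was not activated yields an error such as the 403 above.

ENDPOINTS = {
    "VISION-DETECTION": "/v1/vision/detection",
    "VISION-SCENE": "/v1/vision/scene",
    "VISION-FACE": "/v1/vision/face",
}

def endpoint_for(env_var):
    """Return the endpoint path activated by the given environment variable."""
    return ENDPOINTS[env_var]

print(endpoint_for("VISION-DETECTION"))  # /v1/vision/detection
```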

I’m getting the same thing, LOL.

[screenshots attached: the AI output, the GPU usage, and the Docker status]

Well, I let it sit there and after a while it started working on its own, LOL. I don’t understand it, but I’ll go with it. It is working faster, although I had to reduce the image size to get it down to the 100 ms range.