Deepstack windows GPU not running custom models?

So I’m trying to run custom models on the latest Windows GPU release but it does not seem to work.

No API URL for the custom model is posted when DeepStack starts, as it is running with the default “vision-detection true” parameter.

And I only get a 404 Not Found error when I try the expected URL: http://127.0.0.1:8181/v1/vision/custom/cats2
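For context, the URL I’m expecting follows the usual DeepStack pattern: the model file’s name without the .pt extension, appended to /v1/vision/custom/. A quick Python sketch of how I’m building it (host and port are from my setup):

```python
# Sketch of how a DeepStack custom-model endpoint URL is formed:
# the model file cats2.pt should be served at /v1/vision/custom/cats2.
def custom_endpoint(host: str, port: int, model_name: str) -> str:
    """Build the URL for a DeepStack custom-model endpoint."""
    return f"http://{host}:{port}/v1/vision/custom/{model_name}"

# The URL I am requesting (and getting a 404 from):
print(custom_endpoint("127.0.0.1", 8181, "cats2"))
# http://127.0.0.1:8181/v1/vision/custom/cats2
```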

I have been trying this for hours. Is this a known issue?

EDIT: I forgot to mention that I tried one of the pre-trained custom model files linked in the FAQ (openlogo) and got the same result, so it’s not my model file that’s messed up.


@MissMusic I can see from your run command that you forgot to add the path to where your custom models are. Your command should look like the sample below:

deepstack --VISION-DETECTION True --MODELSTORE-DETECTION "C:/path-to-custom-models-folder" --PORT 8182 --MODE HIGH

Is “--VISION-DETECTION True” needed when running --MODELSTORE-DETECTION?

Check the image again: there are two instances of DeepStack running, one with the standard --VISION-DETECTION (no path to a custom model) to show that DeepStack works in the standard configuration, and another window with --MODELSTORE-DETECTION on a different port with the path to the custom model specified.


@MissMusic you don’t need --VISION-DETECTION True to run custom models. To run only custom models, your command will look like the one below.

deepstack --MODELSTORE-DETECTION "C:/path-to-custom-models-folder" --PORT 8182
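Note that the flag takes the folder itself; every .pt file inside that folder becomes its own endpoint, named after the file without the extension. A quick sketch of that mapping (plain Python; the folder layout and port are illustrative):

```python
from pathlib import Path

def custom_endpoints(model_folder: str, host: str = "127.0.0.1", port: int = 8182) -> dict:
    """Map each .pt file in the model store folder to the endpoint
    DeepStack will expose for it (filename without the extension)."""
    return {
        p.stem: f"http://{host}:{port}/v1/vision/custom/{p.stem}"
        for p in sorted(Path(model_folder).glob("*.pt"))
    }
```

So a folder containing cats2.pt and openlogo.pt is served as /v1/vision/custom/cats2 and /v1/vision/custom/openlogo on the port you started DeepStack with.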

Oh, so to the folder only, and not the .pt file!


Yes! Now it works, using the path to the folder, not the .pt file. For some reason it won’t start properly either way in CMD, but it now runs in PowerShell. ¯\_(ツ)_/¯

Thank you!


You are welcome @MissMusic . To avoid others experiencing this confusion, the documentation for this has been updated.

Great!

Btw, I’m having issues running --MODE HIGH with my custom model. DeepStack starts, but all image requests time out. :frowning: Running in normal mode is no problem, and running vision detection on high is also no problem.

  • What is the model type of your custom model?
  • What are the specs of your machine?

I believe my model type was the default for the trainer, but I’m not sure; I honestly don’t remember. I’m training a new model right now, using YOLOv5m with an image size of 640 and default settings, to test.

So I tested with my new model, trained with the command below.

python3 train.py --dataset-path "/mnt/c/temp/deepstack/data" --model yolov5m

I still have the same behavior when running with --MODE HIGH (it starts but won’t process any images). Running the same custom model without the --MODE flag works fine, and the default --VISION-DETECTION also runs in --MODE HIGH with no problems.

The machine runs Windows Server 2019 with an Nvidia P2000 graphics card.

It’s the same computer I’m running Blue Iris on, with GPU decoding on a bunch of cameras gobbling up a lot of GPU memory. As a test, I tried it with nothing else running on the GPU to make sure it wasn’t an out-of-memory issue, and I still got the same result.