Cannot get my custom models working

I have tried both the CPU and the GPU image for custom models in Docker, but I can’t get it to work.
Keep getting the "Detecting Endpoint not activated" error from Agent DVR.
From the DeepStack log:
2022/06/01 - 17:42:23 | 403 | 26.6µs | 172.17.0.1 | POST "/v1/vision/detection"

Is there an endpoint other than "/v1/vision/detection" to connect to when it comes to custom models?
Here is my docker run syntax:
docker run --gpus all -v /Documents/modeller:/modelstore/detection -p 82:5000 deepquestai/deepstack:gpu

@Rogerbl Thanks for posting this. When you deploy a custom model with DeepStack, your model’s endpoint should show in the logs as http://localhost:82/v1/vision/custom/<name_of_model_file_without_extension>

E.g. http://localhost:82/v1/vision/custom/catsanddogs
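You can also test the endpoint directly by posting an image to it, for example with curl (the file name test.jpg below is just a placeholder for any image you have on hand):

curl -X POST -F "image=@test.jpg" http://localhost:82/v1/vision/custom/catsanddogs

If the model is loaded correctly, the response should be JSON with a list of predictions rather than a 403/404.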

And make sure the path /Documents/modeller is an absolute path.
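For example, if the modeller folder is inside the Documents folder of your home directory, the command would look something like this (the /home/<user> part is just a placeholder for your own home directory):

docker run --gpus all -v /home/<user>/Documents/modeller:/modelstore/detection -p 82:5000 deepquestai/deepstack:gpu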


I got an error in the Agent DVR log:

00:15:00 DeepStack Alert Filter: DeepStack Alert Filter: Error converting value 404 to type 'CoreLogic.AI.DeepStackBase+ObjectResponse'. Path '', line 1, position 3.
00:15:00 Error: DeepStack Alert Filter: DeepStack Alert Filter: Error converting value 404 to type 'CoreLogic.AI.DeepStackBase+ObjectResponse'. Path '', line 1, position 3.

When I issue this command:
docker run --gpus all -v /modeller/general:/modelstore/detection -p 80:5000 deepquestai/deepstack:gpu

This is from the DeepStack log:
DeepStack: Version 2022.01.01

v1/backup

v1/restore

As you can see, the custom model endpoint does not show up in the log.
Could that be the reason for the failure here?

@Rogerbl Can you try the CPU version to see if it works? There is a chance your graphics card is having issues with the 2022.01.01 GPU version of DeepStack; this is being addressed in this discussion:

GPU 2022.01.1 - no objects detected - Docker - DeepStack Forum
