DeepStack Open Source and Custom Object Detection Releases

Hello everyone. We have some exciting news to share today.

DeepStack is now open source on GitHub: johnolafenwa/DeepStack, The World's Leading Cross Platform AI Engine for Edge Devices (github.com)

This is super exciting for us, and we are looking forward to developing DeepStack in partnership with you all for years to come.

** Custom Object Detection Support **

Beyond the 80 classes of objects DeepStack can detect out of the box, we now support training and deploying custom object detection models for your own objects. This is fully supported across the CPU, GPU, and Jetson versions of DeepStack.

Visit the Custom Models page of the DeepStack v1.0.1 documentation for end-to-end instructions on training and deploying your object detection model with DeepStack.
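
Once deployed, a custom model is served on its own endpoint under /v1/vision/custom/<model-name>, alongside the built-in detection API. As a minimal sketch (assuming a model you named my-model, a test image test.jpg, and the usual mapping of container port 5000 to host port 80), a request could look like:

# Query a custom detection model; "my-model" and test.jpg are placeholders
curl -X POST -F image=@test.jpg http://localhost:80/v1/vision/custom/my-model

The response is JSON listing the detected objects with their labels, confidences, and bounding boxes, just like the built-in detection endpoint.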

** New Documentation Site **

We have merged all the separate doc sites into one: https://docs.deepstack.cc

With this in place, we will be adding support for new languages soon. If you are interested in contributing to this, make a PR to johnolafenwa/deepstack-docs: Documentation for DeepStack (github.com)

For custom object detection support, install the latest version of DeepStack:

deepquestai/deepstack:cpu-2020.12
deepquestai/deepstack:gpu-2020.12
deepquestai/deepstack:jetpack-2020.12
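
As an example, pulling and running the CPU image might look like the following. This is a sketch: the host port and the custom-models folder path are placeholders, and mounting your model folder at /modelstore/detection follows the custom models guide.

# Pull the latest CPU release
docker pull deepquestai/deepstack:cpu-2020.12

# Run with the built-in detection API enabled and a folder of custom models mounted
docker run -e VISION-DETECTION=True \
  -v /path/to/custom-models:/modelstore/detection \
  -p 80:5000 deepquestai/deepstack:cpu-2020.12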

** Looking Forward **

The Windows version of DeepStack, with support for CPU and GPU, will be released before Christmas. The RPI version will likely be deferred to January as we stabilize the Docker versions and the new Windows version.

Our core priorities this December are making the docs more awesome for you all, making the codebase easier for you to contribute to, and making custom object detection training as seamless as possible.

We look forward to your feedback.

Thank you

Hello @john,

You guys are awesome. We really appreciate what you are doing. Is there any way to donate to this project?

I’m blown away with the new stuff you have been releasing every other day. Thank you so much!

Can't wait for the DeepStack Windows version with GPU support. That would be an awesome Christmas gift for people who are using Blue Iris + DeepStack.

I have a homegrown solution on my Jetson and am thinking about moving to DeepStack.

Is the model compiled to TensorRT? Are images made square, or is the model trained on rectangular shapes, i.e., a 13x13 grid vs. a 22x12? Can you run a larger inference size than the training size?

Hello @sagarspatil. Thanks a lot; we really appreciate this. We don't have donations set up at the moment, but if we do, we shall share it. In the meantime, you can always spread the word and let more people know about DeepStack.

Of course @john. Thanks for taking the time to reply.

Hello @brianegge, you can send an image of any size to DeepStack; it is agnostic to image formats and sizes. As for TensorRT, we abstract away everything needed to optimize performance, so you don't need to install anything else: just install DeepStack and run it.
Do give DeepStack a try and let us know how it works for you.
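
For instance, a plain detection request is just the following (a sketch: the image filename is a placeholder, and the usual mapping of container port 5000 to host port 80 is assumed):

# Send an image of any resolution; DeepStack handles sizing internally
curl -X POST -F image=@street.jpg http://localhost:80/v1/vision/detection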
