Thanks for all the feedback and support. We are happy to announce the latest updates to DeepStack.
We have reviewed your feedback and requests and have addressed most of them in the 3.4 release.
Here are the core changes.
Speed and Performance
The speed of the detection and face APIs, including face detection and recognition, has been increased by 2x in the CPU version.
The GPU version has also been improved to better utilize the GPU for high volume requests.
The CPU overuse issue has also been resolved: DeepStack no longer consumes significant CPU when idle.
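Clients need no changes to benefit from these speedups; the APIs are called over HTTP exactly as before. The sketch below builds a request to the detection endpoint using only the standard library. The host, port, and image filename are placeholders, assuming a local DeepStack instance:

```python
# Sketch: calling DeepStack's object detection endpoint over HTTP.
# Host/port and the image path are placeholders for a local instance.
import json
import urllib.request

DEEPSTACK_URL = "http://localhost:80"  # placeholder address

def detection_request(image_bytes: bytes, host: str = DEEPSTACK_URL) -> urllib.request.Request:
    """Build a multipart/form-data POST for the /v1/vision/detection endpoint."""
    boundary = "deepstack-boundary"
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="image.jpg"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{host}/v1/vision/detection",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

if __name__ == "__main__":
    # Send a real image and print the predictions (requires a running server).
    with open("test.jpg", "rb") as f:
        req = detection_request(f.read())
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["predictions"])
```

The face endpoints are called the same way, with only the URL path changing.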
No AVX Support
We have added support for older systems without AVX instructions.
On DeepStack's docker page https://hub.docker.com/r/deepquestai/deepstack/tags
you will find noavx tags that support these older systems. Note, however, that the noavx tags are not as fast as the standard tags.
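Running a noavx build looks like running any other DeepStack tag; only the image tag changes. In the sketch below, the exact tag string is illustrative and should be copied from the docker page linked above:

```shell
# Illustrative only: copy the exact noavx tag string from the docker page above.
IMAGE="deepquestai/deepstack:noavx"

# Standard DeepStack docker invocation, with the detection API enabled:
RUN_CMD="docker run -e VISION-DETECTION=True -p 80:5000 $IMAGE"
echo "$RUN_CMD"   # run this command on the target machine
```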
Universal Custom Model Support
DeepStack now supports deploying custom ONNX, Keras and TensorFlow image recognition models.
With ONNX support, you can train an image recognition model in almost any deep learning framework, including PyTorch, MXNet and CNTK, and deploy it for high-speed inference with DeepStack.
See the custom models page for more info.
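Once deployed, a custom model is queried like the built-in endpoints, with the model's name in the URL path. In the sketch below, the /v1/vision/custom/&lt;model-name&gt; route is an assumption for illustration only; the actual route is documented on the custom models page:

```python
# Sketch: building the URL for a deployed custom recognition model.
# The endpoint path here is an ASSUMPTION for illustration; consult the
# DeepStack custom models page for the actual route. Host is a placeholder.
import urllib.parse

DEEPSTACK_URL = "http://localhost:80"  # placeholder address

def custom_model_url(model_name: str, host: str = DEEPSTACK_URL) -> str:
    """Build the (assumed) URL for a named custom model."""
    return f"{host}/v1/vision/custom/{urllib.parse.quote(model_name)}"
```

The image is then POSTed to this URL exactly as with the built-in detection endpoint.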
Note also that we plan to release support for custom object detection models in the coming weeks, as well as a number of open source tools to facilitate training your own custom models for deployment with DeepStack.
We have also reviewed the pricing of the premium version: for a limited time, premium can be purchased for only $9.
Thank you, as always, for your patience and for helping us improve DeepStack with your requests and feedback.
Support for non-docker platforms, NCS acceleration and embedded devices are highly requested features that we shall address in the coming weeks.
We have so much more coming and we look forward to the great things you will build with DeepStack.