DeepStack Beta Release

Hello everyone,

I am happy to share that, as we move towards releasing DeepStack as a high-performance open source AI engine for edge devices, we have published two new images to address some of the issues that have been raised recently regarding activation, no-AVX support, and object detection accuracy.

Kindly pull the latest beta images below.


Note that this new release contains a number of core enhancements, including a greatly improved object detection API. No activation is required, and it should run fine on no-AVX machines.
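For anyone testing the beta, here is a minimal sketch of working with the detection endpoint's JSON, assuming the beta keeps the same `/v1/vision/detection` response shape as earlier releases (an image POSTed as form data returns `success` plus a list of `predictions`). The helper name and threshold are just illustrative:

```python
import json

def confident_objects(response_json, min_confidence=0.6):
    """Return (label, bounding box) pairs above a confidence threshold."""
    data = json.loads(response_json)
    if not data.get("success"):
        return []
    return [
        (p["label"], (p["x_min"], p["y_min"], p["x_max"], p["y_max"]))
        for p in data["predictions"]
        if p["confidence"] >= min_confidence
    ]

# Sample response of the shape the API returns for a detection request:
sample = json.dumps({
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.92,
         "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
        {"label": "dog", "confidence": 0.41,
         "x_min": 300, "y_min": 180, "x_max": 380, "y_max": 260},
    ],
})
print(confident_objects(sample))  # keeps only the high-confidence "person"
```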

DeepStack will be open sourced in a couple of weeks.

Thanks everyone for your patience and incredible support. We look forward to continually developing DeepStack in partnership with you all.


Any news on updates to the Raspberry Pi version of DeepStack utilising the Intel NCS?

I have noticed a couple of issues in the reporting of object coordinates in the current version that I’d be keen to see rectified!

Same question here… is it still alpha?

Hello @Jaysbeekay and @vlitkowski, we are planning to release an embedded version of DeepStack as a Docker image for the Nvidia Jetson Nano. Support for a stable version of DeepStack on the Pi is still on the radar; however, given its performance limits, we might instead go all in on the Jetson, which comes out at a better price once the cost of the NCS is added to that of the Pi, while still offering better performance.

Would love to know your thoughts on this.


I just invested in an RPi4 + NCS and I’m very happy with the solution…
Don’t you think that a stable version can be released for that?

Is it possible to get the beta working on an RPi4 + NCS?

Thanks for your work and help

That’s great news!
Or any chance of a Docker environment with NCS support?

Hi @john

I’d love to see more investment into the Pi + NCS combo given:

  1. The Raspberry Pi is such a ubiquitous platform, with a lot of community and industry support
  2. The usage possibilities of the Raspberry Pi are endless given the support for and investment in ARM infrastructure
  3. Whilst the price point is marginally cheaper for a Jetson (at least here in Australia), I (like a lot of tinkerers out there) already have multiple Raspberry Pis lying around, so adding an NCS is a smaller investment than buying new hardware
  4. Current performance of DeepStack on the Pi + NCS combo rivals my Intel i5, with sub-second inference times!
  5. I have tested both TensorFlow Lite (on a Raspberry Pi with a Google Coral) and DeepStack (on a Raspberry Pi with an NCS), and I can say I have personally had better success with the DeepStack combo, from ease of setup through to inference performance

So all up, I would love to see more investment in the Pi version of DeepStack.

This is great, John, thank you. Hopefully custom model support will be available soon; in my case, the older models were a bit better for my CCTV use case. With custom model support, I can use them again.

Thanks very much. I can now run the GPU version on my old server. Unfortunately, it does not seem to respond (or it’s taking forever). Has anyone been able to run the GPU version successfully on a no-AVX system? Also, the detection time is much worse for me now: with the cpu-noavx version I get a detection time of 4–5 seconds, but with the cpu-x3-beta version I get 45–50 seconds. How can this be?

Had the same issue initially; remove the API-KEY env var, if you have it set, before starting.

@vlitkowski @Jaysbeekay, thanks a lot for these very useful insights. We shall release a stable version for the Raspberry Pi soon. Its ecosystem is certainly more mature than the Jetson’s. We will keep everyone updated on progress on this.

Thanks @nuphor, we are working on optimizing the new models to be a lot faster and more accurate than the older ones. The stable version should work much better. Custom model support is coming soon.

Hi @gummybear, the cpu-x3-beta version is about two times slower than the current stable version. However, the speed you are getting is way too slow. What are your system specs?

The CPU is an Intel Q6600 @ 2.4 GHz, and the system has a total of 5 GB of RAM.
I’m running DeepStack in Docker; the OS is Debian 10.
When I run DeepStack on my Windows machine (i7-4770K @ 4.2 GHz, 32 GB RAM), the detection time is around 0.5 seconds.

If I run: docker run --gpus all -e VISION-SCENE=True -v localstorage:/datastore -p 83:5000 deepquestai/deepstack:gpu-noavx
I get: Welcome to DeepStack. Visit localhost to activate DeepStack

If I run: docker run --gpus all -e VISION-SCENE=True -v localstorage:/datastore -p 83:5000 deepquestai/deepstack:gpu-x3-beta

I get: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=10.1, please update your driver to a newer version, or use an earlier cuda container\\\\n\\\"\"": unknown. ERRO[0000] error waiting for container: context canceled

When I run the gpu-x3-beta in docker-compose, it seems to run but I get no response when I try to send an image for detection.

I have no idea what’s going on.

@nuphor, is this the same issue as mine?
I don’t have an API-KEY variable in my docker-compose file.

@john That’s great to hear!

Very good news…
Any ETA?

+1 for a Raspberry Pi version of your newest beta! I get 0.5-second detection times with a Pi 4 4GB plus the stick. However, I’ve installed the new beta for testing on another machine, and its detection accuracy is way, way better than the alpha on the Pi. So if you could release the same beast for the Pi, that would be a complete game changer.

Any ETA for that in mind?


Hi @gummybear, the issue is with your CUDA version. Note that if you don’t have CUDA installed on your system, DeepStack will use its own version of CUDA and you won’t have any issue. However, if you have CUDA installed on your system and it is older than version 10.1, this error will occur.

Kindly share your CUDA version; you can run nvidia-smi to find the version of CUDA on your system.
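If it helps, here is a small illustrative Python helper (the names are my own, not part of DeepStack) that pulls the CUDA version out of the header line nvidia-smi prints and checks it against the cuda>=10.1 requirement from the error above. Note that very old drivers don’t print a CUDA version in the header at all:

```python
import re

def cuda_version(nvidia_smi_output):
    """Extract the CUDA version from nvidia-smi's header, e.g. 'CUDA Version: 10.1'."""
    match = re.search(r"CUDA Version:\s*([\d.]+)", nvidia_smi_output)
    if match is None:
        return None  # header didn't report a CUDA version
    return tuple(int(part) for part in match.group(1).split("."))

def meets_requirement(version, required=(10, 1)):
    """True if the reported CUDA version satisfies cuda>=10.1."""
    return version is not None and version >= required

# Example: paste in the header line from your own nvidia-smi output.
header = "| NVIDIA-SMI 418.87  Driver Version: 418.87  CUDA Version: 10.1 |"
print(cuda_version(header), meets_requirement(cuda_version(header)))  # → (10, 1) True
```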