Announcing DeepStack for Raspberry Pi and All Arm64 Devices + New DeepStack CPU, GPU and Jetson Release

Today, we are excited to announce that the Docker version of DeepStack is now available on the Raspberry Pi and all Arm64 devices, including the Qualcomm DragonBoard 410c, AWS Graviton servers, etc.
This new release of DeepStack does not require any external hardware such as the Intel Movidius Stick.

All DeepStack features are fully supported, including custom models. This release includes fixes for some bugs that had impacted the accuracy of custom models.

To run the Arm64 version of DeepStack on the Raspberry Pi and other edge devices,
simply use the Arm64 image.
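For reference, a typical invocation might look like the following. The `arm64-2021.06.1` tag is the one mentioned in the benchmarks later in this thread; check the linked docs for the current tag:

```shell
# Pull and run the Arm64 build of DeepStack, exposing the API on port 80.
# Tag taken from later posts in this thread; verify against the docs.
docker pull deepquestai/deepstack:arm64-2021.06.1
docker run -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:arm64-2021.06.1
```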


See the docs for more info
Using DeepStack with Arm64 Devices | DeepStack v1.2.1 documentation

While DeepStack will run even on hardware with 1 GB of RAM, we recommend the Raspberry Pi 4 with 8 GB of RAM for good performance.

Note that you need to run the Raspberry Pi in 64-bit mode, as Raspbian is 32-bit by default; you can enable 64-bit mode by following the instructions in the doc linked above.

In addition to the Arm64 version, we are releasing new versions of the CPU, GPU and Jetson editions, with bug fixes that improve the accuracy of custom models.
Use images:
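As a reference, the release tags mentioned elsewhere in this thread are:

```shell
# Release images for this version, per the tags referenced in this thread:
docker pull deepquestai/deepstack:cpu-2021.06.1      # CPU
docker pull deepquestai/deepstack:gpu-2021.06.1      # GPU (NVIDIA)
docker pull deepquestai/deepstack:jetpack-2021.06.1  # NVIDIA Jetson
```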


This is a stable release, and we hope you find it helpful.
Thanks for your patience; please let us know your experiences and feedback.


With the gpu-2021.06.1 image I get the error
/app/server/server: /lib/x86_64-linux-gnu/ version `GLIBC_2.28' not found (required by /app/server/server)
on Ubuntu 20.04.02 LTS.

The gpu-2021.02.1 image works for me though.


Hello @36grad, thanks for reporting this. Please use

The issue has been fixed

Hi John, thanks for the update; it works like a charm now.


I am trying to run this on a new Jetson Nano Developer Kit: the 4 GB model with a 4 A supply, running SDK 4.51 and using graphical mode (for now).
I enabled all the APIs, and on the initial run it completely hangs the whole OS. Totally unresponsive. After about 20 minutes it comes back, and I got the following:

0.73 5.84 14.22

After that it is running and responding. But only object detection seems to work; face detection returns an error.

I am very new to this and not sure where to find logs to debug yet.

jason@nana0:~$ sudo docker run --runtime nvidia -e VISION-SCENE=True -e VISION-DETECTION=True -e VISION-FACE=True  -p 80:5000 deepquestai/deepstack:jetpack-2021.06.1
DeepStack: Version 2021.06.01
[GIN] 2021/06/03 - 21:21:45 | 200 |  434.616185ms | | POST     /v1/vision/detection
[GIN] 2021/06/03 - 21:22:45 | 500 |          1m0s | | POST     /v1/vision/face/recognize

I’ve done some benchmarking on the latest release:

Platform                                Speed (sec)   Predictions
Pi 4 with arm64-2021.06.1               15            50
Jetson Xavier with jetpack-2021.06.1    5.1           50
MacBook Pro 2017 with cpu-2021.06.1     6.5           63

After some research, I found that my issue above is due to memory, or the lack thereof.

Using the default GNOME desktop consumes about 2 G of memory, leaving only 2 G free on the 4 G model. Also, out of the box the SDK sets up only 2 G of swap. I added another 8 G of swap and was able to get the system to run; with all interfaces enabled, it uses 3.5 G of memory and 6.3 G of swap (vastly more than it has out of the box).
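For anyone hitting the same wall, this is roughly how extra swap can be added on the Nano's Ubuntu-based image (the `/swapfile8g` path and 8 G size here are just illustrative; adjust to your setup):

```shell
# Create and enable an extra 8 GB swap file (path and size are illustrative).
sudo fallocate -l 8G /swapfile8g
sudo chmod 600 /swapfile8g
sudo mkswap /swapfile8g
sudo swapon /swapfile8g

# Optionally make it persistent across reboots:
echo '/swapfile8g none swap sw 0 0' | sudo tee -a /etc/fstab
```

Note that heavy swapping on an SD card will wear it out faster, as mentioned later in this thread.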

It still takes about 10 minutes to start up, during which time the Nano becomes quite unresponsive. Also, the first detection might time out; then it seems to get faster over the first few detections before settling down.

[GIN] 2021/06/06 - 13:06:47 | 500 |          1m1s | | POST     /v1/vision/detection
[GIN] 2021/06/06 - 13:08:05 | 200 |  4.677005578s | | POST     /v1/vision/detection
[GIN] 2021/06/06 - 13:08:53 | 200 |  3.052352849s | | POST     /v1/vision/detection
[GIN] 2021/06/06 - 13:08:56 | 200 |  179.615432ms | | POST     /v1/vision/detection

[GIN] 2021/06/06 - 13:11:24 | 500 |          1m2s | | POST     /v1/vision/face/
[GIN] 2021/06/06 - 13:12:14 | 200 |  9.426819052s | | POST     /v1/vision/face/
[GIN] 2021/06/06 - 13:12:28 | 200 |  4.161386018s | | POST     /v1/vision/face/
[GIN] 2021/06/06 - 13:12:30 | 200 |   305.42886ms | | POST     /v1/vision/face/
[GIN] 2021/06/06 - 13:12:33 | 200 |  174.691896ms | | POST     /v1/vision/face/

In conclusion, I recommend a note or two about this on the Jetson install page. Newbies who just buy a board and try to run it will need to increase their swap to have any chance.
And I am now thinking that it's only sensible to abandon graphical mode to save that memory and just use a headless SSH session.
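Since JetPack is Ubuntu-based, switching to a headless boot is just standard systemd (nothing DeepStack-specific), along these lines:

```shell
# Boot to a text console instead of the GNOME desktop (reversible).
sudo systemctl set-default multi-user.target
sudo reboot

# To restore the desktop later:
# sudo systemctl set-default graphical.target
```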


As most people who have a Pi will already have an Intel Neural Compute Stick 2, can this be used / incorporated to decrease processing times?


Same request


Thanks for reporting this.

Will the 2021-06-01 release be available as a Windows edition?

Are you using a micro SD card, or an SSD via USB 3.0? Swap can quickly wear out a flash card.

Yes, I have an SD card. Now that I have disabled the X Windows GUI and just use it remotely, it will swap less.

Any chance an NCS2 version will be (re)made for the Pi?
I used the alpha version successfully, but eventually let it slide into disuse because the bounding box identifications were waaaay off.




I'm running DeepStack using this image: deepquestai/deepstack:arm64.
I found that there is a zombie process:

anubisg1@czsrc-lan-srv03:~$  ps aux | grep 'Z'
root        3483  0.0  0.0      0     0 ?        Z    Mar25   0:00 [redis-server] <defunct>
anubisg1 3376083  0.0  0.0   7692   672 pts/0    S+   11:07   0:00 grep --color=auto Z

anubisg1@czsrc-lan-srv03:~$ pstree -p -s 3483

redis-server is forked by server, and server is the entry point for DeepStack.
I cannot find anything in the logs, except for:

OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.