Deepstack Jetpack broken

I’ve got an NVIDIA Jetson Nano 4GB running the latest version of JetPack (which is Ubuntu 18.04.6 LTS) that I use for Blue Iris AI detection. It was working great until I ran apt update and apt upgrade, and I’m not sure what broke or changed. I’m a complete noob, so I apologize for the lack of info. I think something changed with nvidia-container-toolkit or nvidia-container-runtime, maybe? Docker version is 20.10.7.
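
In case it helps, I believe these two commands will show which container-runtime packages the upgrade touched and which runtimes Docker currently knows about (just my guess at what’s relevant; tell me if you need anything else):

dpkg -l | grep -E 'nvidia-container|docker'
sudo docker info | grep -i runtime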

Command I’ve used to run deepstack:jetpack

alex@jetson:~$ sudo docker run --runtime nvidia -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-2021.09.1

Result:

docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown.
ERRO[0001] error waiting for container: context canceled
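
Googling the error, it sounds like this can happen when the seccomp profile that Docker/runc applies references the clone3 syscall but the libseccomp on the host is too old to know that syscall, i.e. the upgrade moved one side out of step with the other. I honestly don’t know enough to confirm that’s what is happening here. One thing I could try as a test (I understand this disables the default seccomp profile and reduces isolation, so presumably not a real fix) is:

sudo docker run --runtime nvidia --security-opt seccomp=unconfined -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-2021.09.1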

I’m happy to provide you with any and all information you want, but you might have to tell me how to get it for you…

I also completely wiped the SD card, reinstalled the JetPack image (then updated and upgraded again), and I’m still getting this error when trying to run deepstack:jetpack. I guess I should probably try it without updating and upgrading, but I’d like to be on the latest version of everything if possible.

It WAS working great, but I just HAD to go and apt update/upgrade and now I don’t know how to fix it :confused:

Is anyone else also experiencing this? I’d imagine it’s not just me…

edit: I was able to resolve this by removing “--runtime nvidia” from the docker command. It now looks like it’s running under the “runc” runtime, but it works, and inference time seems to be about the same. I also added “-d --gpus all” to the command; when I tried running it without those flags it hung and never finished starting. Hope this helps someone else.
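
For reference, the command I’m running now is just the original one with “--runtime nvidia” removed and “-d --gpus all” added:

sudo docker run -d --gpus all -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-2021.09.1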

I also had the same issue recently. I can’t find the original documentation, but I ran:
docker run --gpus all nvidia/cuda:10.0-base nvidia-smi
and then started deepstack afterwards, and it worked.

From memory, running that container enables other Docker containers to use GPU acceleration. After running the command there’s no nvidia/cuda container left running or anything; I only had to run it the one time.

I hope this works for you too.