GeForce RTX 3060 Ti GPU can't work

I installed the Docker version 2021.06.2. The server starts, but it can't process AI tasks. The GPU in my PC is a GeForce RTX 3060 Ti.
The error in the log:

/usr/local/lib/python3.7/dist-packages/torch/cuda/ UserWarning:
GeForce RTX 3060 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75.
If you want to use the GeForce RTX 3060 Ti GPU with PyTorch, please check the instructions at

warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/", line 297, in _bootstrap
File "/usr/lib/python3.7/multiprocessing/", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/intelligencelayer/shared/", line 69, in objectdetection
detector = YOLODetector(model_path, reso, cuda=CUDA_MODE)
File "/app/intelligencelayer/shared/./", line 36, in init
self.model = attempt_load(model_path, map_location=self.device)
File "/app/intelligencelayer/shared/./models/", line 159, in attempt_load
torch.load(w, map_location=map_location)["model"].float().fuse().eval()
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 485, in float
return self._apply(lambda t: t.float() if t.is_floating_point() else t)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 354, in _apply
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 354, in _apply
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 354, in _apply
[Previous line repeated 1 more time]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 376, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 485, in
return self._apply(lambda t: t.float() if t.is_floating_point() else t)
RuntimeError: CUDA error: no kernel image is available for execution on the device
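The warning at the top of the log explains the failure: the bundled wheel ships GPU kernels only for sm_37 through sm_75, while the 3060 Ti is compute capability sm_86. A minimal sketch of that check in plain Python (the helper name is hypothetical; at runtime you would feed it the values from `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()`):

```python
# Hypothetical helper mirroring PyTorch's compatibility warning: a wheel can
# only run on a GPU whose compute capability it was compiled for.
# (Simplified: the real check also considers PTX forward-compatibility.)

def is_supported(capability, arch_list):
    """capability: (major, minor) tuple; arch_list: e.g. ['sm_37', ..., 'sm_75']."""
    # "sm_86" -> 86 -> (8, 6)
    compiled_for = {divmod(int(arch.split("_")[1]), 10) for arch in arch_list}
    return capability in compiled_for

arch_list = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]  # from the log above
print(is_supported((8, 6), arch_list))  # RTX 3060 Ti (sm_86) -> False
print(is_supported((7, 5), arch_list))  # a Turing card (sm_75) -> True
```

So the fix is not in DeepStack itself: the PyTorch build inside the image has to be replaced with one compiled for Ampere (cu111 or newer).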


A bunch of us have this issue with PyTorch not supporting GPUs that are either too old or 'too new'. Not sure if there are workarounds yet, or whether rebuilding PyTorch from source (or installing a build compiled for newer compute capabilities) would solve this?

Are there any updates on this? I am having the same issue with 2021.09.1 trying to use my 3090.

I was able to get my 3060 to work with DeepStack

  • CUDA Toolkit 11.4
  • cuDNN v8.2.4

Manually update the Windows packages below:

  • numpy-1.21.2-cp37-cp37m-win_amd64
  • Pillow-8.3.2-cp37-cp37m-win_amd64
  • scipy-1.7.1-cp37-cp37m-win_amd64
  • torch-1.9.1+cu111-cp37-cp37m-win_amd64
  • torchvision-0.10.1+cu111-cp37-cp37m-win_amd64
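For reference, those wheel names map to pip installs along these lines (a sketch: the paths and exact wheel availability for DeepStack's bundled Python 3.7 should be verified; `download.pytorch.org/whl/torch_stable.html` is where the `+cu111` builds are published):

```shell
# Run inside DeepStack's bundled Python 3.7 environment (location varies by install).
# The cu111 wheels contain sm_86 kernels, so they run on Ampere cards.
pip install numpy==1.21.2 Pillow==8.3.2 scipy==1.7.1
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
```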

I can confirm the above steps work.

Does anyone have a workaround for a Docker installation? It would be super nice to actually utilize my 3060 Ti for face recognition.

I manually updated the base and GPU Docker images with new versions of CUDA, torch, and torchvision, and it works fine.
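For anyone wanting to try the same, the idea can be sketched as a Dockerfile layered on top of the GPU image (the image tag and wheel versions here are assumptions; adjust them to the image you actually run):

```dockerfile
# Hypothetical sketch: rebuild the GPU image so the bundled PyTorch
# includes sm_86 kernels for Ampere GPUs (3060 Ti, 3090).
FROM deepquestai/deepstack:gpu-2021.09.1

RUN pip3 install --upgrade \
        torch==1.9.1+cu111 torchvision==0.10.1+cu111 \
        -f https://download.pytorch.org/whl/torch_stable.html
```

Build it with `docker build -t deepstack:gpu-ampere .` and run that tag in place of the stock image.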