Odd CUDA memory error within Docker

I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 5.81 GiB total capacity; 315.77 MiB already allocated; 9.62 MiB free; 318.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation

The GPU is a GTX 1660 Super with 6 GB of RAM. The error says it can't allocate 20 MiB even though roughly 5.5 GB should be free. Is this a bug of some sort? I am super new to this, so let me know if you need more information.
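For what it's worth, the error itself suggests setting max_split_size_mb. Here is a minimal sketch of how I understand that option can be passed to PyTorch's caching allocator via the PYTORCH_CUDA_ALLOC_CONF environment variable (the 128 MiB value is just an illustrative guess, not something I have verified for this card):

```python
import os

# Must be set before torch initializes CUDA; 128 is an illustrative value only.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

# Tiny placeholder workload just to exercise the allocator.
model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")
y = model(x)
print(torch.cuda.memory_summary())
```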

nvidia-smi shows about 1.4 GB used:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.63.01    Driver Version: 470.63.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:0E:00.0 Off |                  N/A |
| 38%   43C    P8    13W / 125W |   1496MiB /  5944MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      5780      C   python3                          1493MiB |
+-----------------------------------------------------------------------------+
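If it helps, I can also query the memory PyTorch sees from inside the container with something like this minimal sketch (device index 0 assumed):

```python
import torch

# Free and total device memory as reported by the CUDA driver, in bytes.
free, total = torch.cuda.mem_get_info(0)
print(f"driver:    {free / 1024**2:.0f} MiB free / {total / 1024**2:.0f} MiB total")

# Memory currently tracked by PyTorch's caching allocator.
print(f"allocated: {torch.cuda.memory_allocated(0) / 1024**2:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / 1024**2:.0f} MiB")
```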

@theskaz Sorry to hear you are experiencing this. Kindly try the following steps:

  • If you are using a Windows system, can you check the resources allocated to Docker Desktop to make sure there is no cap on GPU access? (A quick check you can run from inside the container is sketched after this list.)
  • What API(s) were you trying to activate when you ran DeepStack?
  • If you are trying to activate multiple APIs but only need one, can you run DeepStack with only that API enabled?
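For the first point, a rough PyTorch sketch you could run inside the container to confirm it really sees the full GPU (device index 0 assumed; the ~20 MiB allocation simply mirrors the size the original error failed on):

```python
import torch

# Confirm the container sees the GPU and report which device it is.
print("CUDA available:", torch.cuda.is_available())
print("Device:", torch.cuda.get_device_name(0))

# Try a small allocation of roughly the size the original error failed on.
t = torch.empty(20 * 1024 * 1024, dtype=torch.uint8, device="cuda")
print(f"Allocated {t.numel() / 1024**2:.0f} MiB without error")
```

If that check already fails or reports far less memory than the 6 GB card should have, the limit is most likely coming from the Docker/GPU configuration rather than from DeepStack itself.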