I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 5.81 GiB total capacity; 315.77 MiB already allocated; 9.62 MiB free; 318.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation
The GPU is a GTX 1660 Super with 6 GB of VRAM. The error says PyTorch has only reserved about 318 MiB in total, so roughly 5.5 GB should be free, yet it can't allocate 20 MiB. Is this a bug of some sort? I am super new to this, so let me know if you need more information.
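Here is the arithmetic behind my "about 5.5 GB free" estimate, using only the numbers straight from the error message:

```python
# Numbers as reported in the RuntimeError message
total_mib = 5.81 * 1024    # 5.81 GiB total capacity -> ~5949 MiB
reserved_mib = 318.00      # "318.00 MiB reserved in total by PyTorch"

# What I would expect to still be available on the card
apparently_free_mib = total_mib - reserved_mib
print(round(apparently_free_mib))  # ~5631 MiB, i.e. about 5.5 GiB
```

So by my reading there should be plenty of room for a 20 MiB allocation.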
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      5780      C   python3                          1493MiB |
+-----------------------------------------------------------------------------+