I was maxing out my ancient CPU (an i7 960 from back in the day) trying to run DeepStack, with image processing times of multiple seconds. Not workable.
However, given the recent(ish) announcement of the CUDA toolkit being available in WSL 2, I now have a GPU-based DeepStack Docker container running. I can access it fine in a browser, and I've put its URL into an app that passes images to the server. However, I get this in the error log:
[14.08.2020, 20:28:49.612]: Starting analysis of C:\aiinput\frontsd.20200814_202849597.jpg
[14.08.2020, 20:28:49.617]: (1/6) Uploading image to DeepQuestAI Server
[14.08.2020, 20:28:49.633]: (2/6) Waiting for results
[14.08.2020, 20:28:49.639]: (3/6) Processing results:
[14.08.2020, 20:28:49.640]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[14.08.2020, 20:28:49.676]: ERROR: Processing the following image 'C:\aiinput\frontsd.20200814_202849597.jpg' failed. Failure in AI Tool processing the image.
Any ideas? I’m thinking the image is getting to the Docker container fine and being processed (below is the Docker log), but the result isn’t getting passed back to the app. However, I am finding my way by feel…
[GIN] 2020/08/14 - 19:19:40 | 403 | 237.6µs | 172.20.0.1 | POST /v1/vision/detection
[GIN] 2020/08/14 - 19:28:49 | 403 | 38.5µs  | 172.20.0.1 | POST /v1/vision/detection
Any thoughts? I’m wondering whether the 403 is an HTTP Forbidden error, and that’s why the app gets null back: the server receives the request and understands it, but says no, you’re not allowed to make that request?
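To take the AI Tool out of the loop, I could POST an image to the detection endpoint directly and look at the raw status code and response body. A minimal sketch using only the standard library; the "image" multipart field name is what DeepStack's docs describe, and the port matches my run command below:

```python
import urllib.request
import urllib.error
import uuid

def build_multipart(field, filename, payload, content_type="image/jpeg"):
    """Build a multipart/form-data body with a single file field,
    as DeepStack's /v1/vision/* endpoints expect for the 'image' field."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"

def post_image(image_path, url="http://localhost:83/v1/vision/detection"):
    """POST an image straight to DeepStack and return (status, body), so a
    403 from the server itself is visible without the AI Tool in the middle."""
    with open(image_path, "rb") as f:
        body, ctype = build_multipart("image", image_path, f.read())
    req = urllib.request.Request(url, data=body, headers={"Content-Type": ctype})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as e:  # non-2xx replies (e.g. 403) land here
        return e.code, e.read().decode()
```

If this returns a 403 too, the problem is in the container, not the app.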
FYI - my Docker in WSL isn’t the Docker Desktop WSL 2 integration; it’s Docker installed inside my Ubuntu distro, so I could get all the nvidia gubbins…
I tested using the GPU on Docker with this:
tim@WinServer:~$ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance)
	-numbodies=<N>    (number of bodies (>= 1) to run in simulation)
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "GeForce GT 1030" with compute capability 6.1

> Compute 6.1 CUDA device: [GeForce GT 1030]
3072 bodies, total time for 10 iterations: 2.987 ms
= 31.594 billion interactions per second
= 631.882 single-precision GFLOP/s at 20 flops per interaction
This is the Docker command for DeepStack:
sudo docker run --restart=always --gpus all -e VISION-SCENE=True -v localstorage:/datastore -p 83:5000 --name deepstackgpu2 deepquestai/deepstack:gpu
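One thing I notice writing this up: that command only enables VISION-SCENE, but the AI Tool is posting to /v1/vision/detection, and DeepStack activates each endpoint with its own environment flag. If the detection endpoint needs enabling too, the command would look something like this (adding VISION-DETECTION=True is my assumption here, untested):

```shell
# Same as before, plus VISION-DETECTION=True to activate /v1/vision/detection
sudo docker run --restart=always --gpus all \
  -e VISION-DETECTION=True -e VISION-SCENE=True \
  -v localstorage:/datastore -p 83:5000 \
  --name deepstackgpu2 deepquestai/deepstack:gpu
```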
Many thanks in advance for any guidance.