So, after seeing the other post about adding custom models, I decided to dive down that rabbit hole. (The docs need updating, as they don't mention the docker flag that has to be added to enable custom models.) I was able to add two different models, but when I try to use them the request just loads forever and never returns a result. I don't see anything in the docker logs for the container, and no spike in CPU usage. Any thoughts on why this is happening?
While I don’t know of a way to get an actual version number, I just moved DeepStack to its own VM today, so it was the latest as of today. (I had the same issue on the old VM too.)
Fire-Flame config: https://gist.github.com/vrelk/40058055591b61e13655cd9519053c22
There wasn’t much info on it, so I sort of guessed at the config file.
Idenprof (professions) config: https://gist.github.com/vrelk/cd183bb10d1cd4c127baea7d2f830d6e
Source: your documentation, more or less. I added it using an h5 file, since the link in your documentation is broken.
Source 2: https://github.com/OlafenwaMoses/IdenProf/releases