Deepstack error with windows and reolink cams

I have tried for days to get BI to work with DeepStack. I have hours of reading forums behind me and still can’t get it to work properly. Now I get a message in AI Tool saying the transport connection was refused by my computer running it all. I have DeepStack installed but never ran the command line at startup. I keep changing settings, and the only time it worked at all, AI Tool found a clip that was empty but logged it. I have 30 years in the I.T. field but just can’t seem to get this. It says connection refused by your computer??
Any help would be appreciated here. I’m about to just give up on it.
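For reference, a quick sanity check from a shell (Git Bash works on Windows) to see whether anything is even listening on the DeepStack port. Port 81 below is just an assumption; substitute whatever port AI Tool is configured to call:

```shell
PORT=81   # assumed DeepStack port; change to match your AI Tool settings
if timeout 1 bash -c "echo > /dev/tcp/127.0.0.1/$PORT" 2>/dev/null; then
    echo "something is listening on port $PORT"
else
    echo "connection refused/timeout: nothing is listening on port $PORT"
fi
```

If the second message appears, DeepStack itself isn't running (or is on a different port), which would explain the "connection refused" error before any BI or AI Tool settings even come into play.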


I just installed it with Blue Iris. Amazing concept and I hope it lives up long-term to my initial impression.

Reolink and BI have problems working together. This stems from the fact that Reolink does not allow user control of bit rate, frame rate, and iframe rate. BI relies on a key frame rate of 1.00 (one key frame per second) for accurate motion detection. Reolink allows this rate to drop too low for accurate motion detection, especially at night, when it can drop to 0.3 or lower. You can observe this effect in BI if you look at the camera statistics tab in the BI console; watch the frame and key rates especially.

This is not really true, they just require a couple of settings changes.

They allow control of 2 out of 3 of those. You cannot control the key frame rate, and it is not at the ideal 1 key frame per second, but half that. But BlueIris can work just fine with that key frame rate.

I can’t speak to motion detection. I don’t use it. I pull images straight from the cameras and feed them to DeepStack directly instead of waiting for motion detection to hopefully trigger and feed images to DeepStack and then hopefully start recording.

But, as far as I know and have read in Blue Iris documentation, the reason for a 1fps key frame interval is that recording can only start on a key frame. So if your key frame interval were something ridiculous like every 10 seconds, you could only start a recording on that key frame and may lose video. When you are keeping 10-15 seconds of pre-trigger video buffer I do not think that is such an issue.

Also, I think what you said only comes into play when you enable “Limit decoding unless required”. When you enable that option, Blue Iris will only decode the key frames and do motion detection on those, which in the case of Reolink cameras would be every other second.

As far as DeepStack integration goes, the documentation says that as long as your pre-trigger buffer is at least as long as key frame interval you should be good.

There is a lot of misinformation out there regarding Reolink cameras and Blue Iris. Almost all of this comes from the forums where the site owner has a real grudge against the cameras. I presume it is because he doesn’t know how to adjust/compensate for them in the software. If you want to get the word straight from the developer check out the online manual and Ctrl+F search for “key frame”, but a lot of the relevant information is in these paragraphs:

Direct-to-disc recording can only begin on a key frame boundary—if the rate is too low, this
means that video frames between a trigger event and the next key frame may be lost.
One way to compensate for this is to use pre-trigger time on the Record page.

When limit-decoding is being used, only key frames are decoded unless all video is required
for display or analysis. This means that only key frames are fed to the motion detector when
the camera is not triggered or selected for streaming or viewing. If the key frame rate is
much lower than 1.00, the motion detector may not operate effectively and events may be […]

good points from john and sebastian. i’ll add my own too…
I’d expect a lot of us are using BI motion detection to trigger image capture for deepstack use (not video), and therefore trigger and alert timing can be very critical, especially for, say, passing cars on a zoomed-in LPR cam. as far as I can tell, a 10s pre-trigger buffer is no help in this situation, and a 1.0 keyframe rate is essential, since a few frames can be the difference between getting a good alert image (or a sequence of them) and getting nothing. even with a 1.0 keyframe rate, there can be challenges with BI motion detection depending on the environment/goal.
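To put rough numbers on that (the speed and field-of-view figures below are assumptions, not measurements):

```shell
# Back-of-envelope: how many frames does a passing car give you?
mph=30
fps=10
fov_ft=20                           # assumed width of the zoomed LPR field of view
ft_per_s=$((mph * 5280 / 3600))     # 30 mph is 44 ft/s
frames=$((fov_ft * fps / ft_per_s)) # frames captured while the car is in view
echo "~$frames frames at ${fps}fps while the car crosses the view"
```

Four or five frames is not much margin, which is why a late trigger or a two-second wait for a key frame can mean no usable plate image at all.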

this is awesome, but with my old hardware (and my gpu not supported) cpu goes to 100% with deepstack processing even a single cam at 10fps. add in multiple cams, and it is simply not viable for a lot of old setups (or even some new ones where the gpu is not yet supported)

granted, Kyat’s issue appears to have nothing to do with motion detection timing. sorry i can’t actually help there!

Don’t process all images, and do not process them at full size.

I would recommend using a bash script on Linux to pull the images, resize them, rotate them if necessary, and then save the output to a folder for processing. You could likely do the same on Windows, since the following script uses ImageMagick and it can be installed on Windows, but I have NFC how to use ImageMagick on Windows, so YMMV.

If you are using something like node-deepstackai-trigger you would do something like:

#!/bin/bash
# Hypothetical snapshot URLs — the real ones were stripped when this was
# posted, so substitute your own cameras' JPEG snapshot URLs here.
Cam1URL="http://cam1.example/snapshot.jpg"
Cam2URL="http://cam2.example/snapshot.jpg"
Cam3URL="http://cam3.example/snapshot.jpg"

cd /home/yourusername/aiinput || exit 1

while true; do
    start=$(date +%s.%N)

    # For each camera: pull a JPEG from its feed, resize it, and save it
    # with the camera short name and a timestamp
    ShortName=cam1
    outfile=$ShortName.$(date +%Y%m%d_%H%M%S).jpg
    echo "$ShortName"
    convert "$Cam1URL" -resize 800 /home/yourusername/aiinput/"$outfile" &

    ShortName=cam2
    outfile=$ShortName.$(date +%Y%m%d_%H%M%S).jpg
    echo "$ShortName"
    convert "$Cam2URL" -resize 800 /home/yourusername/aiinput/"$outfile" &

    ShortName=cam3
    outfile=$ShortName.$(date +%Y%m%d_%H%M%S).jpg
    echo "$ShortName"
    convert "$Cam3URL" -resize 800 /home/yourusername/aiinput/"$outfile" &

    # Adjust the sleep time below as necessary to get the desired time between batches of images
    sleep 2s
    # Purge all images older than 2 minutes
    find . -maxdepth 1 -name '*.jpg' -type f -mmin +2 -exec rm {} \;

    duration=$(echo "$(date +%s.%N) - $start" | bc)
    execution_time=$(printf "%.2f seconds" "$duration")

    echo "Script Execution Time: $execution_time"
done


That is just an example, and it can be expanded as you wish. I find that if you have many cameras you may want to break the work up into separate chunks with a wait in between, to let DeepStack catch its breath. If you throw too many images at DeepStack at once it tends to choke; I imagine the pipeline needs improvement in that department. For instance, with 50 cameras and an image processing time of <20ms per image, we should be able to process a full round of images every two seconds with no worry. And we can, but you have to break it up into 10-15 image chunks. If you throw all 50 images at it at once, the processing time balloons and everything grinds to a halt.
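A minimal sketch of that chunking idea, in shell. Here `send_to_deepstack` is a hypothetical stub standing in for the real POST to DeepStack, and the chunk size of 10 is just the ballpark from above:

```shell
# Hypothetical stub: the real version would POST the image to DeepStack
# (e.g. with curl); here it just reports what it would send.
send_to_deepstack() { echo "processed $1"; }

CHUNK=10    # assumed comfortable batch size; tune for your hardware
i=0
for n in $(seq -w 1 12); do         # stand-ins for 12 queued snapshot files
    send_to_deepstack "img$n.jpg" &
    i=$((i + 1))
    # After each full chunk, wait for the batch to drain before sending more
    if [ $((i % CHUNK)) -eq 0 ]; then
        wait
    fi
done
wait    # catch the final partial chunk
```

The `wait` between chunks is the whole trick: DeepStack only ever sees a bounded number of in-flight requests instead of the entire camera fleet at once.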

Thanks John!
I’ll look into this more for future use. But unfortunately my inference times (~900ms) are at least 50x slower than yours, so those 50 cameras probably go down to 1 camera for me. But BI actually has some advantages here as well: I can clone cams and run different motion detection zones (and therefore different burst timings, resolutions, etc.) to catch fast cars (which only appear on the street, say 5 images at 10fps) or slow pedestrians (which move slowly on the street, or only appear on the sidewalk, say 10 images at 1fps) from the same physical camera without overloading my slow DeepStack inference. having all the motion configurations available in BI for the system, as well as additional artificial ones for a single cam, is pretty handy as well, not even considering the advantages of BI profiles, which can drive completely different motion settings on a single trigger event.
i also do some cropping and resaving as well using python PIL, so your suggestions are not lost on me. Someday I hope to get a gpu that is supported


Reolink did release a firmware recently that allowed the iframe interval to be changed; however, it seems they pulled it from their site.

It starts out ok, but then the fps and keyframe rate jump all over the place and the camera randomly disconnects and reconnects.

I also got a lot of “Setup -22 for ‘hevc’ 3840x2160 00000140010c01ff ff21600000030000 0300000300000300 96ac090000014201 0121600000030000 0300000300000300 96a001e020021c7f 8aad3ba24bb20000 014401c072f08904 070e3648” error messages after updating to the new firmware.

BI Support gave me this link when I asked about the setup -22 errors:

Ahhhhhh, yeah, that makes a slight difference. :)

I would strongly suggest picking up a GPU if you are capable. You’re talking about a performance increase of more than an order of magnitude.

someday… :)
it’s a shame; i have 3 gpus just lying around, but none of them are supported, and i can’t justify the expense just for this project.

I have 2 BI and DeepStack installations working perfectly with all Reolink cameras (four different models)

I failed and failed until I found a set of settings on the IPCamTalk forum that worked perfectly for three of the four models. A small tweak got the other model working.

The motion detection works perfectly including at night. The Reolink cameras do have some low-light noise reduction options that can cause issues for good motion detection. They are options so they can be turned right off.

I should also mention that BI HATES dropped and out of order UDP packets. The Reolink client does an amazing job of keeping a fluid video stream when lots of data is being dropped. BI wants good data so you have to be really careful with the bandwidth settings when on Wi-Fi.

If you want some help, let me know what cameras you have.

PS: I only have DeepStack analyze when motion is detected and not every frame. BI essentially uses DeepStack not finding anything to clear an alert, which is much less burden on the CPU.


Having some issues with Reolink cameras too… I have a set of 410s, 511s, and one 432… Could you please share your settings?

The 4/5xx series required a firmware update to the latest version, the latest BI release (August '21), and the following settings on the network IP page:
Make: Reolink
Model: *RLC-410, … Baseline RTMP
RTSP Port: 1935
ONVIF Port: 8000
Main Channel: /bcs/channel0_main.bcs?channel=0&stream=0&user={id}&password={pw}
Sub Channel: /bcs/channel0_sub.bcs?channel=0&stream=0&user={id}&password={pw}
Decoder compatibility mode checked
Send RTSP keep-alives checked
Use RTSP/Stream timecode checked.

The real key with this series of cameras was to use the RTMP vs. the RTSP streams.