Dropped frames in recording & live view with only 1 camera and 1-5% CPU

sirius682

n3wb
Feb 16, 2021
Canada
Hi,

I'm new to Blue Iris, and I tried the demo version to set up my first camera (RLC-820A, latest firmware). Following The Hook Up's video, I set up two streams in Blue Iris: one 640x360@15fps recording 24/7 and one AI motion-detected stream at 3840x2160@15fps. Both are set to direct-to-disk recording. In the live view I only watch the 640x360 feed and I get dropped frames, but I don't care about live view since nobody is watching it. What I care about is the recording: both recorded feeds are missing a lot of frames (I'm talking freezes of 2-15 seconds), while the audio plays fine the whole time on another desktop.

I tried recording as BVR and MP4 - no change. I tried registering the full version, since some posts said that direct-to-disk recording is disabled in the demo... no change.

Blue Iris 5.3.9.9 x64

My server hardware:
CPU: Xeon W3550 @ 3.07GHz
RAM: 12GB DDR3
OS: Windows 10 Pro
HDD (OS): WD Blue 320GB
HDD (recording): WD Red 4TB CMR
GPU: ATI FirePro 2260

My desktop, temporarily in use as the Blue Iris server:
CPU: i5 3570K @ 3.6GHz
RAM: 32GB DDR3
HDD (OS): WD Blue 320GB
HDD (recording): WD Red 4TB CMR
GPU: GTX 680

Wife's desktop (used only to download and play back files with VLC):
i7 6700K
32GB RAM
SSD
Windows 10 Pro

Blue Iris records to these paths:
Database = 320GB HDD
New = 4TB HDD
Stored = 4TB HDD
Alerts = 320GB HDD


Utilization on both systems was:

CPU: 1-5%
RAM: 24%
HDD: 0-1%
GPU: 1-5% on the GTX 680 (I could not get that info for the FirePro 2260)

I tried Blue Iris on both my server and my desktop, so it doesn't look like a hardware issue. When I delete one of the two feeds the situation improves a lot, but it's still not perfect. I thought the camera was struggling to send two feeds, so I put a microSD card in the Reolink RLC-820A to record both streams in parallel WHILE Blue Iris recorded both streams (at this point I deactivated motion detection for the 4K stream)... same result: a whole lot of missing frames while the audio plays fine, whereas the microSD card doesn't miss any frames on either stream.


What have I configured wrong here? I doubt it's hardware, given the utilization numbers. If you need a screenshot or anything else, don't hesitate to ask. I hope it's fixable, otherwise Blue Iris would be useless...
 
You are using a Reolink POS camera. Your problem is with the camera. Reolink cameras do not use standard configuration or exhibit standard behavior when recording. This is not a BI problem.
 
Yep, known issues with Reolinks and Blue Iris. Do a search. You cannot change the i-frame rate on a Reolink, and that is the major cause of the issue. You need to add a significant pre-buffer time in Blue Iris and hope it is enough.

This guy had Blue Iris miss a car in motion backing into his garage because of the Reolink camera...




If you can return the Reolink, do so. Useless at night...

Here is an example from their marketing videos - do you see a person in this picture? Yes, there is a person in this picture. Could this provide anything useful for the police? The still picture looks great, though... I'll give you a hint - in between the two columns:


1613251115189.png


Bad Boys
Bad Boys
Watcha gonna do
Watcha gonna do
When the camera can't see you
 
Thank you @SouthernYankee and @wittaj for your input. Elsewhere I saw that adding the RTMP substream instead of the RTSP one helps a lot:
Code:
rtmp://[camera ip]/bcs/channel0_sub.bcs?channel=0&stream=0&user=[username]&password=[password]

Now I get 15 fps on both streams. I added the other 10 cameras I have.

The recordings are fine when both streams are set to continuous recording. However, I don't want to waste space, so the AI Tool will trigger the main streams for all 11 cameras.
On the main streams I set: record when triggered, motion detection disabled, record direct to disk.
cam setting.png

But when every camera is triggered (either by motion detected via the substream AI or triggered manually without any motion), I get 100% CPU utilization, of which BI is 90%, on an i7 6700K. What is wrong here? When all the 4K streams are set to continuous and I move around in front of the cameras to load the AI tool, I only get around 5-15%.


The problem is that I get 0.5 fps when the CPU is at 100%. I can even see the network traffic dropping from a steady 100 Mbps for the 11 cameras down to 20 Mbps, so I guess the CPU at that point is unable to pull the RTSP streams and drops frames. To be more precise, the problem is not at the beginning of the recording; it's after 10 seconds that it gets really choppy. I figure the 10 seconds without a single dropped frame are thanks to the 10-second pre-trigger video buffer.
 
According to that screenshot the substream isn't being pulled. Look at the General tab - it doesn't show a substream.

Go into Blue Iris Status and post the screenshot showing all the cameras and associated FPS and bitrates.

If you are pulling images from every camera for AI, that will tax a system real fast. Even people with the latest Intel chips are running the AI processing on a different computer. In addition to optimizing Blue Iris per the optimization wiki, you also have to make sure you have configured your system for the AI tools, as covered in the AI Tool thread.

If it is still possible to return those cameras, do so and get some cameras with AI capabilities in them, as that is the surest way to keep CPU usage down. As you will see, the AI Tool and DeepStack combo is great, but it comes at a significant CPU cost and still isn't up to par with cameras that have AI built in...

 
Using substreams significantly reduces CPU utilization. You will not be able to run 11 4K (8MP) cameras with that processor without the correct configuration.
Storage space is cheap. A WD 4TB Purple drive is less than $100 US, and an 8TB is less than $200.
You are also going to have problems using the Xeon processor, as it is old and slow and does not support Quick Sync integrated graphics processing.

Are you doing motion detection in the camera or in BI?

Post screenshots of the BI status:
1) Camera tab
2) Clip storage

======================================
Private (local) IP addresses. These addresses are NOT used on the internet; they are for your local home/business network.
10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255

Note: there is no reason to redact local IP addresses when posting.
 
And based on the links I provided in Post #3 above and personal experience, I really do not think I would trust those cameras for use with the AI tool, despite what The Hookup says... especially at night, when the iframe rates and shutter speed are manipulated to give a nice clear static picture and you cannot adjust them. No amount of AI processing is going to see a person in this picture:


1613251115189.png



Keep in mind that The Hookup is using the tool that a member here created (and you can actually see this forum in his video LOL), and if these cameras were all that, you would see people talking about them in that thread... Plus you should be going by all the setup tips, tricks, and troubleshooting posted in the thread below, which includes the creator of AI Tools, and not some YouTuber LOL...

 
According to that screenshot the substream isn't being pulled. Look at the General tab - it doesn't show a substream.

Go into Blue Iris Status and post the screenshot showing all the cameras and associated FPS and bitrates.

If you are pulling images from every camera for AI, that will tax a system real fast. Even people with the latest Intel chips are running the AI processing on a different computer. In addition to optimizing Blue Iris per the optimization wiki, you also have to make sure you have configured your system for the AI tools, as covered in the AI Tool thread.

If it is still possible to return those cameras, do so and get some cameras with AI capabilities in them, as that is the surest way to keep CPU usage down. As you will see, the AI Tool and DeepStack combo is great, but it comes at a significant CPU cost and still isn't up to par with cameras that have AI built in...


@wittaj the substream is being pulled, but maybe I explained myself incorrectly.

I have 11 streams at 640x360 and 11 streams at 3840x2160.

The 640x360 streams have the motion sensor enabled and are set to record continuously. When a low-res stream is triggered, it sends a picture to a watch folder "aiinput", and the AI decides whether it will send a TRIGGER command to the high-res stream. It only sends a picture to the AI every 7 seconds.

camera sd.png

The high-res stream does not have the motion sensor enabled, but has "record when triggered" checked, so basically I motion-sense on the low-res stream to trigger a high-res recording clip. The sketch below shows the flow.
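To make the flow clearer, here is a rough sketch of what the setup is doing. The folder name, camera short names, server address, ports, and credentials are placeholders, and the BI admin trigger URL is my understanding of it from the help file - this is not my exact AI Tool config, just an illustration of the moving parts:
Code:
# Rough sketch: substream motion drops a JPEG into a watch folder, a detector
# decides whether a person is in it, and if so Blue Iris is asked to trigger
# the matching high-res camera. All names/addresses below are placeholders.
import glob
import os
import time

import requests

WATCH_FOLDER = r"C:\BlueIris\aiinput"                          # alert JPEGs land here
DETECT_URL = "http://127.0.0.1:5000/v1/vision/detection"       # DeepStack-style endpoint
BI_ADMIN = "http://127.0.0.1:81/admin"                         # Blue Iris web server
BI_USER, BI_PW = "aiuser", "secret"                            # placeholder credentials
CAMERA_MAP = {"FrontSub": "FrontHD"}                           # low-res cam -> high-res cam

def contains_person(image_path):
    """Send one snapshot for analysis and return True if a person is detected."""
    with open(image_path, "rb") as f:
        resp = requests.post(DETECT_URL, files={"image": f}, timeout=10)
    resp.raise_for_status()
    return any(p["label"] == "person" and p["confidence"] > 0.6
               for p in resp.json().get("predictions", []))

def trigger_mainstream(camera):
    """Ask Blue Iris to trigger the high-res camera via its admin URL."""
    requests.get(f"{BI_ADMIN}?camera={camera}&trigger&user={BI_USER}&pw={BI_PW}", timeout=5)

seen = set()
while True:
    for jpg in glob.glob(os.path.join(WATCH_FOLDER, "*.jpg")):
        if jpg in seen:
            continue
        seen.add(jpg)
        alert_cam = os.path.basename(jpg).split(".")[0]        # assumes file starts with cam name
        hd_cam = CAMERA_MAP.get(alert_cam)
        if hd_cam and contains_person(jpg):
            trigger_mainstream(hd_cam)
    time.sleep(1)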

Using substreams significantly reduces CPU utilization. You will not be able to run 11 4K (8MP) cameras with that processor without the correct configuration.
Storage space is cheap. A WD 4TB Purple drive is less than $100 US, and an 8TB is less than $200.
Are you doing motion detection in the camera or in BI?

Post screenshots of the BI status:
1) Camera tab
2) Clip storage

Here is the status page. By the way, I have done every wiki optimization.
status.png


If I wasn't clear in my post earlier today: my system can record 11 4K streams + 11 360p streams continuously without breaking a sweat.
I just tested again with DeepStack closed and AI Tool closed completely, just triggering the recording of the 4K cams manually (without any motion sensor on them), and it's only then that I see a spike in utilization.

Does the triggering API suck THAT MUCH CPU time? Because this is the only variable that needs to change in order to push my CPU to 100%; every other way of recording those streams without a trigger event goes smoothly. :rolleyes:
 
And based on the links I provided in Post #3 above and personal experience, I really do not think I would trust those cameras for use with the AI tool, despite what The Hookup says... especially at night, when the iframe rates and shutter speed are manipulated to give a nice clear static picture and you cannot adjust them. No amount of AI processing is going to see a person in this picture:


1613251115189.png



Keep in mind that The Hookup is using the tool that a member here created, and if these cameras were all that, you would see people talking about them in that thread...


That I can live with, because it will be in a well-lit place and I will set a manual shutter speed on some of them.
 
Nope - the substreams are not being pulled for the mainstream cameras, as the Blue Iris status page isn't showing a substream FPS or bitrate for them... You need to have the substream on the mainstream camera too so that it doesn't peg the CPU so much. You then need to set up the cameras as clones for AI - set up the way you did, you are pulling double the streams you need...

Look at your total MP/s usage at the bottom of that Cameras tab - 1,406 MP/s - that is a ton of usage. From the wiki: for loads greater than 1,500 MP/s, all bets are off. You are on the verge of issues once motion starts, and you are experiencing it; regardless of which cameras you have, that will be an issue. I have way more cameras than you and a much slower CPU, and I am below 300 MP/s.

Yes, AI pulls a ton of CPU processing... Get a real camera that has AI built in. Return these if you can, or sell them and get real cameras with AI and be a happy camper!

A Reolink camera in Blue Iris did not pick up this dude's car going in reverse into the garage in the daytime... Do you have more light at night than the daytime of this video? Plus, setting the shutter speed means nothing in those cameras, as they play with other parameters like the iframe rate to make the still image look good, but then it misses motion. What do we want: a nice clean image of nothing moving, or a video of the motion, or in your case having it actually work with AI tools? You are overestimating the capabilities of those cameras and their ability to work with Blue Iris...







Unlike The Hookup, who gets paid/commission or something from camera manufacturers, plus a payment from YouTube for so many hits, the folks here do not get paid for any advice we give, and folks here test these things hard. We are trying to help you work through these issues, but you also have to accept at some point that this is a known issue of these cameras with Blue Iris, and then you add AI Tool on top of that and you now see what you are experiencing. You will have to decide either to live with these issues or to change systems to something that actually gets you what you are looking for...
 
Nope - the substreams are not being pulled for the mainstream cameras, as the Blue Iris status page isn't showing a substream FPS or bitrate for them... You need to have the substream on the mainstream camera too so that it doesn't peg the CPU so much. You then need to set up the cameras as clones for AI - set up the way you did, you are pulling double the streams you need...

Look at your total MP/s usage at the bottom of that Cameras tab - 1,406 MP/s - that is a ton of usage. From the wiki: for loads greater than 1,500 MP/s, all bets are off. You are on the verge of issues once motion starts, and you are experiencing it. I have way more cameras than you and a lot slower CPU, and I am below 300 MP/s.

Yes, AI pulls a ton of CPU processing... AI built into the camera is better if that option exists... Return them if you can, or sell them and get cameras with AI and be a happy camper!

A Reolink camera in Blue Iris did not pick up this dude's car going in reverse into the garage in the daytime... Do you have more light at night than the daytime of this video? Plus, setting the shutter speed means nothing in those cameras, as they play with other parameters like the iframe rate to make the still image look good, but then it misses motion. What do we want: a nice clean image of nothing moving, or a video of the motion? You are overestimating the capabilities of those cameras...







Unlike The Hookup, who gets paid/commission or something from camera manufacturers, plus a payment from YouTube for so many hits, the folks here do not get paid for any advice we give, and folks here test these things hard. We are trying to help you work through these issues, but you also have to accept at some point that this is a known issue of these cameras with Blue Iris, and then you add AI Tool on top of that and you now see what you are experiencing; either live with the issues or change systems to something that actually gets you what you are looking for...

I am not planning to use the built-in AI in the Reolinks, and the video you linked just confirms it's good to ignore any AI from Reolink.


I don't understand your point on cloning cameras... I know that the 4K streams didn't switch to the substream, but since I don't run motion detection on them it should not matter, right?

I thought that running motion detection on the clone (low-res stream) would reduce the CPU compared to having a cloned camera which switches from low to high as soon as there is movement and then does motion detection on a 4K stream - THAT, in my mind, pulls a ton of CPU... Can you explain to me how cloning the camera and giving each of them both the main and substream would reduce my utilization?

I get that it will be way lower CPU when nothing is moving, but in my mind that doesn't really matter, because it's not the bottleneck in my situation; it's when things move that I need to save CPU cycles, and doing motion detection (just the regular BI motion, not even AI) on a 4K stream instead of 360p looks to be worse... Help me out if I missed something in your explanation.

I tried cloning one camera as a test, but the clone which did the motion detection always switched to 4K when there was motion (so I assume it was doing motion detection on the 4K stream). So I manually changed the clone's "main stream" to the substream feed, but in the status it now doubles the bandwidth:
test.png


For the 1,406 MP/s: the system is easily able to write that to disk without breaking a sweat... And unless I forgot something, it is NOT doing motion detection on 1,406 MP/s, because the 11 4K streams do not have the motion sensor checked. It is doing motion detection on the 11 360p streams, which works out to 0.23 MP x 15 fps = 3.45 MP/s x 11 cams = ~38 MP/s, which looks reasonable if yours is doing motion on 300 MP/s with a slower CPU.
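Here is the quick back-of-the-envelope math I used for those MP/s numbers (just illustrative, using the resolutions and FPS from my setup above):
Code:
# Back-of-the-envelope MP/s check: (width * height / 1,000,000) * FPS, summed over
# the streams Blue Iris actually decodes. Numbers match the setup described above.
def mp_per_sec(width, height, fps):
    return width * height / 1_000_000 * fps

cams = 11
main_4k = mp_per_sec(3840, 2160, 15)   # ~124.4 MP/s per 4K stream
sub_360 = mp_per_sec(640, 360, 15)     # ~3.5 MP/s per 360p stream

print(f"11 x 4K mainstreams : {cams * main_4k:6.0f} MP/s")   # ~1368, i.e. the ~1,400 figure
print(f"11 x 360p substreams: {cams * sub_360:6.1f} MP/s")   # ~38, what motion detection sees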


(For the main event cam stream) it may be because it's a Reolink, but when I use both streams in one camera feed it gets SUPER laggy on that one, compared to running one main RTSP and one sub RTMP.

And regarding the AI pulling so much CPU: unless you were counting BI's motion detection as "AI", it is 90% CPU for BI and 10% for AI Tool on my PC.

Again, I don't get why Blue Iris doesn't break a sweat on a 4K stream that has no motion sensor enabled, and then as soon as I right-click on that cam and hit "trigger now" it chugs my CPU to hell. What is the "trigger now" button really doing? I'm really curious.


Looking forward to your answer. It's great to chat with people who really care about the surveillance world ;)
 
Yeah, I don't know if I would trust the AI in the Reolinks either LOL - but just curious: if the Reolinks have AI, then why not use that to trigger Blue Iris instead of using AI Tools/DeepStack? My experience, as well as others', shows that camera AI is superior to DeepStack. DeepStack needs BI to take a picture, send it to DeepStack, analyze it, and return a response to BI, whereas the camera AI uses the video stream and can trigger BI directly. Using camera AI does not involve any additional CPU usage on the computer and actually brings down Blue Iris CPU usage, since Blue Iris motion detection is not being used.

Did you see this from one of the threads I linked and my own personal experience - the true test... I have found the AI of the Dahua cameras to work even in a freakin' blizzard... Imagine how much the CPU would be maxing out sending all the snow pictures for analysis to DeepStack LOL. My non-AI cams in BI were triggering all night. This picture was run through DeepStack (without the IVS or red lines on it) and it failed to recognize a person in the picture, but the camera AI did. The only triggers my AI cameras got were from humans or cars, and they do so with a lot less CPU than sending pics to DeepStack. This pic says it all, and the video had the red box over it even in a complete white-out on the screen:


1613268961041.png


Now look at your MP/s and how much it dropped when you added the substreams - you went from 1400MP/s down to 72MP/s.

You are mistaken about how BI uses the mainstream, substream, and clones, and then how to send to AI Tools. Substreams drop the CPU usage by only processing the substream unless you open up a single camera on the screen. So under your old setup, BI was pulling the mainstream the entire time, even when you were not motion-triggering those feeds. That is why you were over 1,400 MP/s. When you added the substreams, it dropped to 72 MP/s. Now when you select one camera, the load will go up accordingly for that one camera as it switches to the mainstream. When using the substream option, motion is detected from the substream and not the mainstream, so your setup created 20 times the processing usage it needed. Hmm... maybe that is why it spazzes out...

You also are not cloning properly, as there shouldn't be a bitrate showing for the cloned camera. You need to designate one as the Clone Master. Then the clone will have an * by it and will not show a bitrate. Here is what it should look like - see how my clones are not showing a bitrate:

1614138572920.png

So the clone does not pull the video stream twice, and thus the bitrate is only pulled once. Motion detection is triggered from the substream, so when you set up a clone, the motion detection comes from the substream, and the clone is set up to take a picture from the mainstream, which it then sends to AI Tools; if it matches your criteria, it then triggers the Clone Master (original camera) to trigger and record the mainstream. You have added a lot more complication with the setup you have, and it is causing A LOT more CPU usage in this inefficient arrangement.

Now I will show you what your cameras are doing. I suspect you have your cameras set to 15 FPS, correct? Look at your FPS and i-frame (key) rates:

1614138658777.png

Even though you have set them to 15 FPS, look at what the cameras are doing - some dropped down to 3 FPS, but none are at 15 FPS. Now look at your key rate - that is the iframes. Blue Iris works best when the FPS and the iframes match. This is a ratio, so it should be 1 if the iframe rate matches the FPS. Your iframes not matching (which you cannot fix or change with a Reolink) is why they miss motion in Blue Iris and why the video above missed that vehicle moving into the garage. Read up on iframes and how they cause motion to be missed. This is partly why your computer spazzes out with motion and triggers. And you have a lot of these cameras, and I have cited many threads showing the issues people have with this manufacturer and Blue Iris.

The Blue Iris developer has indicated that for best reliability the substream frame rate should be equal to the mainstream frame rate, and yours do not do that, and there is nothing you can do about it with these cameras... And the iframe rate should equal the FPS (something these cameras do not allow you to set), but at worst case be no more than double it. Your key rate of 0.24 means that the iframe interval is over 4 times the FPS, and that is why motion is a disaster with these cameras and Blue Iris... A value of 0.5 or less is considered insufficient to use this feature... and many would argue that probably anything less than 1 is an issue when sending to a third party for analysis...
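To put numbers on that key ratio (this is my understanding of the status column; the iframe interval values below are just examples, not readings from any specific camera):
Code:
# Putting numbers on the "key" column: roughly FPS divided by the iframe interval
# (in frames), which reads 1.00 when the interval matches the FPS (one keyframe
# per second). Interval values below are illustrative, not camera readings.
def key_ratio(fps, iframe_interval_frames):
    return fps / iframe_interval_frames

fps = 15
print(key_ratio(fps, 15))   # 1.00 -> ideal, a keyframe every second
print(key_ratio(fps, 30))   # 0.50 -> keyframe every 2 s, the usual worst-case limit
print(key_ratio(fps, 62))   # ~0.24 -> keyframe only every ~4 s, like the status above

interval = fps / 0.24                      # work backwards from the 0.24 in the screenshot
print(f"~{interval:.0f} frames = {interval / fps:.1f} s between keyframes")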

CPU usage goes up during motion triggering, and I suspect running at 1,400 MP/s is what caused the CPU to peg out with a motion trigger. Does it max out now that you have the MP/s down below 80? If so, then you have another issue going on for Blue Iris to be pegging out at 90% during motion, and I bet the cameras are contributing to it.

You said earlier that you followed every optimization called out in the wiki, and yet we see that the most important one (the substreams) was not followed, so is there anything else in there you didn't follow, didn't understand, or misapplied? If everything is correct there, then it is the cameras...

Or you are trying to do too many cameras with third-party AI - DeepStack has a queuing process to try to help keep CPU usage down, so you may very well see Blue Iris CPU go down by using the substream optimization but then AI Tools usage go up due to the freed capacity, and the end result is still a pegged computer.

Now compare yours above to mine, with cameras that follow industry standards and allow you to actually set parameters without manipulating them. You will see that my FPS matches what I set in the camera, and the 1.00 key means the iframe rate matches:

1614139197822.png

I hope that we can get you working with these cameras and Blue Iris, but as I mentioned, given the issues other members have cited, which are well documented here, at some point you have to either accept that this is the way it is and live with it, or get rid of these cameras and get ones that follow industry standards... Maybe you can get by with one camera not following industry standards, but with 10-11 like this it is a recipe for the issues you are experiencing and will continue to experience, even if you think it is working fine... I suspect triggered motion events will be missed...

Blue Iris is great and works with probably more camera brands than most VMS programs, but there are brands that don't work well or at all - Ring, Arlo, Nest, and some Zmodo cams use proprietary systems and cannot be used with Blue Iris, and for a lot of people Reolink doesn't work well either.

BTW - has that Hookup guy you followed to set all this up been helping you out with the problems you are having? :p
 
@wittaj, I would like to test something, and I need another camera brand to do it, so I will need your help.

Before that, I did a test comparing the different ways of having the camera feed Blue Iris.

1st test:
I set up the r125hd with RTSP main 4K and sub 360p (motion sensor OFF)
I set up the r125 with RTSP main 4K and sub 360p (motion sensor ON)
When I hit "trigger now" on the r125hd, I get a lot of skipped frames throughout the 2-minute clip, and low CPU utilization with marginal or no increase.

2nd test:
I set up the r125hd with RTSP main only (motion sensor OFF)
I set up the r125 with RTSP main & sub (motion sensor ON)
When I hit "trigger now" on the r125hd, I get basically no frames at all for 2 minutes, and CPU stays low.



3rd test:
I set up the r125hd with the RTSP 4K main feed only (motion sensor OFF)
I set up the r125 with the RTMP 360p feed only (motion sensor ON)

When I hit "trigger now" on the r125hd, I get a perfect stream, but higher CPU utilization (around 13% per 4K stream triggered).
That is, until I run out of CPU, and then either the CPU can't tell the disk to write or it can't pull the feed, and I get 0.5 FPS in the recording (but the sound is still recorded mostly correctly).


all setup.png


So I guess, with all the problems with Reolink's RTSP feed, that might be what causes the issue when using both RTSP streams. But because I get barely any frames in the 1st and 2nd tests, I can't say whether I would have the same high-CPU problem with another brand. So I would really appreciate it if you could do the following test for me (in a timely manner - I have 5 days left to claim the 30-day money-back guarantee and go through the mess of returning them).

Can you run the test with at least four 4K cameras, up to 11 if you want?
Set up the camera settings like the following:
"r125hd"
video tab: having both stream (RTSP main & sub)
trigger tab: motion sensor off & end trigger unless re-triggered within 120 sec
record tab: video "continuous"
alerts: fire when "never"

"r125"
video tab: having both stream (RTSP main & sub)
trigger tab: motion sensor ON & end trigger unless re-triggered within 120 sec
record tab: video "continuous"
alerts: fire when "never"

Set up the other cameras the same way, then hit "trigger now" on all the HD feeds, watch the CPU utilization, and tell me if it goes up and by how much. Also make sure that the streams are recorded without skipped frames (I don't care about live view at all, as I will never watch live).
Can you tell me the CPU model and RAM in your system, please?


That way I will have apples-to-apples settings to compare two different brands and know what to expect from a proper IP cam that follows standards.

If the CPU utilization doesn't go up with all those triggers, then I'll know the Reolinks are doing something obscure when I hit "trigger now" in Blue Iris.


For the price, I think the Annke C800 would be a good replacement for the Reolink RLC-820A (I saw that it has an I-frame interval setting in its menu, and they are just rebranded Hikvisions). Let me know your opinion about this camera, but the discussion about the C800 can wait; I really need the results of the test to make an informed decision about returning the Reolinks or not. I guess it will not be cheap to ship 11 cameras back to Reolink, so I have to be sure what I'm getting before doing so.

I thank you in advance for your time :clap:. It's a big investment and I don't want to be stuck in this situation as the clock runs down.

PS: don't forget to save and export your config before changing your camera feeds for the test; it will save you time getting back to your setup afterward ;)
 
First, don't chase megapixels. Chase 4K (8MP) with problematic cameras, and that many of them, and you will have issues.

At night, the 1080p (2MP) cams will be your better bet. Just ask my neighbors, whose 4K cameras didn't provide the money shot to get their stolen belongings back from a thief in the middle of the night, yet my older 2MP camera did capture the money shot that ID'd the thief for the police to find and arrest - and fortunately he still had all the stolen stuff. The perp didn't even come to my house; he walked past on the sidewalk 80 feet away, and my 2MP varifocal, zoomed in on a point on the sidewalk, got the money shot for the police.

In fact, my system was the only one that gave them useful information. Not even my other neighbor's $1,300 4K Lorex system from Costco provided useful info - the cams just didn't cut it at night. His system wasn't even a year old, and after that event he started replacing cameras with ones purchased from @EMPIRETECANDY on this site, based on my recommendation and seeing my results - fortunately those cams work with the Lorex NVR. He is still shocked that a 2MP camera performs better than his 4K cameras... It is all about the amount of light needed and getting the right camera for the right location. I have 33,000 lumens radiating off my house and that isn't enough for the cameras to stay in color, because the sensors in these cameras are so small - several of my cams I force into color at night just to get those details, but very few have enough light to truly run in color when the camera's optics decide whether to run B/W or color.

It is also about taking the cameras off auto settings like shutter. 4K can look great even at night with auto shutter - but that is a non-moving image that looks nice. Once you set the shutter manually so that you do not get ghosting or blur, folks see how much 4K really struggles at night.

The consumer and prosumer lines of products are just not there yet to realistically put 8MP on a camera. It is simple: do not buy a 4MP camera with anything other than a 1/1.8" sensor or larger. Do not buy a 2MP camera with anything other than a 1/2.8" sensor. You have cameras that stuff 8MP onto a 1/2.5" sensor. Not good, and then the camera needs several times the light compared to the lower-resolution sensor. Simple physics.
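Here is a rough sketch of that light-per-pixel point. The sensor widths are approximate optical-format ("type") figures, not exact die measurements, so the ratios come out around 3-4x rather than an exact number, but the point stands:
Code:
# Rough illustration of the light-per-pixel point above. Sensor widths are
# approximate optical-format ("type") figures, so the ratios are ballpark only.
SENSOR_WIDTH_MM = {'1/2.5"': 5.76, '1/2.8"': 5.37, '1/1.8"': 7.18}

def pixel_pitch_um(sensor, h_pixels):
    """Approximate pixel pitch in micrometres: sensor width / horizontal pixels."""
    return SENSOR_WIDTH_MM[sensor] / h_pixels * 1000

p_8mp = pixel_pitch_um('1/2.5"', 3840)   # 8MP (4K) on a small sensor: ~1.5 um
p_2mp = pixel_pitch_um('1/2.8"', 1920)   # 2MP on a 1/2.8" sensor:    ~2.8 um
p_4mp = pixel_pitch_um('1/1.8"', 2688)   # 4MP on a 1/1.8" sensor:    ~2.7 um

# Light gathered per pixel scales roughly with pixel area (pitch squared).
print(f"2MP pixel collects ~{(p_2mp / p_8mp) ** 2:.1f}x the light of the crammed 8MP pixel")
print(f"4MP pixel collects ~{(p_4mp / p_8mp) ** 2:.1f}x the light of the crammed 8MP pixel")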

So with that said, I do not run 4K cameras. I have some 4MP cameras on the 1/1.8" sensor and a lot of 2MP cameras on the 1/2.8" sensor. I can tell you how my system responds to your scenarios, though, because up until around May 2020 the substream optimization did not exist:

Scenario 1 - using both a main and substream on the same camera - I hit the camera trigger and see very little uptick in CPU, since it is the substream that is triggering the motion. I do not see an uptick unless I open one of the cameras to full screen and it goes to mainstream resolution. With your cameras under this scenario, look at the camera status and you will see your mainstream showing around 3 FPS while the substream is closer to 15 FPS - this is why you see skipping of frames: the FPS and iframes are off, but Blue Iris is trying to accommodate them.

Scenario 2 - using only the mainstream - CPU usage was higher because it was using the mainstream for everything. CPU usage would tick up due to motion being pulled from the mainstream. With your cameras under this scenario, you are seeing lots of missing frames when triggered because the camera is sending low FPS and iframes, so Blue Iris doesn't know what to do with it; and since you do not have the mainstream set up to record, triggering the mainstream is going to have that effect.

I am running 27 cameras (plus 12 clones) on an i7-4770 at under 300 MP/s and 10% Blue Iris CPU, using every optimization tip in the wiki (and I could not run that many cameras until the substream option became available - but MP, bitrate, and FPS play significantly into that equation as well). I run a third-party AI application for two cameras, and that adds 15% to the CPU usage during motion. On a calm night with no motion my CPU sits below 20%. During a blizzard, when all but the cameras with AI built in were being triggered all night, my CPU was running at 60% due to almost every camera being triggered.
 
So with that said, I do not run 4K cameras. I have some 4MP cameras on the 1/1.8" sensor and a lot of 2MP cameras on the 1/2.8" sensor. I can tell you how my system responds to your scenarios, though, because up until around May 2020 the substream optimization did not exist:

Scenario 1 - using both a main and substream on the same camera - I hit the camera trigger and see very little uptick in CPU, since it is the substream that is triggering the motion. I do not see an uptick unless I open one of the cameras to full screen and it goes to mainstream resolution. With your cameras under this scenario, look at the camera status and you will see your mainstream showing around 3 FPS while the substream is closer to 15 FPS - this is why you see skipping of frames: the FPS and iframes are off, but Blue Iris is trying to accommodate them.

Scenario 2 - using only the mainstream - CPU usage was higher because it was using the mainstream for everything. CPU usage would tick up due to motion being pulled from the mainstream. With your cameras under this scenario, you are seeing lots of missing frames when triggered because the camera is sending low FPS and iframes, so Blue Iris doesn't know what to do with it; and since you do not have the mainstream set up to record, triggering the mainstream is going to have that effect.

I am running 27 cameras (plus 12 clones) on an i7-4770 at under 300 MP/s and 10% Blue Iris CPU, using every optimization tip in the wiki (and I could not run that many cameras until the substream option became available - but MP, bitrate, and FPS play significantly into that equation as well). I run a third-party AI application for two cameras, and that adds 15% to the CPU usage during motion. On a calm night with no motion my CPU sits below 20%. During a blizzard, when all but the cameras with AI built in were being triggered all night, my CPU was running at 60% due to almost every camera being triggered.

Thanks for your answer. So in the case of scenario 1, which is the proper way to do it since May 2020, I should not have an increase in CPU just BECAUSE of the action of triggering a camera. I get that motion and AI will take CPU; that's normal and I can live with it. If I understand correctly, then with standard RTSP cameras using this setup I should not have any issue in terms of CPU or missing frames in the recordings.

I guess I will email Reolink today to start the return process... I hope they will honor the "30-Day Money Back Guarantee".
 
That would be correct - but since you are buying new cameras anyway, simply purchase cameras with AI processing already built in, as they do not cost that much more, and then you won't even need the third-party tool...
 
That would be correct - but since you are buying new cameras anyway, simply purchase cameras with AI processing already built in, as they do not cost that much more, and then you won't even need the third-party tool...

Do you have some in mind?

I am planning in the future to start to mess with MQTT to send alerts and also receive information from Home Assistant about whether the owner is present, so I can play with activating and deactivating the trigger events when I'm working in the garage. Will that work the same way if it's the camera that does the AI and triggers itself?
 
Here is a good standard welcome post that another member uses; it provides links to reviews and the good/bad of several of the most-used cameras.


The OEM Dahua cameras that @EMPIRETECANDY, a member here, sells directly and on Amazon are great cameras, as are the OEM Hikvision and other Hikvisions being sold. The 5442 series of Dahua cameras is one of the best matches of MP to sensor size right now, and Hik also has a comparable camera. Those two brands will be the ones you see most people here talk about, but keep in mind that they are the manufacturers for lots of other brands. For example, Dahua makes the cameras for Lorex, Amcrest, QSee, and a host of others. Hik does the same. You mentioned an Annke camera - many of them are made by Hikvision. So at the end of the day it comes down to personal choice. Some prefer the Dahua configuration page while others like the Hik page better, and vice versa. But at the very least get the models that folks here have tested and have shown the good and bad of. Both companies also make lesser-quality cameras to cater to that crowd.

At the very least, I, as well as others, recommend getting one varifocal camera and trying it at the different locations where you want to place a camera. After chasing MP and wanting 4K cameras, the next biggest mistake people make is buying 2.8mm fixed-lens cameras and using them in the wrong locations or situations. They have a place as overview cameras, to see a wide area and be able to tell that something happened, but you will not be able to ID someone 50 feet away. The 2.8mm fixed lens works well if the person you want to identify is within 10-15 feet of the camera.

You need to pick the right camera for the location you are trying to capture images of. A 2.8mm lens is the wrong camera for identifying people 50 feet away. A 32mm lens is the wrong camera for trying to ID someone 5 feet away. Both cameras are great, but used in the wrong location they will not capture what you want to identify.

Take a look at this chart - to identify someone with the popular 2.8mm lens, they would have to be within 13 feet of the camera.

1604638118196.png
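If you want to see roughly where a number like 13 feet comes from, here is a back-of-the-envelope sketch. The sensor width is an approximate 1/2.8"-type figure and the ~80 pixels-per-foot identification threshold is a common rule of thumb, not a hard spec:
Code:
import math

# Back-of-the-envelope version of the chart above. Sensor width is an approximate
# 1/2.8"-type figure and 80 px/ft is a rule-of-thumb ID threshold, not a hard spec.
SENSOR_WIDTH_MM = 5.37     # approx. active width of a 1/2.8" type sensor
H_PIXELS = 1920            # horizontal resolution of a 2MP stream
ID_THRESHOLD_PPF = 80      # pixels per foot often quoted for facial identification

def pixels_per_foot(focal_mm, distance_ft):
    hfov = 2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_mm))    # horizontal field of view
    scene_width_ft = 2 * distance_ft * math.tan(hfov / 2)     # scene width at that distance
    return H_PIXELS / scene_width_ft

for focal in (2.8, 6.0, 12.0):
    d = 1.0
    while pixels_per_foot(focal, d) >= ID_THRESHOLD_PPF:      # walk out until density drops
        d += 0.5
    print(f"{focal:>4.1f} mm lens: can ID out to roughly {d - 0.5:.1f} ft")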


My neighbor was bragging to me about how he only needed his 4 cams to see his entire property, the street, and his whole backyard. His car was sitting in the driveway, practically touching the garage door, and his video quality was useless for IDing the perp not even 10 feet away (mainly because he also had cameras that provided zero ability to dial them in, so the perp was a blurry motion mess). They provide a great overview, but that is it - unless it is a confined area like a hallway, or the front door, where you want to identify someone who knocks, not someone walking in the street.

Yes, MQTT will work with Blue Iris. But there are other things you can do as well. Blue Iris has a geofence option, so you could have it disable certain cameras when you are home. Or have it disable trigger alerts. Or trigger once and then not retrigger for a certain amount of time while you are in the garage. Or whatever option you choose. You can also use home automation to control it or do other things. A lot of flexibility. In many instances you can do a search and find someone who has already figured out what you are trying to do and explains in a thread how to set it up. The AI Tools thread that you used is one such thread. There are also threads on reading license plates, home automation, and lots of other things.
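As a rough sketch of the MQTT idea: listen to a Home Assistant presence topic and flip Blue Iris profiles through the BI web-server admin URL. The topic name, profile numbers, addresses, and credentials are placeholders, and the profile-switch admin URL is my recollection of the documented BI web commands, so double-check the BI help file. This is written against paho-mqtt 1.x-style callbacks:
Code:
# Sketch: subscribe to a (hypothetical) Home Assistant presence topic and switch
# Blue Iris profiles via the BI web-server admin URL. Topic, profiles, addresses
# and credentials are placeholders; verify the admin commands in the BI help file.
import requests
import paho.mqtt.client as mqtt

MQTT_HOST = "192.168.1.10"
PRESENCE_TOPIC = "homeassistant/presence/owner"    # hypothetical topic published by HA
BI_ADMIN = "http://192.168.1.20:81/admin"
BI_USER, BI_PW = "mqttuser", "secret"
PROFILE_HOME, PROFILE_AWAY = 2, 1                  # whatever profiles you define in BI

def set_bi_profile(profile):
    """Switch the active Blue Iris profile via the admin URL."""
    requests.get(f"{BI_ADMIN}?profile={profile}&user={BI_USER}&pw={BI_PW}", timeout=5)

def on_connect(client, userdata, flags, rc):
    client.subscribe(PRESENCE_TOPIC)

def on_message(client, userdata, msg):
    payload = msg.payload.decode().lower()
    set_bi_profile(PROFILE_HOME if payload == "home" else PROFILE_AWAY)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(MQTT_HOST, 1883)
client.loop_forever()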