4+ second delay between motion A>B and DeepStack AI object detection, despite Testing & Tuning showing it as immediate?

bignose2
n3wb | Joined: Nov 11, 2015 | Messages: 21 | Reaction score: 0
Fast Windows 10 PC, 7 cameras, but CPU use very low.
Dahua camera, 30fps. No sub stream set up in BI (I've just disabled it in the camera as well, not tested yet, but since BI has no sub stream configured I can't imagine it's an issue).

Basically I track cars coming into my driveway.
It is a little slow to open the gates; I really can't see why it shouldn't be under 1 second, but it's 5 or 6+ (not bad, just slightly annoying when it could be faster).

Movement A>B, then if the object is a car at 40% confidence (DeepStack), run some ANPR Python code. I don't think that's relevant at this point (it works well anyway), but I appreciate it probably adds a second or two to the overall time.

If I replay the video in Testing & Tuning, I can see the exact point it triggers A>B. I then replay using AI and again can see it detects a car before the trigger and throughout, so to my mind it should alert almost instantly.

Looking at the log, I can see movement A>B, but approx. 4+ seconds later, Object: Car.

Is this just the way it is, or can I do something to improve it?

(log screenshot attached)

Thanks in advance.
 

wittaj
IPCT Contributor | Joined: Apr 28, 2019 | Messages: 25,175 | Reaction score: 49,060 | Location: USA
A "fast" Windows 10 PC is different for everyone. What i# and generation is it?

30FPS using mainstream only is a waste of resources and likely part of the issue. Movies for the big screen are shot at 24FPS - do you really need faster than that?

I noticed that my LPR camera wouldn't trigger in every instance (I knew this because the overview cam triggered and the LPR camera didn't).

While watching it live for an extended period, I saw my license plate camera (which, as you know, is zoomed in tight to the road to read plates) fail to trigger for a big ole yellow school bus, then trigger for a tiny 2-door car a minute later that was driving slower, and then miss the same car coming back 5 minutes later!

For this plate camera I was obviously running a fast shutter to capture plates, but I also had the FPS at 30, thinking that would be better. When I knocked it down to 10 FPS, Blue Iris motion started capturing that bus and the other vehicles it was missing, and triggering faster. I think the motion algorithm for a tight field of view was having difficulty with the faster FPS, as there isn't as much difference comparing frame to frame at 30 FPS as at 10 FPS. A vehicle is in and out of my LPR field of view in under 0.5 seconds, and I now get trigger alerts and capture every plate at 8 FPS (yes, I dropped it even further for longer retention of LPR images).
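The frame-to-frame point above can be shown with a little arithmetic. This is an illustrative sketch only, not Blue Iris internals, and the pixel speed is a made-up figure:

```python
# Illustrative sketch (not Blue Iris internals): motion detection that
# compares consecutive frames sees a smaller change per frame as FPS
# rises, because the object moves less between any two adjacent frames.

def per_frame_shift(speed_px_per_sec, fps):
    """Pixels an object moves between two consecutive frames."""
    return speed_px_per_sec / fps

# A car crossing a tight LPR view at roughly 2000 px/s (hypothetical):
for fps in (30, 10, 8):
    print(f"{fps:>2} FPS -> {per_frame_shift(2000, fps):.0f} px between frames")
```

At 30 FPS the per-frame displacement is a third of what it is at 10 FPS, which is consistent with a difference-based detector finding the change easier to see at the lower frame rate.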


Do you know that CPAI downrezes the photo to run it thru the AI? So you have created inherent lag there just by using the mainstream only. If you ran substreams, everything would perform better.


Keep in mind that the "Analyze with Deepstack/CodeProject" option under "Testing & Tuning" will ALWAYS perform better than live, because it runs after the fact, so it should not be used as an analysis tool to figure out why AI didn't see and trigger on a car or person, or when it triggered. It should only be used to see what AI can find in that clip, like "hmm, I wonder if AI can find a toothbrush", then walking around with a toothbrush and having it identify it. I can run this on a camera not using AI and it will show EVERYTHING in AI's object list that it sees in the clip. This method will show you EVERY item AI searches for.

You need to review the .dat files, as those will show you how CodeProject interpreted it.
 

bignose2
Intel Core i7-4790 @ 3.60 GHz, 78 °C
Haswell, 22 nm
16 GB RAM

Thanks for your reply.

I read quite a lot that substreams were not a good idea, so I discounted this quite early on.

My requirements are not too demanding: it's a single car approaching and having to stop at a gate, so less than 10 mph down to a standstill. LPR is 100%.

Again, 30fps seemed a little excessive, but it seemed to cope well, and I thought that especially in poor light or rain etc. the more frames it got the better.
I know it is a waste. What I actually do is: on movement A>B it saves approx. 22 JPEGs (at 1/3-second intervals); then, if AI confirms a vehicle, it runs my Python code, which goes through those images in order, sending each to the Plate Recognizer SDK. If it recognizes a plate on my whitelist, it opens the gate and sends a Telegram message; if the plate is not known, it continues to loop through until it has the highest LPR % (as long as that % is high enough to be a likely true plate) and then just sends the message "X is at the gate".
This code runs extremely fast; I had multiple timers in it when I was testing.


I do wonder, as you suggest, whether it is actually taking longer for DeepStack AI to confirm it is a vehicle than what I see in Testing & Tuning, but I'm a little surprised. I only have AI on this one camera; I would understand a second or two, but this is often 5 or 6.

What I might try is a custom model so it's only looking for cars; I think I am using the standard one.

However, the image below, from the DAT file, shows the AI "Car" exactly where I expect it to be.

I know the code is fast; I wonder if the start of its execution from BI is delayed. It's tricky to get timer info to compare between BI and my code, but I think it's down to me to investigate.
Oddly, sometimes it is almost instant, so not sure that can be the case, but perhaps if all the other cameras are recording the PC is struggling.
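One way to pin down where the delay starts is to timestamp the script itself and compare against the trigger time in the BI log. A hedged sketch, with the log location as an assumption (a real script might log somewhere under the BI folder):

```python
# Hedged sketch: timestamp the script's own start and end so the gap
# between BI's trigger time (in the BI log) and "script_start" can be
# measured. The log location here is just an example.
import os
import tempfile
import time

LOG = os.path.join(tempfile.gettempdir(), "anpr_timing.log")

def log_stage(stage, log_path=LOG):
    """Append 'epoch_seconds stage' so stages can be diffed afterwards."""
    now = time.time()
    with open(log_path, "a") as f:
        f.write(f"{now:.3f} {stage}\n")
    return now

start = log_stage("script_start")
# ... the real ANPR work would run here ...
done = log_stage("anpr_done")
# done - start is the script's own runtime; the gap between BI's trigger
# timestamp and "script_start" is the launch delay being investigated.
```

If the script's own runtime is under a second but "script_start" lags the BI trigger by several seconds, the delay is upstream of the script.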


(screenshot from the .dat file attached, showing the "Car" detection)
 

wittaj
While I also have a 4th gen, that is not what most here would call a fast computer LOL.

HTF here says substreams are a bad idea? They have been the only way NVRs have been able to survive for years, and it was a game changer when BI implemented them, allowing computers like a 4th gen to even be usable with more than a few cameras.

If substreams were bad, nobody here would be using them.

Running any additional code such as your ANPR adds to the CPU consumption and will slow this down.

With a 4th gen, you need to do everything you can to lower the CPU usage. Doing everything in the optimization wiki is a must on a computer that old.

Like I said, the Testing & Tuning option is not indicative of what the computer will do in real time.

4,000 ms is what I would expect on a 4th gen not running substreams, etc., especially with anything else going on on the computer at that time. Remember, in real time it has to take your mainstream, downrez it, and then wait in the DeepStack queue; all of that adds time.
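As an illustration of how those stages stack up (every number below is a made-up placeholder, not a measurement):

```python
# Illustrative arithmetic only: the numbers are hypothetical placeholders,
# not measurements. The point is that several small stages between the
# motion trigger and the AI result can add up to seconds on old hardware.
stages = {
    "grab and downrez mainstream frame": 0.5,   # seconds (hypothetical)
    "wait in the AI queue": 1.5,
    "AI inference on the CPU": 1.0,
    "hand result back to BI and fire actions": 0.5,
    "launch the external ANPR script": 0.3,
}
total = sum(stages.values())
print(f"hypothetical end-to-end delay: {total:.1f} s")
```

No single stage is slow on its own, but the sum lands in the multi-second range the OP is seeing.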

I run a 4th gen with lots of cameras, all using substreams, and I am sub-500 ms in most instances. But I have done every optimization in the wiki as well.
 

bignose2
Hi,

Thanks for your follow-up. I've been experimenting, now with CodeProject.AI; perhaps I should have stuck with DS to fully test, but it seemed sensible to make the effort if DS is being deprecated.

Anyway, I'm using the substream now (I assume so, as "use main stream if available" is unticked), with the cameras on 15fps as well.

It is so difficult to test properly, as I have to get into the car and drive out and in each time, since I cannot rely on Testing & Tuning (as you suggest). T&T on motion and AI implies (looking at the clock timing) that it was detected and AI acted upon immediately, whereas in reality there is that 4-second delay, at least.

I am going to research and read through again, but I'm thinking out loud a little here.

Double-checked my Python code, and it is fast start to finish, max 1 second. I prefer to use my own code and the Plate Recognizer SDK, so at the moment I'm ignoring CodeProject.AI's LPR ability.

One thing I cannot work out: it is still slow to AI, 3329 ms, and slower still to open the gate, and as I say I know my code is fast, less than 1 second.
Is the AI going through all of my 20-ish images anyway, despite finding "car" very early?

I hoped that if there were no objects to cancel on, it would alert on the first detection... hence my need to read up a little more.
If it always waits until all images have been through AI, that will take some time. During the day I could cut the number of real-time images, as car detection is easy, but at dusk/dark, or when headlights are on, the motion zone A>B can trigger well before the car is in its true position; the shining headlights trigger it, as below, where A>B is actually right next to the bright light.


(two screenshots attached; the first shows A>B triggering right next to the bright headlight)
 

actran
Getting comfortable | Joined: May 8, 2016 | Messages: 806 | Reaction score: 732
@bignose2 Your Intel Core i7-4790 was released in 2014. Sadly, it's pretty slow by today's standards.

You can improve CP.AI detection speed if you:
#1: Switch from the default object detection to a custom model like ipcam-general.
#2: Reduce the number of images from 22 to something lower, like 6.
#3: If you are running CP.AI on your CPU, you can get better speeds with an Nvidia graphics card.

What BI version are you on?

I'm running CP.AI on Nvidia. You can see the performance below: (cp.ai.png screenshot attached)
 