Who uses Dahua AI-capable cameras? Is the AI reliable for triggering events? Pros/cons?

Firstly, I am new to Dahua. I purchased a couple of their 5442-based cameras based on input from this forum. In Blue Iris, I can see how to tune the Triggers/Alerts using the Test Motion Detector and Test With DeepStack features. What settings will capture IVS triggering information so that I can see exactly when IVS is triggering and what it's using to trigger?
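One way to see exactly when IVS fires, independent of BI: the camera streams its raw events over the standard Dahua CGI interface, so you can timestamp them yourself and compare against DeepStack. A minimal sketch of parsing those event lines is below; the URL and event codes are from Dahua's HTTP API docs and may vary by firmware, so treat them as assumptions to verify against your camera.

```python
# Sketch: timestamping raw Dahua IVS events to compare against BI/DeepStack.
# The camera streams events from its CGI interface (standard Dahua HTTP API;
# adjust host, credentials, and codes for your firmware):
#   http://<camera-ip>/cgi-bin/eventManager.cgi?action=attach&codes=[CrossLineDetection,CrossRegionDetection]
# Each event arrives as a line like "Code=CrossLineDetection;action=Start;index=0".
import datetime

def parse_dahua_event(line: str) -> dict:
    """Split a Dahua event line into its key=value fields."""
    return dict(part.split("=", 1) for part in line.strip().split(";"))

def log_event(line: str) -> str:
    """Prefix an event with a local timestamp so IVS firing times can be logged."""
    fields = parse_dahua_event(line)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    return f"{stamp} {fields['Code']} {fields['action']}"

# Example payload as it appears in the camera's event stream:
sample = "Code=CrossLineDetection;action=Start;index=0"
print(log_event(sample))
```

Pointing a script like this at the event stream while walking the scene yourself would show, to the second, how far into view you get before each tripwire or intrusion rule fires.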

The reason I ask is that, at the far range of the camera's view, I can tune DeepStack to trigger when a person first steps into view. So far, IVS seems to require the person to be considerably closer to the camera; the difference is almost 10 seconds. I want to tune IVS to trigger more quickly.
Google is your friend.
 
Just tonight I discovered a trigger was missed completely. Attached are the IVS settings along with a short video of the guy walking right through the Tripwire and Intrusion settings.

This is not the scenario I was trying to solve in my earlier message; that was aimed at people walking the other direction, from the far end of the alley toward the camera. But since this was the first known completely missed trigger, I wanted to post it and ask for thoughts. In BI with DeepStack, it would have triggered and alerted within the first two steps of coming into view.

View attachment 96803, View attachment 96804, View attachment 96805, View attachment 96807

Be sure you have set this up:
IVS/Global Setup - Dahua Wiki
 
You need to make the IVS rules larger and think in three dimensions.

I would extend the zig-zag tripwire and the Intrusion box to the top of the fence on the left.

That person's lower body was the only thing within the IVS triggers, and his dark pants matched the dark ground, but his head is at the top of the fence, so he was missed. Extend the zones to where his head is and you should start catching more.

View attachment 96839

Also, are you on default settings or have you dialed this in to your area? Did you do the Global Config? It seems like there should be more contrast, and I would expect the IR to be brighter at that close proximity.


I had configured IVS Global, but went back today and completely reset it; see attached. The verticals are 72 in (1.83 m) and the horizontal is 18 ft (5.5 m). One possible mistake: the horizontal may need to be the same length as the verticals; the instructions were not clear on that point. At these settings, the calibration testing produces wildly inaccurate results, so I must have made a mistake somewhere.

As for the IR, there may be a couple of factors in play. First, there is an illuminator mounted on the beam, intended to flood the area about 10 to 15 feet from the camera. There are also a few solar motion LED lights which did come on, but they are not very bright. In terms of camera settings, I followed your guidance from other threads, as you can see in the two screenshots attached.

Comments/suggestions welcomed.


ExpSettings02.JPG, ExpSettings01.JPG, IVSGlobal.JPG
 
Ok those look pretty good.

I have only done the global config with the same size object both vertical and horizontal. I used a yardstick. I wouldn't think that would make a difference since you set the length of the object, but who knows for sure until it is tested.

I still think the IVS rules are not large enough to trigger and you need to get the whole person within the IVS zone. If that doesn't work, then try turning Smart IR off, change IR to manual, and set near and far to 100 and see if that catches them. If so, we can back the IR down to eliminate a hotspot.
 

Couple of things:

In IVS Global, I measured a new length to make the horizontal the same length as the verticals. It made no difference in the wildly inaccurate results I get when doing the Calibration Validation testing. I'll keep playing with it.

I also made the changes you suggested to the IVS Tripwire and Intrusion placements, and added a couple more Tripwires. Attached is a screenshot of the Trigger/Alert I received a short time ago. :)

IVSCat.JPG

And another received just now....

IVSCat02.JPG

The Tripwire and Intrusion settings which alerted are both set to filter for "Human" objects.
 
Sometimes less is more. Try one intrusion box with appears and crosses selected. Sometimes the field of view requires two or the tripwire also. All of mine are one intrusion box except for one camera I am using 3 tripwires because it is being used as a spotter camera for the PTZ so I only want it spotting people going in one direction.

OK, a few false positives are better than completely missing an object LOL.

When using Dahua AI, the suggestion is to start with a min object size of 0,0 (which I think you have?), but if something like a cat or dog is tricking it, add a minimum size larger than the object that is falsely triggering, yet small enough that you still catch humans.
 

I thought about the minimum object size. The problem is, if I use a minimum size slightly larger than the cats at 10 ft out, it will also affect the triggers downrange, and I'll get late triggers on human intrusions coming toward the camera (back to my original post).
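That trade-off can be roughed out with a simple pinhole model: apparent size falls off linearly with distance, so a minimum-size filter big enough to reject a cat at 10 ft can land uncomfortably close to the apparent size of a person at the far end of the alley. The focal length and object sizes below are illustrative assumptions, not measurements from any particular 5442 lens.

```python
# Rough pinhole-model check of the min-object-size trade-off.
# apparent height (px) ~= focal length (px) * real height / distance
FOCAL_PX = 1400  # assumed effective focal length in pixels (illustrative only)

def apparent_height_px(real_height_ft: float, distance_ft: float) -> float:
    """Apparent on-sensor height of an object under a pinhole camera model."""
    return FOCAL_PX * real_height_ft / distance_ft

cat_near = apparent_height_px(0.8, 10)    # ~0.8 ft tall cat at 10 ft
person_far = apparent_height_px(5.8, 60)  # ~5.8 ft tall person at 60 ft

# A min-height filter set just above the nearby cat is already close to the
# distant person's apparent height, which is exactly why late or missed
# human triggers downrange follow from filtering out the cat.
print(f"cat at 10 ft: {cat_near:.0f} px, person at 60 ft: {person_far:.0f} px")
```

With these example numbers, the nearby cat (112 px) and the distant person (about 135 px) are only a few tens of pixels apart, so there is very little safe margin for a single global minimum size.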

Recognizing there is no perfect solution, merely a set of compromises ......

I'll try your suggestion of eliminating the Tripwires and see what happens.
 
I just got a Dahua camera with AI (IPC-HDW3849H-AS-PV). I never made AI a priority when shopping for cameras; I was using DeepStack with really good success. But after reading several members' posts I decided to try it. I made a clone of the camera (to compare BI+DS vs BI+DS+Dahua AI) and changed it over to IVS with Human and Car AI. I turned off Blue Iris's motion detection and set it to use ONVIF triggers. So far I get perfect triggers for cars and people on my street, and DeepStack+BI still processes the images and labels them nicely for me.

Granted, I was getting near-perfect motion detection and AI classification using the latest community combined.pt model (except for my dog, long story). But as close as I have gotten to perfection, heavy rain can cause BI to trigger its ass off, and that many triggers can cause the system to miss real events while chasing raindrops. I could turn off DS in BI if I wanted to; the camera's AI motion and object detection would do all the work and BI would only record in response to the ONVIF triggers. For now I will keep DS and Dahua AI working together, since they are working so well. There is a real synergy with BI+DS+Dahua AI cameras.

And this is all while the camera is sitting on a box. The camera literally came out of the box and was then put on said box. I only updated the firmware; I did not even get to the IVS/Global Setup. The camera just worked, even though it had really horrible angles to deal with. I was shocked that IVS Intrusion/Tripwire AI worked at all on cars racing by on my street. I will be mounting it over my driveway tomorrow for hopefully even better results. If the camera's motion detection AI works as well as others are saying (blizzards, rain, etc.), then all my future cameras will include Dahua AI.

I really love the IPcamtalk community. I have learned so much from it!

Thanks guys!!!
 
Since being introduced to Dahua AI cameras, I have left DeepStack behind. These AI cameras capture 100% of the alerts I want (HUMANS). However, because the AI has to work in two-dimensional space, it will misfire on raindrops hitting the lens that could be construed as a HUMAN 20' distant. It's just how it is until we get 3D holographic AI cameras in the year 2032. That said, DeepStack is wonderful for non-AI cameras; I was getting fantastic HUMAN percentages. However, using DeepStack on too many cameras at once during a snow/rain storm (I have 6 cameras on the front of my house) will start to bog down the CPU and Blue Iris, causing other issues. Hence, Dahua AI was my answer.
 

I thought the same about DS until I added a GTX 1060 to it. Using the general.pt (or even the larger combined) model with DS on high, using 640 sub streams, I have seen detection times dip into 30 ms territory at times; normal is still fast, in the 40s to 50s. I tried running it on my main streams, but found it noticeably slower, and I actually got slightly worse confidence and overall detections. Granted, the difference was small, say 91% compared to 92 or 93. So mainstream is slower and at best equal to, or slightly worse than, substreams. This is because DS drops the resolution of the images it processes to 640x640; mainstreams are slower because DS is busy cutting the images down from whatever resolution to 640.

My GTX 1060 was only being used by DS at about 10 to 15% GPU; DS is far from optimized at using the CUDA cores. So I have BI start 4 instances of DS. With 4 DS servers running, I spaced out which cameras hit which IP/server, so no camera has to wait too long to get time with DS. BI and DS have made big strides in AI this last year. Now BI needs to load-balance which camera goes to which server to really make it hum along!

Is all that enough to get DeepStack up to Dahua AI level? I do not know, but I will soon. I know DS was getting nearly 99+% detections. I have 4 cameras along my house that see the street; if 3 cameras report a truck (or person, etc.) drove by, then I know one camera failed (or the truck turned into a driveway and parked). That makes it easy to check for failures.

A year ago I was so frustrated with DS. I was missing 20% of objects at night running the DS CPU version, and my system was working so hard to run DS. The answer was an expensive (should have been really cheap) old GPU. DS was still not great, missing some at night, so I started running dark.pt alongside the normal DS object model at night and got really close to near-perfect detection. It was MikeLud1's models, which both sped up detection and had better detection rates, that really made me happy with DS. I got rid of the object model that came with DS, and dark, altogether. Still not perfect: rain seems to cause BI problems, and it isn't really heavy rain, it is heavy misting and wind blowing it around that freaks BI out and then causes DS to fail. Also, DS thinks my dog is a bird, sometimes a cat. It seems to be a problem with the model and my dog; I get good detection on the neighborhood dogs, though.
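The "spread cameras across DS instances" step above can be sketched as a static round-robin assignment. In BI this is actually configured per camera in the AI settings rather than in code, and the camera names here are hypothetical; the ports follow DeepStack's convention of one HTTP server per instance.

```python
# Sketch: statically spreading cameras across several DeepStack instances so
# no single server builds up a queue. Each camera would then POST its frames
# to its assigned server's /v1/vision/detection endpoint (the standard
# DeepStack API route).

def assign_cameras(cameras: list[str], servers: list[str]) -> dict[str, str]:
    """Round-robin: camera i is served by server i mod len(servers)."""
    return {cam: servers[i % len(servers)] for i, cam in enumerate(cameras)}

# Hypothetical camera names; four DS instances on consecutive local ports.
cameras = ["front", "driveway", "street_east", "street_west", "porch"]
servers = [f"127.0.0.1:{port}" for port in (5000, 5001, 5002, 5003)]

assignment = assign_cameras(cameras, servers)
# With 5 cameras over 4 servers, the fifth camera wraps back to the first port.
```

The design choice is the same one described in the post: a fixed assignment avoids any coordination overhead, at the cost of not adapting when one server happens to be busy, which is why true load balancing inside BI would be the nicer solution.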

I must thank you and the many others whose posts I have been following over the last year. I have not been very visible in the forums, but I was behind the scenes reading all the posts, soaking up yours and many other members' knowledge.

Sorry for the rambling posts. I get that way when I am tired so please forgive me! LOL
 
I just remind myself that any computerized AI is just that: computerized. I am not ready for Skynet :) So there will be misfires or false alarms here and there, whether Dahua AI, DeepStack, or any other AI for home use.
 

I am impressed with the accuracy of the onboard AI, but with regard to BI recording the triggers, it seems to vary wildly. What I mean is, when you click the image to load the video of the event, BI+DS usually takes you to the beginning of the trigger. With the cam AI, Blue Iris seems to randomly pick where to take you on clicking it: sometimes the car is just coming into view, sometimes it is just leaving the camera's field of view, while other times it shows nothing. I have tried using BI and cam AI alone. I've tried it with BI+DS to tag/label the motion object that was triggered; that at least tells me it was a car with 89%, for example. But again, the clip that BI and DS found sometimes does not show anything, and I have to manually wind it back to see the trigger. It captures the trigger every time, though.

I tried every permutation and combination of IVS with car/person/intrusion/tripwire/appears/enters/etc., but I get the same results. So I then tested SMD with human and motor vehicle: I made motion zones for just the areas I was interested in and told it to look for people/cars in those areas. I get the same thing in Blue Iris. I must be screwing something up configuration-wise in BI. Or does BI just normally act this way with ONVIF triggers in general? BI does not act this way when it controls the triggers. I played with pre-trigger buffers, but I have come to the conclusion the buffers only work with BI motion sensing. I wish it would let me choose to start showing clips, say, 1.2 seconds into the triggered recordings, or something like that. If I had to check for a car that drove by my house in the last 24 hours and had to click on hundreds of AI motion-triggered clips, then manually wind back and forward to find the actual triggered object (car/person), I would lose what little sanity I have left. Now imagine trying to find a trigger event that happened sometime during the last week... OMG.

This setup of BI simply recording AI triggers from the camera is bordering on useless; no one would want to do that. I would rather go buy a Dahua NVR with AI than this, and money is tight for me, as it is for everyone with all the stuff going on in the world today. Hell, I would rather buy a Reolink NVR with AI and Reolink cameras than this... OK, maybe I would not go that far! Haha. I must be missing something. Sure, it captures and records nearly 99.999% of motion object triggers correctly, but conversely it lacks nearly 99.999% of the user-friendliness to go along with it. Sure, I could make multiple clones to help with finding events from the AI camera, but then I am having to make up things to fix this "fix". On a side note, BI+DS last night got all the cars and people also. I was hoping this setup would take me to another level in motion object detection accuracy; instead, it takes me to another level of making band-aids in BI to overcome a shortcoming of this setup (i.e., make a clone and have it do X and Y, etc.). How is this any better than standard BI?

I am running one of the latest BI versions, if not the latest. Maybe being on the bleeding edge of BI technology has finally come back to bite me? Do I need to go back to an older, more stable version? If so, what version are you using? Or am I just missing the bigger picture and worrying about a non-issue? Look, I have done it again... made a long meandering post. I have a massive headache; I need to take two ibuprofen and have a nap. Any ideas or recommendations?
 
This is most likely all user error. I don't use clones and don't have the issues you are having.
There is a learning curve to all of this; make sure you have read the BI help file and checked out this for help.
Most of us here do not have the issues you are experiencing.
Baby steps.
See here:
Blue Iris Support - YouTube
 
I've found that ONVIF AI triggers don't work worth a crap in a clone camera. There is something weird with the way BI handles ONVIF/clones. I have the main instance of my cameras set to ONVIF triggers and use BI motion detection in the clones. Works for me, but YMMV.
 

I have DS working with BI just fine, so I know what BI is supposed to do... but I haven't used external AI ONVIF triggering before, and it is not behaving anywhere close to what I am used to. I asked for advice on what I should try next, not for "works for me!" posts. Obviously it works for most of you, otherwise you wouldn't use it, so that was not helpful in the least.
 

That is how I have it set up also. Actually, I think it was from reading your posts that I set it up that way. They have been doing a lot of DAT file/database changes in the last several version updates. I think I am off to try some older versions to see if it makes any difference.
 
FWIW I migrated some Dahua cams from using BI motion detection + Deepstack to the camera's own IVS. I too have lost the pre-trigger buffer that was present when using BI motion detection + Deepstack. Now, when reviewing footage the alert clip (triggered by ONVIF) starts exactly at the point where the trigger occurs, rather than several seconds before as configured in BI. I haven't spent a great deal of time investigating the cause yet.
 

I think you explained it better than I did. I have a clone of the camera set up with BI+DS, and the clone master does the camera's ONVIF/external motion triggers. The only difference between the two is the trigger source. The BI+DS one works perfectly, pre-trigger buffers and all; the cam-trigger one does not. I will be contacting BI support to see whether this is by design, a bug, etc. I seem to remember reading somewhere from BI support that the pre-trigger buffers only work with BI motion detection; it was a while back, so I could be wrong. I will update with what I find out.
 