IMHO, and YMMV, but in general you want the smarts in your computer, not in your devices. Your devices have been engineered to have tiny little brains so they can be sold at the lowest possible price. This means performance and upgradeability will always be limited.
The brains in a PC can be tailored to your needs and upgraded or replaced as necessary.
Furthermore, if you buy N cameras with AI, you're paying for N brains. But the odds are that not all N brains will need to be active and computing at full tilt at the same time. If you buy N cameras without AI and one computer, then you only need to pay for enough brains to cover the streams that need simultaneous analysis. This is the same math that drives putting the smarts in the cloud, since one giant brain can serve thousands of devices, whether those are cameras, robot vacuums, or whatever. But a lot of people (especially here!) don't trust the cloud, so we like to build our own NVRs out of commodity PCs.
Of course if you have one camera, the economics favor AI in the camera. But as the number of cameras grows, the economics quickly favor buying the brains for the PC.
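To put rough numbers on that math, here's a quick Python sketch. Every price and capacity figure in it (the $40 AI premium per camera, the $200 PC "brain", 8 streams per brain, 4 streams needing simultaneous analysis) is a made-up assumption just to show the shape of the curve, not real market data:

```python
# Rough cost sketch for "brains in the cameras" vs. "brains in the PC".
# Every number below is a made-up assumption for illustration only.
import math

AI_PREMIUM_PER_CAM = 40   # assumed price premium of an AI camera over a dumb one
PC_BRAIN_COST = 200       # assumed cost of one GPU/CPU "brain" upgrade for the PC
STREAMS_PER_BRAIN = 8     # assumed streams one PC brain can analyze at once

def cost_in_camera(n_cams: int) -> int:
    """In-camera AI: you pay for one brain per camera, busy or not."""
    return n_cams * AI_PREMIUM_PER_CAM

def cost_in_pc(n_cams: int, simultaneous: int = 4) -> int:
    """PC AI: you pay only for enough brains to cover streams analyzed at once."""
    busy = min(simultaneous, n_cams)
    return math.ceil(busy / STREAMS_PER_BRAIN) * PC_BRAIN_COST

for n in (1, 4, 12):
    print(f"{n:2d} cams: in-camera ${cost_in_camera(n)}, in-PC ${cost_in_pc(n)}")
```

With these made-up numbers the crossover lands around five cameras: one camera favors in-camera AI, while a dozen cameras strongly favor the shared brain in the PC.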
The other math around here is that, if you think you will have 3-4 cameras, you will soon have 10-12.
I generally agree, but type of camera, use case, and field of view need to be taken into consideration as well.
If someone has a lot of cameras and tries to use DeepStack on every one of them, the computer requirements can start to get extensive and/or notifications can get delayed.
Using the camera's CPU for AI instead of BI motion detection means the BI computer's CPU usage can drop a little and won't spike trying to do AI. YMMV - probably more of an impact on an older machine than a newer one.
Plus some camera series, like the Dahua 5442 series, only come with AI built in, so there are no savings to be had by purchasing a non-AI 5442.
Knock on wood, for my cameras' fields of view, these AI check boxes are spot on in every camera that has them! It has made scrubbing alerts in BI a breeze, because even with how great motion detection is in BI, there are a few situations where I cannot knock out false triggers, especially at night with headlights bouncing off a hill, for example. If I tighten things up to eliminate those, then I miss real triggers. The camera AI doesn't even flinch at the headlight bounce, or at motion lights turning on. Using DeepStack for those cameras would cause CPU/GPU spikes all night.
Again, a lot depends on the location, field of view, and speed of objects at night. Vehicles at night can be problematic for camera AI if the field of view is too tight, because the camera needs time to identify the object, assess whether it is a vehicle, and then trigger, so there are instances where that might be an issue depending on what the camera is looking at. But DeepStack catches all of these, since it is analyzing still images. I have tested this with my cameras as well.
And because I have a few "dumb" cameras without AI lol (so I have to use BI motion detection) and have some overlap between those and the AI cams, I have been able to confirm that every true trigger from those dumb cams was also caught by the AI camera, and every false trigger was correctly ignored. To the point that I cannot see myself buying a new camera without AI. YMMV. Just for redundancy, I still run a few cams with BI motion detection in case an AI camera doesn't pick something up. Plus I record 24/7, so I can always go back.
And even with camera AI, for a few of my cameras, I then supplement that camera with DeepStack in BI.
From my own personal experience - the true test....I have found the AI of the Dahua cameras to work even in a freakin blizzard....imagine how much the CPU would be maxing out sending all those snow pictures to DeepStack for analysis LOL. It is night rain, snow, or insects lit up by infrared that can max out a CPU doing DeepStack analysis on all the cameras.
My non-AI cams in BI were triggering all night. This event was also being run through DeepStack, and it failed to recognize a person in the picture, but the camera AI in my 5442 did. The only triggers my AI cameras produce are from humans or vehicles, and they do it with a lot less CPU than sending pics to DeepStack. This pic says it all, and the video had the red box over it even in a complete whiteout on the screen:
I am using DeepStack in concert with my AI cams for a few situations, and while some of that third-party stuff is cool, like tagging whether it was a dog or a bear, I don't need all that fancy stuff (now for LPR, yes, I am using the tools created in those threads). If my camera triggers BI to tag an alert for human or vehicle, and BI can accomplish what I need by way of a text or email or whatever, that is sufficient for my needs.
As always, YMMV.