5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

060 and the 2060 are in the $200 range
Also curious about this. For example, do you buy a 3060 with 12GB of RAM or a 3060 Ti with 8GB?

RAM usage seems very high with CodeProject SenseAI.

Didn't you see my Task Manager image above, showing a GTX 970 with 4GB using only 1.2GB of VRAM? Memory is only going to be an issue with loads of AI modules and/or custom models loaded. Most BI users are only going to need 1-3 of them running. The 3060 will be fine; only get the Ti if you have a use case (gaming) beyond BI AI.
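For anyone who would rather check this on their own card than take Task Manager's word for it, here is a minimal sketch (assuming an Nvidia card with the standard nvidia-smi tool available on the PATH; nothing here is specific to BI or SenseAI) that prints current VRAM use and GPU load:

```python
# Minimal sketch: query current VRAM use and GPU load via nvidia-smi.
# Assumes an Nvidia card with the nvidia-smi tool available on the PATH.
import subprocess

def gpu_snapshot():
    line = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0]
    name, used, total, util = [field.strip() for field in line.split(",")]
    return name, int(used), int(total), int(util)

if __name__ == "__main__":
    name, used_mib, total_mib, util_pct = gpu_snapshot()
    print(f"{name}: {used_mib} MiB / {total_mib} MiB VRAM, {util_pct}% load")
```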
 
Hey guys
Let's not get carried away regarding GPU power. Presently, many Nvidia GPUs are simply incompatible because the software is in the early stages of development. I suspect this issue will be resolved in due course, and so long as you are running a CUDA-based GPU with enough memory, it will suffice for most modest installations of up to 10 cameras.

That brings us to memory requirements. I'm operating 8 HD cameras, 2 of which are 12 MP, and the GPU is a 2060. Even with significant activity the GPU usage is never more than 10-15 percent. Memory, on the other hand, can reach the 6GB limit of the card when I send high-resolution snaps for analysis, which happens if you choose to use 'switch to HD stream if available'. The development team believe the jump from a baseline of 4GB to 6GB might be a software error; only time will tell.
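To put a rough number on why high-resolution snaps push memory up: a nominal 12 MP frame (assumed here to be 4000x3000, which is an illustration rather than any particular camera's real resolution) is already sizeable once decoded, and far larger again as a float32 tensor, before the model's own buffers are counted. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope estimate of the memory a single decoded frame occupies
# once converted to a float32 RGB tensor. Resolution is an assumed 12 MP frame;
# real VRAM use depends on the model and any resizing the AI server does.
width, height, channels = 4000, 3000, 3

uint8_bytes   = width * height * channels          # raw decoded image
float32_bytes = width * height * channels * 4      # as a float32 tensor

print(f"uint8 frame:   {uint8_bytes / 2**20:.0f} MiB")    # ~34 MiB
print(f"float32 frame: {float32_bytes / 2**20:.0f} MiB")  # ~137 MiB
```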

So, IMHO, unless you have a significant number of cameras and are planning for the future, most CUDA-based GPUs with 320-plus cores (assuming the compatibility issues get ironed out) will probably return fair results.

One thing to bear in mind is that the bigger the GPU, the higher your standby power consumption. My modest 2060 is already constantly gobbling 17 watts and is clearly way too powerful for my present needs, so I will probably swap it for an older 1060 6GB card I have.
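If you want to check the idle draw yourself rather than rely on a wall meter, here is a small sketch along the same lines as above (assuming nvidia-smi is on the PATH; the figure is the driver's own estimate of board power, not a measurement at the wall):

```python
# Sketch: sample the GPU's reported board power once a second for a minute
# and print the average, to compare idle draw between cards.
# Assumes nvidia-smi is on the PATH; the figure is the driver's own estimate.
import subprocess
import time

samples = []
for _ in range(60):
    reading = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0].strip()
    samples.append(float(reading))
    time.sleep(1)

print(f"Average draw over {len(samples)} samples: "
      f"{sum(samples) / len(samples):.1f} W")
```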

Caveats to the above: face recognition may eventually become a reality (read that as reliable) and will most probably consume more GPU resources, and needless to say, all bets are off if you want to play games.

The above are just my rambling thoughts after reading posts from members considering much more powerful GPUs for CodeProject AI.
 

[Attachment: 2060.jpg]
17 watts isn't much in the scheme of things; my 3060 Ti seems to draw around the same, ~19W (I think each of my CCTV cams consumes 12-14 watts). My GPU is about 200 watts when mining ETH. For about a year I was doing both mining and DeepStack, and they happily co-existed.
 
@Village Guy - have you tried changing the fan curve setting so you only fire up the GPU fans at about 60 degrees? Even on a hot day my GPU stays below 60°C without the fan on.
No, I did look for an adjustment setting but couldn't find one, so assumed I would need a third-party app. I installed the Studio driver and the tuning features appeared to be non-existent.

Where can I find the fan setting?
 
MSI Afterburner


Don't worry that it's by MSI; it works for all Nvidia cards (I have it working on EVGA, MSI and ASUS cards) and works great for setting clocks, power and fans. Everyone with an Nvidia card should have it installed.
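Not a substitute for Afterburner's fan curve, but if you just want to confirm the card really does stay under 60°C with the fans off, the temperature and fan speed can be read from the command line too. A sketch, again assuming nvidia-smi is on the PATH (some cards report the fan as N/A when it is stopped):

```python
# Sketch: read current GPU temperature and fan speed with nvidia-smi.
# Assumes nvidia-smi is on the PATH; fan.speed may read "[N/A]" on some cards.
import subprocess

line = subprocess.run(
    ["nvidia-smi", "--query-gpu=temperature.gpu,fan.speed",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()[0]
temp, fan = [field.strip() for field in line.split(",")]
print(f"GPU temperature: {temp} C, fan speed: {fan}%")
```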
 
Being from the school of "more is better", and always anticipating that more horsepower, raw processing as well as memory, will be needed as things advance, I think it's prudent to use a card as powerful as you can afford or find at a reasonable price. "Reasonable price" is a budget-based variable, however. All the Nvidia cards I have fooled with (970, 1060, 2070 and 3070) seem to draw around 15 watts, give or take, at idle. It's only under load that you see differences. Of the 970 and 1060, the 1060 seemed to be the more efficient: it ran at about 130 watts under load while the 970 runs at about 150 watts. Both the 2070 and 3070 run around 200 watts under load, but the 3070 can accomplish more "work", making it more efficient than the 2070.
 
Perfect, thanks for the MSI Afterburner tip.
 
BI 5.6.0.8 with CodeProject.AI v1.5.6-beta: face detection looks promising with my setup.

But is there a way to tell BI to limit face detection to only specific cameras?

For cameras where I do not want face detection, I tried the AI trigger config in the screenshot below.

It doesn't seem to work, because when I look at the CodeProject.AI admin screen, I still see the Face Processing count increasing when those cameras are triggered.

[Screenshot: turn off face detection.png]
 
Assuming "face" is a custom model and you don't want to detect face on that camera, it needs to be commented as "face:0" in the custom models box, not the objects box.
 
@sebastiantombs Good suggestion. Given your advice, I realized my config was not correct. Here is the new config below. I still see the face processing count go up, though...
Face is not a custom model, so you don't need to include it in the custom models box. You just need to enable the facial recognition option within the AI tab in your BI settings. Also, in the "To confirm" box, you need to include the face names that you want to recognize.

[Screenshots: Screen Shot 2022-09-06 at 12.18.42 PM.png, Screen Shot 2022-09-06 at 12.19.25 PM.png]
 
@Vettester I've already enabled face recognition (I agree with the screenshots you shared). I indicated this in my post above. The problem is turning off face recognition on per camera basis.
 
@woolfman72 Hmm....

For a given camera, if I don't want CodeProject.AI to expend resources on face recognition, is "To cancel" intended to solve that?

Or is "to cancel" more about canceling alerting after getting AI predictions.
 
If a face name is not included in the "To confirm" box, facial recognition is not applied to that camera.
Are you really sure? I ask because if I leave the 'To Confirm' box blank with Face Recognition on, BI will post a captured face in the unknown folder if there is no match with a face on file. In addition, my .dat file analysis will report no faces found if it is unable to find one.

Up until now I have assumed that you cannot turn face processing off on a per camera basis.
 
You can turn face processing off on a per camera basis.
[Screenshot: 1662554709968.png]
 
@MikeLud1 That is some great detail. I guess my original question still remains: if I only have "ipcam-general" specified as a custom model, why do I see the Face Processing counter increment in the CodeProject.AI admin screen when this cam is triggered?

Given your explanation, I assume face detection would be skipped in this case, right?
 