5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Mike, there's been lots of correspondence about model accuracy. I'm getting good results with your ipcam-xxx and would like it to be as good as possible. Hence queries:
  • ipcam-combined has both general and animal labels but seems to deliver slightly lower confidence percentages and have longer processing times than when using both models separately in the custom models box. Am I wrong and the differences are actually because they are different objects analyzed on different occasions?
  • ipcam-dark seems to have far higher confidence percentages and shorter processing times when both combined and dark models are running at the same time. Are the dark model images also in the combined model or only in the general model? I have included the dark model in the daytime profile to cover the uncertainty of dusk.
  • All the model .pt file sizes are approx 15 MB. Shouldn't the combined file size be a lot bigger than general + animal if the same images are used?

My "To confirm" box is empty. I notice that if I put "nothing found:0" in the "To cancel" box, it gets rid of all the "nothing found" entries in my confirmed alerts list.
Am I wrong and the differences are actually because they are different objects analyzed on different occasions? The lower confidence percentages occur because more objects are added for the model to detect while the number of AI layers in the model is fixed.

Are the dark model images also in the combined model or only in the general model? At this time they are only in the general model. I am planning to update all the models to include dark images.

Shouldn't the combined file size be a lot bigger than general+animal if the same images are used? The model size is based on the number of AI layers in the model, and this is fixed.
 
Mike, in a camera's AI settings, what would be reasonable figures to choose for
min confidence:
real time images:
analyze one each:

I am pleased to read that dark mode will be extended to ipcam-combined. Currently in my BI profiles I use ipcam-combined during the day and ipcam-general for my night profile setting.
Here in the UK the animals we encounter are foxes, cats, hedgehogs, squirrels, and very occasionally rabbits and dogs; oh yes, I have to add in rats and mice. No horses, cows, bears or raccoons have so far ventured onto our property.

I'll just add another note, sorry but I wouldn't pay for Code Project AI if it became a subscription service.
 
Mike, in a camera's AI settings, what would be reasonable figures to choose for
min confidence:
real time images:
analyze one each:

Below are my settings that I find work well:
[screenshot: camera AI settings]
 
Look who has been added to the CodeProject team
Hey Mike, now that you are part of the CodeProject team I have a suggestion for the application. I have been running the CPU version on 15 cameras since the beginning and have noticed there are times that the CPU on my i7-4790 gets hammered pretty hard. To help alleviate this I moved the CodeProject.AI process to an i7-6700T computer that I use for home automation (Home Assistant). This helped considerably, but it got me thinking about how I could load balance the process between two computers. BI has an option to override the server, but it doesn't work with custom models. Not sure what it would take, but it would be awesome if you could run the CodeProject.AI process on multiple machines with the benefits of custom models.

Here's a before and after graph of the CPU process on my two machines. If I could load balance between the two machines I could probably get the CPU process below 60%.

[screenshot: CPU usage before and after on both machines]
 
You can run custom models on another computer. I use my desktop with a GTX 2060 to process while BI runs in a VM on a separate host, using the ipcam-general model. But it would be cool to load balance; probably a significant amount of work to get that going.
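For anyone wondering what "pointing BI at another computer" looks like under the hood, here is a minimal Python sketch that sends a snapshot to a remote CodeProject.AI server using its custom-model detection endpoint (`/v1/vision/custom/<model>` on the default port 32168). The host address and image path are made-up examples, not values from this thread.

```python
# Sketch: query a CodeProject.AI server running on another machine.
# The IP address and snapshot filename below are hypothetical examples.
import requests

def custom_model_url(host: str, model: str, port: int = 32168) -> str:
    """Build the CodeProject.AI custom-model detection URL."""
    return f"http://{host}:{port}/v1/vision/custom/{model}"

def detect(host: str, model: str, image_path: str, min_confidence: float = 0.4):
    """POST an image to the remote server and return its predictions list."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            custom_model_url(host, model),
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json().get("predictions", [])

# Example usage (assumes a server at 192.168.1.50 with ipcam-general installed):
#   for p in detect("192.168.1.50", "ipcam-general", "snapshot.jpg"):
#       print(p["label"], p["confidence"])
```

This is essentially what BI's "AI server" override does for you; the custom model just has to be installed on whichever machine the request lands on.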
 
Yes, I think the CPU spikes are the reason that a number of us are really waiting for (and hoping it truly happens) Google Coral support. What's not to like about a $60 USB stick that could dramatically speed up detections with less than 5W of power consumption?
 
^^^^ Maybe because the name "Google" is involved?
 
^^^^ Maybe because the name "Google" is involved?
I suspect you meant that to be humorous, as this is one of Google's extremely rare products that truly seems to be as "local" or non-cloud connected as the whole CodeProject AI system. I can't say that I have personally audited the veracity of that (no Wireshark tests yet), but it's worth reading the info available here:
 
Load balancing could be done using a reverse proxy like nginx.
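As a rough sketch of that idea, nginx could sit in front of two CodeProject.AI servers and spread requests between them, with BI pointed at the proxy instead of either backend. The hostnames and ports below are placeholders, not values from this thread, and both backends would need the same custom models installed.

```nginx
# Hypothetical nginx load-balancing sketch for two CodeProject.AI backends.
upstream codeproject_ai {
    least_conn;                    # send each request to the least-busy backend
    server 192.168.1.50:32168;     # desktop with the GPU
    server 192.168.1.51:32168;     # home-automation box
}

server {
    listen 32168;
    location / {
        proxy_pass http://codeproject_ai;
        client_max_body_size 20m;  # camera snapshots can be large
        proxy_read_timeout 60s;
    }
}
```

Because each detection request is stateless (one image in, one JSON response out), no session affinity is needed, which is what makes a simple reverse proxy a plausible fit here.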
 
No, that wasn't in humor. I don't trust anything Google or Amazon related that can get to the internet. Just because it's a USB stick doesn't eliminate the probability of firmware code that reports back to Google. If it could be securely firewalled, maybe, but in the meantime thanks but no thanks.
 
Yep, I'm using custom models on my 2nd machine. I want to be able to use custom models on multiple machines at the same time.
You can send AI requests from BI en masse via the global AI settings tab, but for the purposes of load balancing you can also do so on a per-camera basis for standard or custom models. While it is not dynamic load balancing so to speak, you can send some cameras to one device/VM running AI and other cameras to another device/VM running AI. It is an easy way to leverage multiple GPUs in a single box, in my opinion, be it in a Docker container in Windows or in a VM running the AI Docker container in a hypervisor like Proxmox. You just set up one VM, clone it, pass through the appropriate GPU to allow for GPU acceleration, and you are on your way. The CPU version would work this way as well, I suppose.
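The static "some cameras to one server, others to another" split described above amounts to a simple round-robin assignment. A small sketch, with made-up camera names and server addresses:

```python
# Sketch of the static split described above: assign each Blue Iris camera
# to one of several CodeProject.AI servers. Names and addresses below are
# hypothetical examples, not values from this thread.
from itertools import cycle

def assign_cameras(cameras, servers):
    """Round-robin cameras across AI servers; returns {camera: server}."""
    rotation = cycle(servers)
    return {cam: next(rotation) for cam in cameras}

cams = ["Driveway", "FrontDoor", "Backyard", "Garage"]
ai_servers = ["192.168.1.50:32168", "192.168.1.51:32168"]

plan = assign_cameras(cams, ai_servers)
# Each camera's per-camera "AI server" override in BI would be set to its entry:
for cam, srv in plan.items():
    print(f"{cam} -> http://{srv}")
```

In practice you'd likely group cameras by load (resolution, trigger frequency) rather than strictly alternating, but the per-camera override is the mechanism either way.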

BI global AI settings tab
[screenshot: BI global AI settings tab]

Per camera AI settings tab
[screenshot: per-camera AI settings tab]