5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

My AI times are horrible with yolov5l (1,500-2,000 ms), but it's so much more accurate than ipcam-combined. I'll just continue to use it this way.

To be honest, SenseAI is overall much slower than DeepStack. I was averaging 40 ms with the combined models on DS, and on SenseAI it's about 350 ms. CUDA for both. I'm just too lazy to revert.

What are you running it with? CPU? GPU (if so what kind)? What about the yolov5x, did you try it anyway?
 
You may want to try yolov5s. It lacks the accuracy of the 5l and 5x models, but it's still a leap forward from the models Mike had done. Hopefully he'll find the time to retrain his models with 5l and 5x; that would be a fast, lean, and accurate set of models! (Fingers crossed!)

You can just move the 5s model over and not have to download it.

C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\assets
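A hedged sketch of that copy step. The file name yolov5s.pt and the source location are assumptions (check where your existing copy lives); the destination is the assets path above. It's demonstrated against temporary folders so the sketch runs anywhere:

```python
# Sketch: copy an existing yolov5s weights file into SenseAI's assets folder
# instead of downloading it again. Real destination from the post:
#   C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\assets
# The file name "yolov5s.pt" is an assumption; adjust to your download.
import shutil
import tempfile
from pathlib import Path

def install_model(src: Path, assets_dir: Path) -> Path:
    """Copy a model file into the assets directory and return the new path."""
    assets_dir.mkdir(parents=True, exist_ok=True)
    dest = assets_dir / src.name
    shutil.copy2(src, dest)
    return dest

# Demo against temporary folders so this runs on any machine:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "yolov5s.pt"
    src.write_bytes(b"fake-weights")        # stand-in for the real file
    dest = install_model(src, Path(tmp) / "assets")
    print(dest.name, dest.exists())         # yolov5s.pt True
```

On a real install, point `src` at your downloaded weights and `assets_dir` at the path above, then restart the AI service so it picks the model up.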
 

I'm running GPU with an old P400. It may be that SenseAI is catered toward newer CUDA GPUs? My GPU times seem to be on par with or worse than others' CPU times, but it's worth the tradeoff of not having my CPU pegged at 90%+ all day. I go through 1,000+ processed custom objects per hour, depending on time of day. That would be enormous power usage on CPU.

I did try yolov5x first, and the response time is similar to their chart: it almost doubled from 5l.
 
What GPU do you have, and what model were you using before trying yolov5x? What detection times were you getting then, and what detection times are you getting with yolov5x?

I might try retraining my models based on yolov5m, yolov5l & yolov5x.

Mike,
Could you shed some light on what these different settings for SenseAI do? I get the resolution setting, but what resolution of image is fed to the AI models for low/med/high? I tried high with yolov5x and was getting person detections at 70 to 80% on people inside their cars, behind the glare of their windshield. It was awe-inspiring, but my detection times went to 400 to 600 ms, so I went back to med. If I only had 2 or 3 cams, high would be my go-to choice.
Model size I'm not sure I completely understand; I don't know what changing it does, so I left it alone.

"MODEL_SIZE": "Medium", / small, medium, large, x-large
"RESOLUTION": "medium", / low, medium, high
 
I found this thread this morning and decided to give yolov5x a try. I was using ipcam-general. My times have gone from 50-300 ms to 200-1,500 ms. The times vary a lot and I really have no sense of the average, but it's clear they've increased quite a lot. As far as accuracy goes, I have no sense of that yet.

I am using a GTX 1050 on a 6-core i5 rig with 16 GB of RAM.
 
The 5s models brought my response times back to the status quo, around 350-450 ms. Thanks for the suggestion.
 
As for averages, I base my times on the Blue Iris AI status page, which shows t_ave.
 
Did you try yolov5l? It seems more consistent time-wise (like 60 to 80 ms). I'm using 5l for most of my cameras, and 5x for my security threat zones (front/rear door, my cars). 5l is accurate enough that I actually use it on all my cameras, although the accuracy of 5x is hard to pass up if your system has the horsepower to run it. I have noticed that yolov5x swings a lot for me, like 60 ms to 150 ms or over; most detection times are 80 to 110 ms. I found that turning off the SenseAI built-in model helped a lot, for some reason, with my 5x times and resource use. You might try that if you haven't already.
 

How do you turn off the built-in SenseAI model?
 
I'm also wondering if there's a "more accurate" setting for facial recognition. I noticed there's a Mode=MEDIUM, but I'm not sure what adjusting that actually does.
 

To turn off the built-in model: BI can do it from the AI page, or you can edit the JSON file to turn it off for good by changing the object detection "Activate" setting to false. Your custom object detections should still work. Try just turning it off inside BI first (AI page, uncheck default detection), then stop and restart the AI on that same page. Make sure it actually takes effect, as BI is not fully stable with BI and SenseAI working together.

C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionNet\modulesettings.json

"Modules": {
"ObjectDetection": {
"Activate": false,
"Name": "Object Detection (.NET)",
"Description": "Detects multiple objects of 80 types in an image.",
"FilePath": "ObjectDetectionNet\\ObjectDetectionNet.exe",
"Runtime": "execute",
"Platforms": [ "windows"],
"EnableFlags": [ "VISION-DETECTION" ],
 
I keep wondering... why do I care if it takes 100 ms or 1 sec for the AI to process an alert? Isn't accuracy more important, and how does the system taking a second or two to register an alert affect my security? Am I missing something?
 
I noticed there are face, face lite, and face rec high options. That's my next thing to play with.
 
Time means how long the CPU or GPU is tied up processing. The longer the time, the higher the load. It's really an efficiency metric.
 

You're also going to have a super long queue if you process a lot of images per trigger. I do 20 images on the doorbell and 10 on the street, and sometimes cameras trigger at the same time or in quick succession.
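To put rough numbers on that queue effect, here's a back-of-the-envelope sketch. The per-image times are illustrative examples, not measurements from this thread:

```python
# Rough sketch: how per-image detection time turns into alert latency when
# several triggers queue up. Assumes the AI processes one image at a time;
# the per-image times below are illustrative, not measured.

def backlog_seconds(images_queued: int, ms_per_image: float) -> float:
    """Time to drain a queue of images at a given per-image detection time."""
    return images_queued * ms_per_image / 1000.0

# A doorbell trigger (20 images) plus a street trigger (10 images)
# landing at nearly the same moment:
queued = 20 + 10

fast = backlog_seconds(queued, 100)   # ~100 ms/image
slow = backlog_seconds(queued, 1000)  # ~1 s/image

print(f"fast model: {fast:.1f} s, slow model: {slow:.1f} s")
# fast model: 3.0 s, slow model: 30.0 s
```

So at 100 ms/image the burst clears in about 3 seconds, while at 1 s/image the last alert in the queue is half a minute late.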
 


I already have default object detection unchecked in BI settings, and I don't have BI start/stop the AI due to issues I have had with that function.

(screenshot of BI AI settings attached)
 


I can understand that, I guess. I am using the GPU version, and this is how busy my GPU is:

(screenshot of GPU usage attached)


I have seen it spike as high as 25% briefly on occasion, so I don't feel like I have an efficiency issue that warrants giving up the best AI available.