5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

I've been running 1.5.7-Beta+0002 on my production system (16 cameras) for a little over 24 hours. I replaced all the custom models with the yolov5l.pt model. I have found that this model is considerably slower but highly accurate. I'm seeing process times between 1 and 4 seconds on my i7-4790 (CPU only), with extremely high accuracy.
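
If you want to sanity-check those process times outside of BI, one option is to post a frame straight to the server and time the round trip yourself. Here is a minimal Python sketch, assuming the default port 5000 and the DeepStack-compatible detection endpoint that SenseAI exposes (swap snapshot.jpg for one of your own frames):

import time
import requests

# Time a single detection round trip against the local SenseAI server.
with open("snapshot.jpg", "rb") as f:
    start = time.perf_counter()
    resp = requests.post(
        "http://localhost:5000/v1/vision/detection",
        files={"image": f},
    )
elapsed_ms = (time.perf_counter() - start) * 1000

# The response carries a list of predictions with labels and confidences.
for pred in resp.json().get("predictions", []):
    print(pred["label"], f"{pred['confidence']:.0%}")
print(f"round trip: {elapsed_ms:.0f} ms")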

If you are only using the yolov5l model, give Object Detection (.NET) a try; it should have better accuracy and be faster. To enable it in BI, check off Default object detection; all other settings in BI should be just like when you are using DeepStack's default model.

 
Yikes, I am surprised that you find that to be an acceptable analysis time. With times that slow I would be concerned that only one or two images would be analyzed before the (moving) object would be out of frame.

What settings for image quality/resolution are being passed to the AI?
With the exception of my LPR, most of my cameras are set at 1920×1080, 15 FPS, with a bit rate of 8192. My LPR camera uses 30 FPS during the day and 8 FPS at night. When a vehicle passes by my house (typically between 20 and 40 mph) it triggers a minimum of 4 cameras, sometimes 6. I'm not sure how SenseAI works internally, but from the log files it appears to be queuing the images, so from my observations it doesn't seem like I'm missing anything.
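
That queuing behavior matches a plain producer/consumer pattern. The following is purely an illustration of the idea, not SenseAI's actual code (run_detection below is a hypothetical stand-in for the slow model call):

import queue
import threading
import time

frames = queue.Queue(maxsize=64)  # bounded, so a burst of triggers cannot exhaust memory

def run_detection(frame):
    time.sleep(2)  # stand-in for a 1-4 second CPU-only yolov5l inference

def worker():
    while True:
        frame = frames.get()  # blocks until a triggered frame arrives
        run_detection(frame)
        frames.task_done()

threading.Thread(target=worker, daemon=True).start()

# Triggers arrive faster than inference completes, but no frame is
# dropped until the queue itself fills up.
for i in range(6):
    frames.put(f"frame-{i}")
frames.join()  # wait for the backlog to drain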
 
I stood up a "new" PC for the Beta (1.5.7-Beta+0002) testing by repurposing an old "gaming" PC I have.
Specs:
i5-4500 @ 3.30GHz
16GB RAM
GTX 1050
10 cameras

I am also using the YOLOv5l model on all cameras, along with the delivery model on 1 camera. The BI AI tab says I am averaging 792ms. My testing has also shown YOLOv5l to be very accurate.
 
My production machine is running the latest release, 1.5.6.2:

i5-10400F @ 2.9GHz
16GB RAM
GTX 1050

It has the same BI settings but is on an older version; I exported them to build my test rig. The BI AI tab shows 553,000 analyses at an average of 232ms and 61.51/min.
 
Below is a comparison of Object Detection (.NET) vs yolov5l, both using resolution mode High.

Object Detection (.NET)
# Label Confidence
0 person 83%
1 car 83%
2 truck 66%
3 traffic light 62%
4 person 57%
5 traffic light 55%
6 truck 52%
7 car 51%
8 traffic light 51%
9 truck 50%
10 truck 46%
11 truck 45%
12 car 45%
13 car 38%
14 person 35%
15 truck 32%
16 traffic light 29%
17 traffic light 28%

Object Detection (YOLOv5l)
# Label Confidence
0 person 83%
1 car 83%
2 truck 63%
3 car 59%
4 car 54%
5 traffic light 53%
6 truck 52%
7 traffic light 50%
8 person 49%
9 truck 48%
 
If you are only using the yolov5l model, give Object Detection (.NET) a try; it should have better accuracy and be faster. To enable it in BI, check off Default object detection; all other settings in BI should be just like when you are using DeepStack's default model.



I just tried this, except mine is greyed out on Medium (which should be even faster), and my detection times have gone up a bunch: 4730ms average, up from ~780ms.
 
Now I am not able to turn off any of the CodeAI modules. They keep turning themselves back on.
 
Just got the updated docker version this morning. At first I had a few issues, but after deleting the old "extra" parameters in the docker and resetting the config folder, it is working perfectly. I get between 30 and 40ms now (down from 50ms) using YOLO, so it's getting faster.
 
Just got the updated docker version this morning. At first I had a few issues, but after deleting the old "extra" parameters in the docker and resetting the config folder, it is working perfectly. I get between 30 and 40ms now (down from 50ms) using YOLO, so it's getting faster.
Are you on the (semi-private) beta test list? I am still seeing only 1.5.6.2 being offered "publicly".
 
Mine is currently updating via the docker pull, so for whatever that's worth.

I see a lot of people mentioning "YOLOV5L"

I can't seem to find much on the site about it/using it... can someone point me in the right direction so I can start reading about how to use it?
 
Mine is currently updating via the docker pull, so for whatever that's worth.

I see a lot of people mentioning "YOLOV5L"

I can't seem to find much on the site about it/using it... can someone point me in the right direction so I can start reading about how to use it?

YOLOv5 main GitHub page: https://github.com/ultralytics/yolov5
YOLOv5 releases page: https://github.com/ultralytics/yolov5/releases
Link to YOLOv5l model file:
Download the model file and place it in your custom models folder.
Be sure that you have enabled the custom model folder option and defined the path to your custom models folder correctly in the main AI settings.
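
If you'd rather script the download than click through GitHub, here is a minimal sketch (the v7.0 release tag and the destination folder below are assumptions; point dest at your own custom models path):

import urllib.request

# yolov5l.pt is published as a release asset on the ultralytics/yolov5 repo.
url = "https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt"
dest = r"C:\BlueIris\AI\custom-models\yolov5l.pt"  # hypothetical folder; use yours
urllib.request.urlretrieve(url, dest)
print("saved", dest)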



Then specify the yolov5l model on each camera where you want to use it:

 
Damn, that was fast... thanks!

Being that I'm running SenseAI in a docker... can I place it in my Windows file structure (for BI) and point from there, or do I need to do something with Docker as well?
 
Damn, that was fast... thanks!

Being that I'm running SenseAI in a docker... can I place it in my Windows file structure (for BI) and point from there, or do I need to do something with Docker as well?

I do not run the docker version, so I really am not sure how that setup works.

I am sure that Mike will chime in here, as many people are running docker and custom models.
 
I'm trying YOLOv5n, found in the GitHub repository ... purported to be faster.

Ultralytics YOLOv5 releases: https://github.com/ultralytics/yolov5/releases

YOLOv5n should be fast, but at the cost of being less accurate.

I believe you should run the larger models if your hardware can handle it.

You can easily download all the YOLOv5 models and test them in the CodeAI web interface to see what you think of the results when run against the sample pics.

For example, I tried the YOLOv5x model and found it to be considerably slower than YOLOv5l, and the results were actually less desirable...it kept listing my Toyota Sienna as a train.
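
If you want to script that kind of side-by-side instead of clicking through the web interface, here is a minimal sketch using the ultralytics/yolov5 torch hub entry point (assumes Python with PyTorch installed; the sample image URL is just a stand-in for one of your own frames):

import time
import torch

# Each model is fetched from the ultralytics/yolov5 hub repo on first use.
for name in ("yolov5n", "yolov5l", "yolov5x"):
    model = torch.hub.load("ultralytics/yolov5", name)
    start = time.perf_counter()
    results = model("https://ultralytics.com/images/zidane.jpg")
    print(f"{name}: {(time.perf_counter() - start) * 1000:.0f} ms")
    results.print()  # prints detected labels and counts for the image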
 
Well, I tried a local install of SenseAI, as the docker started to frustrate me.

When attempting to do the custom model config under the BI AI tab, the "Use custom model folder" checkbox is grayed out and I'm unable to check the box - is that anticipated behavior, or do I have a problem?

I cannot check the box or specify the folder path to the custom model folder.