CodeProject.AI Version 2.0

I find Object Detection (YOLOv5 .NET) works the best. You can test it yourself using the Explorer, as in the screenshots below. Just remember you can only have one Object Detection module enabled at a time.

View attachment 166205

View attachment 166206
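If you'd rather script the test than click around in the Explorer, you can also POST an image straight to the server's detection endpoint. This is only a rough sketch assuming a default install (localhost, default port 32168); the exact response field names can vary between versions:

# Minimal sketch: send one image to CodeProject.AI's object-detection endpoint.
# Assumes a default install on localhost:32168; adjust SERVER if you changed the port.
import requests

SERVER = "http://localhost:32168"

with open("test.jpg", "rb") as f:          # any snapshot you want to test with
    resp = requests.post(f"{SERVER}/v1/vision/detection", files={"image": f})

result = resp.json()
print("success:", result.get("success"))
for p in result.get("predictions", []):
    print(p.get("label"), round(p.get("confidence", 0), 2))
print("inference ms:", result.get("inferenceMs"))   # field name may differ by version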
Thanks, it's the same on my PC. The .NET version with DirectML runs 1.5x faster than the 6.2 version with CUDA on the Nvidia GPU. That's odd.
The accuracy of the inferences is similar. The results are obviously not identical, since the order of low-level operations, etc. would have to match exactly to produce the same output, which I highly doubt is possible when one is DirectML-based and the other is CUDA-based.

However, mine is in the 30 inferences/second range while yours is above 110. Do you run the .NET version on the CPU or the GPU? If on the CPU, what kind of CPU is it; if on the GPU, what graphics card do you have?
 
I've been in and out of the house all night testing different settings.
I found that ipcam-combined (custom) is better than ipcam-dark at night!
Also, reducing the snapshot quality to 50% improves the inference time from 90 ms to 50 ms in my case without a reduction in detection accuracy. I'm using the substream.
 
I've been in and out of the house all night testing different settings.
I found that ipcam-combined (custom) is better than ipcam-dark at night!
Also, reducing the snapshot quality to 50% improves the inference time from 90 ms to 50 ms in my case without a reduction in detection accuracy. I'm using the substream.
I had a similar experience; ipcam-combined classified more people in general.
 
1 s / 323 ≈ 3 ms per inference
Mine flattens out at around 24 inferences/sec on pexels-huseyn-kamaladdin-667838.jpg.
That's more than 12 times slower than yours. Could you share your setup and settings, or what you think is the biggest differentiating factor?
I attached my BI settings.
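In case it helps with comparing numbers, here is a rough way I measure that rate outside of the Explorer: loop the same image against the detection endpoint and count. Note this times the whole HTTP round trip, not just the model, and it assumes the default port 32168:

import time
import requests

URL = "http://localhost:32168/v1/vision/detection"   # default CodeProject.AI port
RUNS = 100

with open("pexels-huseyn-kamaladdin-667838.jpg", "rb") as f:
    img = f.read()

start = time.perf_counter()
for _ in range(RUNS):
    requests.post(URL, files={"image": img})
elapsed = time.perf_counter() - start

print(f"{RUNS / elapsed:.1f} inferences/sec, {1000 * elapsed / RUNS:.1f} ms each")
# e.g. 323 inferences/sec works out to roughly 1000 ms / 323 ≈ 3 ms per inference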
 

Attachments

  • 1.png
Mine flattens out at around 24 inferences/sec on pexels-huseyn-kamaladdin-667838.jpg.
That's more than 12 times slower than yours. Could you share your setup and settings, or what you think is the biggest differentiating factor?
I attached my BI settings.
One setting you can change is to uncheck 'Use main stream if available'. This does not improve accuracy; it only slows down detection.

1687784894697.png
 
One setting you can change is to uncheck 'Use main stream if available'. This does not improve accuracy; it only slows down detection.

View attachment 166261
Thank you, I'll try that out. I've set the second-stream bitrates extremely low for now so they also work over slow mobile networks.
I'll have to re-adjust them to a normal range. Some of them are only VGA, but I have some Dahua cameras that generate a 1080p third stream that's very good for this purpose.
I'll also try the speed_test on an i7-6700 + GT 1030 later; I'm curious how many inferences/s they can do. My current setup seems to be sub-optimal somewhere. :/
 
Thank you, I'll try that out. I've set the second-stream bitrates extremely low for now so they also work over slow mobile networks.
I'll have to re-adjust them to a normal range. Some of them are only VGA, but I have some Dahua cameras that generate a 1080p third stream that's very good for this purpose.
I'll also try the speed_test on an i7-6700 + GT 1030 later; I'm curious how many inferences/s they can do. My current setup seems to be sub-optimal somewhere. :/
VGA is the optimal resolution because the AI models are trained on images of no more than 640 x 640.
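To illustrate (this is an assumption about the model-side preprocessing, not about what BI itself does): YOLOv5-type models typically letterbox whatever frame they receive down to 640 x 640 before running detection, so a 1080p frame and a VGA frame end up the same size at the model. Something like:

from PIL import Image

def letterbox_640(path, size=640):
    # Scale so the long side is 640, then pad to a 640x640 square,
    # which is roughly what YOLOv5-style preprocessing does.
    img = Image.open(path).convert("RGB")
    scale = size / max(img.size)
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", (size, size), (114, 114, 114))   # grey padding
    canvas.paste(resized, ((size - resized.width) // 2, (size - resized.height) // 2))
    return canvas

print(letterbox_640("frame_1080p.jpg").size)   # (640, 640) regardless of the source size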
 
One setting you can change is to uncheck 'Use main stream if available'. This does not improve accuracy; it only slows down detection.

View attachment 166261
VGA is the optimal resolution because the AI models are trained on images of no more than 640 x 640.
That's a good point. I would still be happier with 1080p: my cameras are outdoors, covering a considerable area, so people can show up at very different pixel sizes.
Do you think BI resizes each source frame to 640x640 before sending it to the AI server?
 
That's a good point. I would still be happier with 1080p: my cameras are outdoors, covering a considerable area, so people can show up at very different pixel sizes.
Do you think BI resizes each source frame to 640x640 before sending it to the AI server?
I use an image size of 768x432 with 'high definition' ticked.
I also reduce the snapshot JPEG quality to 15% (I tried 50% and 100%; they just used more space and slowed the analysis). I'm always playing around with settings to see if I can make small improvements. 'aiinput' is the folder I use for AITool with CodeProject.

1687795286075.png
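If anyone wants to see what that quality setting changes in practice, re-encoding one snapshot at a few JPEG qualities shows the payload difference (the file name is just an example, and this is only an illustration, not BI's own pipeline):

import io
from PIL import Image

frame = Image.open("snapshot.jpg").convert("RGB")   # one saved BI snapshot as an example
for quality in (100, 50, 15):
    buf = io.BytesIO()
    frame.save(buf, format="JPEG", quality=quality)
    print(f"quality {quality:3d}%: {buf.tell() / 1024:.0f} KB")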
 
One setting you can change is to uncheck 'Use main stream if available'. This does not improve accuracy; it only slows down detection.

View attachment 166261
I use an image size of 768x432 with 'high definition' ticked.
I also reduce the snapshot JPEG quality to 15% (I tried 50% and 100%; they just used more space and slowed the analysis). I'm always playing around with settings to see if I can make small improvements. 'aiinput' is the folder I use for AITool with CodeProject.

View attachment 166275
Thanks. Strange that 'aiinput' is not among the options in my dropdown list.
 
Hello guys!
Recently I switched from DeepStack to CodeProject because DeepStack stopped working, so I'm trying the other AI.

At the moment I don't know why, but I can't get it to run. I'd appreciate it if anyone could give me some help.

My specs:

Blue Iris version 5.7.7.15
CodeProject.AI 2.19 Beta
Intel Core i7-4770K
AMD Radeon RX 570 (I don't know if AMD is compatible, or whether it can only process on the CPU)

I stood in front of the camera and got nothing: 0 alerts, etc.
I also got this error:
12:20:46:Object Detection (YOLOv5 6.2): C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\assets\yolov5m.pt does not exist
12:20:46:Object Detection (YOLOv5 6.2): Unable to create YOLO detector for model yolov5m

Thanks in advance
 

Attachments

  • ipcam1.png
  • ipcam2.png
  • ipcam3.png
Hello guys!
Recently I switched from DeepStack to CodeProject because DeepStack stopped working, so I'm trying the other AI.

At the moment I don't know why, but I can't get it to run. I'd appreciate it if anyone could give me some help.

My specs:

Blue Iris version 5.7.7.15
CodeProject.AI 2.19 Beta
Intel Core i7-4770K
AMD Radeon RX 570 (I don't know if AMD is compatible, or whether it can only process on the CPU)

I stood in front of the camera and got nothing: 0 alerts, etc.
I also got this error:
12:20:46:Object Detection (YOLOv5 6.2): C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\assets\yolov5m.pt does not exist
12:20:46:Object Detection (YOLOv5 6.2): Unable to create YOLO detector for model yolov5m

Thanks in advance
First, you need to choose one; you have both the CPU and the GPU version running. I would try the GPU. Click on the three dots (...) to disable one.
Second, when using custom models you need to uncheck 'Default object detection'.
Third, if you are using the GPU version you need the .onnx custom models for the .NET version:
1687861926319.png

If you use the CPU version, you will need the .pt custom models:
1687862002202.png

Click on the three dots (...) next to 'Use custom models:' to see which models you have installed:
1687862574660.png
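A quick way to double-check which model files you actually have on disk before pointing BI at them (the assets path is from the error message above; the custom-models folder name is an assumption, so adjust it to your install):

from pathlib import Path

MODULE = Path(r"C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo")

for folder in ("assets", "custom-models"):
    path = MODULE / folder
    print(f"\n{folder}:")
    if not path.is_dir():
        print("  (folder not found)")
        continue
    for model in sorted(path.iterdir()):
        if model.suffix in (".pt", ".onnx"):
            print(" ", model.name)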

HTH
 
First, you need to choose one; you have both the CPU and the GPU version running. I would try the GPU. Click on the three dots (...) to disable one.
Second, when using custom models you need to uncheck 'Default object detection'.
Third, if you are using the GPU version you need the .onnx custom models for the .NET version:
View attachment 166359

If you use the CPU version, you will need the .pt custom models:
View attachment 166360

Click on the three dots (...) next to 'Use custom models:' to see which models you have installed:
View attachment 166361

HTH
Thank you so much, man. Tomorrow I will make those changes.

I have those models installed; not 'delivery', but the rest, yes.

Could you tell me if my GPU is compatible?
Thanks again for sharing.
 
Thank you so much, man. Tomorrow I will make those changes.

I have those models installed; not 'delivery', but the rest, yes.

Could you tell me if my GPU is compatible?
Thanks again for sharing.
From what I understand, only Nvidia cards are supported, but give it a try anyway since you already have the card. I chose Nvidia specifically for DeepStack and now use CP.
Since we are using CUDA 11.7+ (which has support for compute capability 3.7 and above), we can only support Nvidia CUDA cards that are equal to or better than a GK210 or Tesla K80 card. Please refer to this table of supported cards to determine if your card has compute capability 3.7 or above.
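If you want to check that compute-capability requirement on whatever card you end up with, PyTorch can report it directly (assumes a working PyTorch + CUDA install; on an AMD card like the RX 570 it will simply say no CUDA device):

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(torch.cuda.get_device_name(0), f"- compute capability {major}.{minor}")
    print("meets the 3.7+ requirement:", (major, minor) >= (3, 7))
else:
    print("No CUDA-capable (Nvidia) GPU detected")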

 