5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here: https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

main here...and the yolo custom models folder
 

Attachments: main.png, yolo.png
So far all looks good; post the below screenshot.
One thing that I do not think should be an issue: the Object Detection (YOLOv5 .NET) custom model folder only needs .onnx extension files, and the Object Detection (YOLOv5 6.2) custom model folder only needs .pt extension files.

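The folder rule above can be sanity-checked with a short script. This is a hedged sketch: the install paths below are the default Windows locations for this generation of CodeProject.AI and may differ on your machine.

```python
from pathlib import Path

def misplaced(filenames, wanted_ext):
    """Return filenames whose extension doesn't match the module's format."""
    return [f for f in filenames if not f.lower().endswith(wanted_ext)]

# Default Windows install paths (an assumption; adjust to your install).
# The .NET module loads ONNX models; the Python YOLOv5 6.2 module loads
# PyTorch .pt models.
FOLDERS = {
    r"C:\Program Files\CodeProject.AI Server\AnalysisLayer\ObjectDetectionNet\custom-models": ".onnx",
    r"C:\Program Files\CodeProject.AI Server\AnalysisLayer\ObjectDetectionYolo\custom-models": ".pt",
}

for folder, ext in FOLDERS.items():
    p = Path(folder)
    if not p.is_dir():
        print(f"not found: {folder}")
        continue
    names = [f.name for f in p.iterdir() if f.is_file()]
    for bad in misplaced(names, ext):
        print(f"{folder}: {bad} is not a {ext} model")
```

Anything the script flags is a model the wrong module will silently ignore.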
 
Sending both the YOLOv5 6.2 version (that shows only yolo, not ipcam-combined, I think) and the .NET version, which looks like nothing is happening in the log?

Attachments: cpserver.png, net server.png, server log.png
 
OK, I can now get both the Object Detection (YOLOv5 6.2) AND the .NET versions working with Mike's models! The trick, for me, was to not have any yolov5l models in the respective custom model folders: just IPcam-combined.pt in the ObjectDetectionYolo\custom-models folder and IPCam-combined.onnx in the ObjectDetectionNet\custom-models folder. It seems having both Mike's and yolov5l in the same custom folders makes CPAI use only the yolov5l model and ignore Mike's, for me anyway. So all good here for now :)
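One way to confirm which custom model the server is actually serving, independent of Blue Iris, is to hit the custom-model route directly. The sketch below only builds the URL that Blue Iris POSTs images to; localhost:32168 is the CodeProject.AI default port (an assumption if you changed it), and sending a real image would need a multipart request on top of this.

```python
SERVER = "http://localhost:32168"  # CodeProject.AI default port (assumption)

def custom_model_url(server, model_name):
    """Build the route used for a named custom model, e.g. ipcam-combined."""
    return f"{server}/v1/vision/custom/{model_name}"

# If the model file is missing from the module's custom-models folder,
# a POST to this URL returns an error rather than a predictions array.
print(custom_model_url(SERVER, "ipcam-combined"))
```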
 
Can't get GPU mode to work on an Intel UHD 630 GPU. No matter how many times I click Enable GPU, it displays CPU.
Shouldn't it work with embedded Intel GPUs?
 
You need to use the Object Detection (YOLOv5 .NET) module for Intel GPU support; make sure that Object Detection (YOLOv5 6.2) is disabled. One note: currently, custom models do not work with Blue Iris.

 
I have done that, and I'm looking in the code to see if I can get any wiser about the problem. So far it seems that CPAI_MODULE_SUPPORT_GPU should be True, but it's False.
What happens if you temporarily disable the Remote Display Adapter and restart the CodeProject.AI service? Does it work then?
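The CPAI_MODULE_SUPPORT_GPU value mentioned above arrives as an environment-variable string, so one quick check is to read what your process is actually getting. The variable name is taken from the post; how CodeProject.AI parses it internally is an assumption, so the truthy-string logic below is just a plausible sketch.

```python
import os

def flag_is_true(value):
    """Interpret an environment-variable string as a boolean flag (assumed parsing)."""
    return (value or "").strip().lower() in ("true", "1", "yes")

# A Windows service captures its environment at start, so restart the
# CodeProject.AI service after changing anything before re-checking.
raw = os.environ.get("CPAI_MODULE_SUPPORT_GPU")
print("CPAI_MODULE_SUPPORT_GPU =", raw, "->", flag_is_true(raw))
```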
 
So I just tried the YOLOv5 .NET module. The first time, I did not have Use CUDA checked in the BI AI settings and I would get this:

Screenshot 2023-01-19 163944.png, Screenshot 2023-01-19 163801.png

Then if I checked Use CUDA in the BI AI settings I would get this:

Screenshot 2023-01-19 164137.png, Screenshot 2023-01-19 164105.png

So do we need to have CUDA checked? This is using an Intel embedded 770 GPU. I do think it is working, as it bumps up my GPU usage.
 
Thanks for testing. I can get GPU mode enabled by enabling CUDA in BI. But why is it controlled by BI and not the program itself? Or both :)
It says GPU (DirectML) now, but I don't see any GPU usage, and response times are the same as using the CPU.

So either it's not working, or the Intel GPU, in my case the Intel UHD 630 on a 6-core i5-8500T CPU, is not any faster than using CPU mode.

@Tinman Do you see any difference in using the CPU or the Intel GPU? What kind of response times do you get?
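Comparing CPU and GPU (DirectML) response times is easier with a small timing harness than by eyeballing the log. The sketch below is generic; `post_image` in the usage comment is a hypothetical helper standing in for whatever request code you use to send a frame to the server.

```python
import time

def time_calls(fn, n=10):
    """Call fn() n times; return (best, mean) latency in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), sum(samples) / len(samples)

# Usage sketch: time the same detection request once with the module in
# CPU mode and once in GPU (DirectML) mode, then compare:
#   best, mean = time_calls(lambda: post_image("test.jpg"), n=20)
# Task Manager GPU usage alone can mislead; latency is the number that
# matters for Blue Iris alerts.
```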
 

I'm pretty sure it was using my GPU according to Task Manager, but the times were not much different. However, I was using Mike's ip-combined custom model on the CPU, whereas in .NET mode it is using the default model, which I'd think would yield a slower time, so the GPU may be helping. Hopefully once BI and CP get all the bugs out we will have more of an idea. As far as the CUDA thing goes, I'm stumped on why that has to be checked?? On my main BI machine I am still using CPU mode and it works VERY well for me; I just wanted to see all our options.
 

Attachments: Screenshot 2023-01-19 182318.png
So I decided to compare some detection times of an Intel HD 530 (i7-6700K) vs an Intel 770 HD (i7-12000) using CPAI's YOLOv5 .NET in GPU (DirectML) mode.
 

Attachments: Screenshot 2023-01-20 095809.png, Screenshot 2023-01-20 095423.png