CodeProject.AI Version 2.0

I have a similar result with my intel based bare-bone Windows 11 build @Pentagano .
.net lets me select GPU (built in intel gpu) but 6.2 bounces back to CPU only.
Ah ok - there must be a solution somewhere. Mine worked best on 6.2 before I virtualised Windows; my setup is now AMD-based Proxmox.
I ran the cuDNN .bat file thingy as well, and followed the same installation steps as on my original Windows install.
 
It is my understanding that YOLOv5 6.2 does not support integrated Intel GPUs, as they are accessed differently - I think via DirectX or something along those lines. So you can only utilize the Intel GPU with the .NET module.
 
Any ideas why 6.2 will not work with my GPU (GTX 970) now?
It works fine on .NET. This started when I moved to a virtualised Windows environment.
Am I missing some files/programs for CUDA to work?

GPU passthrough is fine

View attachments 170235, 170237, 170236
I can't say why exactly it is not working, but there are many layers to successfully passing a GPU through into a virtual machine AND having it actually work. Getting it to show up is one thing; getting it to function properly is another, especially with Windows as the VM OS. Nvidia used to purposely prevent their consumer GPUs (which would include the GTX cards) from working in a virtualised environment via something in the driver.

I managed to get it to work in a VM, but I am running a Proxmox server (a bare-metal Linux-based hypervisor) and the VM is also a Linux OS running the official CPAI docker container (so I also had to contend with passing the GPU through to the container). Furthermore, my GPU is a workstation GPU (an older Quadro P600), and as I understand it Nvidia does not prevent their workstation GPUs from working in a virtualised environment. While I have not tested this myself, I believe most hypervisors can tell the guest OS that it is NOT a virtual machine, and that might help. That said, it can be an enormous undertaking to get everything working, and I have not done it myself in Windows.

That all said, if .NET is working with the GPU (via DirectX, probably) and you are getting 52 ms detection times, that is very good IMO, and YOLOv5 6.2 probably won't improve on that time. Detection accuracy between the models can vary, of course.
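For what it's worth, on Proxmox the usual way to hide the hypervisor from the guest (the trick that let older Nvidia consumer drivers load inside a VM) is a couple of lines in the VM config. A sketch only, assuming Proxmox VE with IOMMU/passthrough already set up; the VM ID and PCI address below are placeholders:

```ini
# /etc/pve/qemu-server/<vmid>.conf  (VM ID and PCI address are placeholders)
# hidden=1 masks the KVM hypervisor flag so the guest driver doesn't
# see it is running in a VM (Nvidia lifted this check around driver R465)
cpu: host,hidden=1,flags=+pcid
machine: q35
# the passed-through GPU (requires IOMMU enabled in BIOS and kernel)
hostpci0: 01:00,pcie=1,x-vga=1
```

Whether this helps with a GTX 970 under a Windows guest depends on the driver version in use, so treat it as something to try rather than a known fix.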
 
Mine is AMD-based (Ryzen 4300G) with Proxmox.
As mentioned, YOLOv5 6.2 worked fine on the same CPU when Windows ran directly on the hardware.
Yep, at least it's working on .NET, but I will look into it further.
 
I keep seeing this over and over when running a Coral TPU in Windows 10. I can't figure out why. Anyone else have this issue?

03:42:34 objectdetection_coral_adapter.py: Refreshing the Tensorflow Interpreter
03:42:34 objectdetection_coral_adapter.py: Timeout connecting to the server
03:42:39 objectdetection_coral_adapter.py: Timeout connecting to the server

05:56:39 objectdetection_coral_adapter.py: Refreshing the Tensorflow Interpreter
05:56:39 objectdetection_coral_adapter.py: Timeout connecting to the server
 
Hey, recently I also wondered where the real difference is... but strictly speaking, we're comparing two different models.

The Coral module is filled with items that are unnecessary for us, so it probably takes longer. Mike's optimized modules really focus on the essentials and naturally improve the processing times.

With Mike's modules, my CPU takes about 300 ms per detection, while my Coral with the standard modules takes about 200-250 ms.

If customized modules for the Coral come out, the times should improve significantly.

What do others think about this?

I'm sure Mike said to expect a modest improvement, although equally I'm sure someone on the CPAI forums claimed a 7 ms time.

The Edge should be faster as it's a dual TPU, so double the processing power (think RAID), and the method of communication may have some bearing. E.g., in theory at least, M.2 should be quickest, with CPU-attached M.2 the fastest, closely followed by chipset M.2; after that probably PCIe, with USB the slowest. At least that would be my expectation - whether reality lives up to it is another matter.

I'm going to try M.2 (CPU-attached) as soon as I have the time to install the software and give it a go, seeing as no-one seems able to answer exactly what software needs to be installed alongside the CPAI module. ATM I'm assuming the EdgeTPU Runtime package, which includes the edgetpu runtime and the Apex drivers.

The items I'm not sure about are TensorFlow Lite and PyCoral, as I think the equivalents may be in the CPAI module download. Unsure though, and Mike doesn't seem to be around. Maybe on holiday?
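For reference, Coral's getting-started docs for Windows boil down to roughly the following. A sketch only - the runtime zip file name is from the 2022 release and may have changed, so grab the current one from coral.ai/software first; the PyCoral step may well be redundant if the CPAI module ships its own Python environment:

```
:: run in an Administrator command prompt after downloading the runtime zip
tar -xf edgetpu_runtime_20221024.zip
cd edgetpu_runtime
install.bat   :: installs libedgetpu plus the Apex (M.2/PCIe) drivers

:: PyCoral pulls in a matching tflite-runtime as a dependency
python -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
```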
 
I keep seeing this over and over when running Coral TPU in Windows 10. I can't figure out why. Anyone else have the issue?

Reported by others on the CPAI forum. Not seen a solution or cause mentioned yet.
 
Ordered an M.2 Coral today. Should be here tomorrow, hopefully.
I need to update BI and CPAI before I get started. I am going to try moving my BI to a Windows 10 VM on unRAID first. Should I run CPAI on the Windows 10 VM, or in Docker via unRAID?

Anything else I should know about getting the Coral set up with CPAI?
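One quick check worth doing before enabling the CPAI Coral module is confirming the OS can actually see the M.2 Coral at all. A hedged sketch, assuming the pycoral package and Edge TPU runtime/drivers are installed:

```python
# Sanity check: list Edge TPUs visible to the runtime before pointing
# CPAI at them. Prints an empty list if the driver isn't loaded.
try:
    from pycoral.utils.edgetpu import list_edge_tpus  # part of pycoral 2.x
    tpus = list_edge_tpus()  # e.g. [{'type': 'pci', 'path': '/dev/apex_0'}]
    if tpus:
        print(f"Edge TPUs found: {tpus}")
    else:
        print("No Edge TPU detected - check the runtime/driver install")
except ImportError:
    print("pycoral is not installed - install the Edge TPU runtime and PyCoral first")
```

If this shows nothing, no amount of CPAI configuration will help, so it separates driver problems from module problems.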
 
I'm back to running the CPU version ATM. I had a few issues setting the Edge up, and it seems a few people are complaining of issues over on the CPAI forums. For me, analysis times are the same if not longer than with the CPU, and some detections get missed because they get queued and time out. Obviously there's an issue somewhere, as I'm pretty sure I saw processing times of 6 ms quoted somewhere, albeit not with CCTV / BI.
 
How was your edge set up?
 
Anyone want to share their settings? I have an i5-6500. Which module should I use, and which models/settings should I use in BI? Any help would be appreciated.
 
As above, not good. I reverted to CPU for now. Timings were little different to CPU, and some detections got missed as they got queued and timed out. There are threads on the CPAI forums about the issues.

There is a specific model for use with the Coral. That's the only one to use.
 
Ah, gotcha. I meant settings for using my i5-6500 for the time being. It's been a while since I updated BI/CPAI, and what people use for settings seems to change a lot.
 
I can see when I get home tonight, if no-one beats me to it. There's not much to adjust, is there? I know I run medium for model size, and my sub stream is 1080p, although I believe something like 640 is recommended for faster processing.
 
I'd appreciate it. I know some people have shared pictures of their settings and which custom models to use.
 

Is it possible to have multiple modules enabled, i.e. Object Detection (YOLOv5 .NET) and ObjectDetection (Coral)? How do CPAI/Blue Iris behave when you do this?
 
I'm trying to get CPAI working fully in a Debian docker container within Proxmox. The service is running, BUT when I try to analyse images they just sit in the 'objectdetection_queue' queue.

I've tried tweaking some settings, like switching the network from bridge to host, etc.

08:40:08:Started Object Detection (YOLOv5 .NET) module
08:40:08:ObjectDetectionNet.dll: Application started. Press Ctrl+C to shut down.
08:40:08:ObjectDetectionNet.dll: Hosting environment: Production
08:40:08:ObjectDetectionNet.dll: Content root path: /app/preinstalled-modules/ObjectDetectionNet
08:40:09:ObjectDetectionNet.dll: Please ensure you don't enable this module along side any other Object Detection module using the 'vision/detection' route and 'objectdetection_queue' queue (eg. ObjectDetectionYolo). There will be conflicts
08:40:09:ObjectDetectionNet.dll: CodeProject.AI.Modules.ObjectDetection.Yolo_ObjectDetector[0]
08:40:09:Object Detection (YOLOv5 .NET): Object Detection (YOLOv5 .NET) module started.
08:40:11:Server: This is the latest version
08:40:11:Current Version is 2.1.11-Beta
08:40:40:Client request 'list-custom' in queue 'objectdetection_queue' (...bf51d5)
08:40:40:Client request 'list-custom' in queue 'objectdetection_queue' (...a3ea6f)
08:40:55:Client request 'custom' in queue 'objectdetection_queue' (...88ef32)
08:41:02:Client request 'detect' in queue 'objectdetection_queue' (...59f4f8)




08:40:08:Module 'Object Detection (YOLOv5 .NET)' (ID: ObjectDetectionNet)
08:40:08:Module Path: /app/preinstalled-modules/ObjectDetectionNet
08:40:08:AutoStart: True
08:40:08:Queue: objectdetection_queue
08:40:08:platforms: windows,linux,linux-arm64,macos,macos-arm64
08:40:08:GPU: Support enabled
08:40:08:parallelism: 0
08:40:08:Accelerator:
08:40:08:Half Precis.: enable
08:40:08:Runtime: dotnet
08:40:08:Runtime Loc: Shared
08:40:08:FilePath: ObjectDetectionNet.dll
08:40:08:pre installed: True
08:40:08:Start pause: 1 sec
08:40:08:LogVerbosity:
08:40:08:Valid: True
08:40:08:Environment Variables
08:40:08:CUSTOM_MODELS_DIR = %CURRENT_MODULE_PATH%\custom-models
08:40:08:MODELS_DIR = %CURRENT_MODULE_PATH%\assets
08:40:08:MODEL_SIZE = MEDIUM


Any ideas?


Update: I got it working with Object Detection (YOLOv5 6.2) but not .NET, which is odd, since I installed the .NET module etc.
OK, well, it's working at least, but it looks like some libraries are possibly missing for the .NET module to work?

Thanks
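For comparison, the baseline Docker invocation from the CodeProject.AI docs of that era looked roughly like this - a sketch, with the image name and port being the documented defaults, so check the current docs before relying on it:

```shell
# CPU-only image; 32168 is the default CPAI server port
docker run -d --name CodeProject.AI -p 32168:32168 codeproject/ai-server
```

With bridge networking the `-p` mapping is required; with `--network host` it isn't. Blue Iris (or any client) then points at `http://<docker-host>:32168`. Note also the warning in the log above: two modules sharing the `objectdetection_queue` route (e.g. the .NET and 6.2 modules both enabled) will conflict, which can leave requests sitting in the queue.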
 
Last edited:
I see in the CodeProject.AI dashboard there is a module called:

"ObjectDetection (Coral), 1.3, 2023-08-11, Available, Apache-2.0 - The object detection module uses the Coral TPU to locate and classify the objects the models have been trained on."

Is this meant for Coral TPUs?
 
As I understand it, yes. Unfortunately there are issues, with several users reporting problems. I'm sure it's all fixable, but some feedback as to what's happening investigation- and fix-wise would be nice.

For me there was little difference in detection times, despite 2 x TPU units being able to process in parallel (with even one supposed to be quick), and there were instances of object detections taking tens of seconds (not milliseconds!). At the same time, object detections were sometimes being queued and timed out.

I also found some error messages during installation that suggest the components may not always be installed correctly.

If I remember rightly, I've seen examples of processing at 6 ms with a single TPU - from Google, I think, although I can't remember if it was CCTV-related. For comparison, I was seeing around 200 ms with dual TPUs, and some detections as high as 30,000 ms+ that timed out, which also caused other event triggers to be queued and discarded because the resource was too busy.

For me ATM, as I said above, I've reverted to CPU simply because I don't want to risk missed events due to timeouts.
 